Financial services is the sector where production AI safety matters most and moves fastest. The regulatory obligations are the most developed of any industry. The blast radius of a failure is measured in fines, licence conditions, and reputational damage that takes years to recover from. This playbook maps PSF requirements to the specific obligations financial services firms face.
Most industries deploying production AI face reputational risk if something goes wrong. Financial services firms face that — plus regulatory censure, civil liability, potential licence revocation, and personal accountability for named individuals under senior manager regimes. The regulatory environment also moves quickly: the EU AI Act, DORA, and various national guidance documents on model risk have all entered or are entering force simultaneously.
The good news: financial services has the most developed internal governance infrastructure of any sector. Model Risk Management (MRM) frameworks, change management processes, and audit trails already exist. The PSF maps well onto existing MRM practice — it is not a parallel system, but a specification of what "model risk management for LLMs" looks like in practice.
The key regulations and their primary PSF domain touchpoints, walked through one domain at a time:
Financial services AI systems routinely receive inputs that include account numbers, transaction data, client PII, and potentially price-sensitive information. Every input path must be classified, sanitised, and audited. Prompt injection is a real threat in customer-facing applications — a compromised advisory bot could be manipulated to provide unsuitable advice or disclose information.
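A minimal sketch of what classify-sanitise-audit can look like at the input boundary. The classification tiers, regex patterns, and `audit_log` sink are illustrative assumptions, not PSF identifiers; a real deployment would draw on the firm's own data-classification catalogue rather than ad-hoc patterns.

```python
import hashlib
import json
import logging
import re
from datetime import datetime, timezone

audit_log = logging.getLogger("ai.input_audit")

# Illustrative patterns only; production systems should use the firm's
# approved data-classification tooling, not hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def classify_and_audit(channel: str, text: str) -> dict:
    """Classify an inbound prompt, redact known sensitive tokens, and
    emit a structured audit record before the model ever sees it."""
    hits = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]
    sanitised = text
    for name in hits:
        sanitised = SENSITIVE_PATTERNS[name].sub(f"[REDACTED:{name}]", sanitised)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "channel": channel,
        "classification": "restricted" if hits else "internal",
        "sensitive_types": hits,
        # The audit trail stores a hash, never the raw input.
        "input_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(record))
    return {"sanitised": sanitised, "record": record}
```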
Under MiFID II, SR 11-7, and FCA SYSC, firms must be able to explain automated decisions. An AI output that is not validated against a defined schema is not explainable — it may be anything. For any output that influences a customer communication, a risk calculation, or a regulatory report, output validation is not optional.
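One way to make "validated against a defined schema" concrete, sketched here with pydantic. The `AdvisoryOutput` field names and bounds are assumptions for illustration, not a regulatory template; the point is that anything the model emits either parses into a declared shape or never reaches the customer.

```python
from pydantic import BaseModel, Field, ValidationError

class AdvisoryOutput(BaseModel):
    """Illustrative schema for an advisory-style response."""
    recommendation: str = Field(min_length=1, max_length=2000)
    risk_rating: int = Field(ge=1, le=5)
    suitability_basis: str  # the "why" a reviewer or auditor can inspect

def validate_model_output(raw_json: str) -> AdvisoryOutput | None:
    """Reject any output that does not parse into the declared schema."""
    try:
        return AdvisoryOutput.model_validate_json(raw_json)
    except ValidationError:
        return None  # route to fallback or human review, never onward
```

A `None` here is a validation failure event in its own right: it should be logged, counted against the model's error budget, and routed to a fallback path, not silently retried.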
Financial data is among the most sensitive categories of personal data under GDPR. Prompt inputs that contain account details, credit information, or transaction histories must be handled as personal data throughout the AI pipeline. Many financial services firms have additional obligations under local data protection regimes beyond GDPR.
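A sketch of one common mitigation, keyed pseudonymisation, so raw identifiers never enter the prompt or the provider's logs. The key handling and the `ACCT_` token format are assumptions for the sketch; in production the key lives in an HSM or secrets manager, never in code.

```python
import hashlib
import hmac

# Assumption for the sketch only: the real key is fetched from managed
# secrets infrastructure, not hard-coded.
PSEUDO_KEY = b"replace-with-managed-key"

def pseudonymise(value: str) -> str:
    """Deterministic, keyed pseudonym: the same account number always
    maps to the same token, so downstream joins still work, but the
    raw value never reaches the AI pipeline or a third-party provider."""
    digest = hmac.new(PSEUDO_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"ACCT_{digest[:12]}"

def prepare_prompt(template: str, account_number: str) -> str:
    # Hypothetical template with an {account} placeholder.
    return template.format(account=pseudonymise(account_number))
```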
DORA requires ICT incident detection and reporting within defined timeframes. BCBS 239 requires data quality in risk calculations. SR 11-7 requires ongoing monitoring of deployed models. All of these obligations require structured, queryable observability — not log files.
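What "structured, queryable observability" can look like per inference. The field set below is an assumed minimum for reconstructing a DORA incident timeline or an SR 11-7 monitoring view; real deployments will extend it.

```python
import json
import time
import uuid

def emit_inference_event(sink, *, model_id: str, prompt_sha256: str,
                         latency_ms: float, validation_passed: bool,
                         reviewer: str | None = None) -> dict:
    """One structured record per inference, written to a queryable
    sink (event bus, log shipper, warehouse loader) rather than a
    free-text log file."""
    event = {
        "event_id": str(uuid.uuid4()),
        "ts_epoch_ms": int(time.time() * 1000),
        "model_id": model_id,
        "prompt_sha256": prompt_sha256,
        "latency_ms": latency_ms,
        "validation_passed": validation_passed,
        "reviewer": reviewer,
    }
    sink.write(json.dumps(event) + "\n")
    return event
```

The difference from log files is that every field is a column you can aggregate: error rates per model version, review rates per use case, incident timelines within DORA's reporting windows.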
Financial services change management processes already require staged rollouts, rollback plans, and sign-off gates. The PSF D5 requirements map directly onto existing change management practice — the question is whether AI deployments are going through the same process as other system changes.
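A sketch of rollout gates expressed as data, so an AI deployment rides the same rails as any other system change. The stage names, traffic percentages, thresholds, and sign-off roles are assumptions to be replaced by the firm's existing change-management parameters.

```python
from dataclasses import dataclass

@dataclass
class RolloutStage:
    name: str
    traffic_pct: int
    max_error_rate: float   # rollback threshold for this stage
    signoff_role: str       # named approver required to advance

# Illustrative gate sequence; every value here is an assumption.
STAGES = [
    RolloutStage("shadow",    0, 0.00, "model_owner"),
    RolloutStage("canary",    5, 0.01, "model_risk"),
    RolloutStage("partial",  25, 0.01, "model_risk"),
    RolloutStage("full",    100, 0.02, "accountable_senior_manager"),
]

def may_advance(stage: RolloutStage, observed_error_rate: float,
                approvals: set[str]) -> bool:
    """Advance only if the stage's error budget holds and the required
    sign-off is recorded; otherwise the runbook says roll back."""
    return (observed_error_rate <= stage.max_error_rate
            and stage.signoff_role in approvals)
```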
The FCA, PRA, and SEC are increasingly clear: senior manager accountability requires that a named individual can be identified as responsible for AI system behaviour. Human oversight is not just good practice — it is the mechanism by which regulatory accountability is maintained. For high-risk use cases (credit decisions, investment advice, fraud flags), human review is likely mandatory.
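A minimal routing sketch showing how mandatory review for high-risk use cases and named accountability can be enforced in code rather than policy alone. The use-case list and the 0.9 confidence threshold are illustrative assumptions; the real list belongs to the firm's risk function.

```python
from enum import Enum

class Decision(Enum):
    AUTO_RELEASE = "auto_release"
    HUMAN_REVIEW = "human_review"

# Use cases the text flags as likely requiring mandatory review.
HIGH_RISK_USE_CASES = {"credit_decision", "investment_advice", "fraud_flag"}

def route_output(use_case: str, confidence: float,
                 accountable_owner: str) -> dict:
    """Every output carries a named accountable individual; high-risk
    use cases go to a reviewer regardless of model confidence."""
    if use_case in HIGH_RISK_USE_CASES or confidence < 0.9:
        decision = Decision.HUMAN_REVIEW
    else:
        decision = Decision.AUTO_RELEASE
    return {"decision": decision.value, "accountable_owner": accountable_owner}
```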
DORA explicitly addresses ICT security risks including third-party AI providers. Financial services firms must assess the security posture of AI providers as part of their third-party risk management framework.
DORA requires documented ICT third-party risk management and concentration risk assessment. A financial services firm that is dependent on a single LLM provider for critical processes has a concentration risk that must be managed and reported.
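Reporting concentration risk is the obligation; an exercised failover path is one technical mitigation alongside it. A sketch, assuming each vendor's client is wrapped in a plain callable rather than tied to any specific SDK:

```python
from typing import Callable, Sequence

class AllProvidersFailed(RuntimeError):
    pass

def complete_with_failover(providers: Sequence[tuple[str, Callable[[str], str]]],
                           prompt: str) -> tuple[str, str]:
    """Try providers in priority order and record which one answered.
    `providers` is a list of (name, call) pairs; the wrapping of each
    vendor client behind a callable is an assumption for this sketch."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # vendor SDKs raise their own types
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))
```

A failover path that has never been exercised is a documented risk, not a mitigated one; the returned provider name feeds the observability events above so actual usage mix is measurable.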
A PSF-compliant baseline stack for a regulated financial services AI deployment touches every one of the eight domains above.
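One illustrative shape for that stack, tying each component back to a domain covered above; every component choice here is an assumption to be swapped for the firm's approved tooling.

```python
# Illustrative only: one way to lay the eight domains onto components.
BASELINE_STACK = {
    "input_handling":    "gateway with classification, sanitisation, audit hashing",
    "output_validation": "schema validation (e.g. pydantic) before any release",
    "data_protection":   "keyed pseudonymisation of PII before the pipeline",
    "observability":     "structured inference events into a queryable store",
    "deployment":        "staged rollout gates wired into existing change management",
    "human_oversight":   "review queue with a named accountable owner per use case",
    "provider_security": "vendor assessment inside the third-party risk framework",
    "concentration":     "secondary provider with an exercised failover path",
}
```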
The AIDA examination tests applied PSF knowledge across all eight domains: exactly the obligations this playbook walks through. 15 minutes. No charge. Ever.