Governance · 2026-05-09 · 14 min read

AI Governance Frameworks Compared: PSF vs ISO 42001 vs NIST AI RMF

Three frameworks dominate AI governance conversations in 2026. Each was designed with a different primary user in mind. Understanding which one fits your context — and how they interact — is the starting point for a credible AI governance programme.

Production AI Institute · 2026-05-09

Licensed CC BY 4.0

Summary

ISO 42001 is a management system standard — it certifies that your organisation has a documented AI governance process. NIST AI RMF is a voluntary risk management guideline with no certification path. The PSF is an operational technical standard — it specifies what controls a production AI system must have. Most mature organisations use all three: ISO 42001 for governance accountability, NIST for risk mapping, and PSF for deployment gates.

Why Three Frameworks?

AI governance frameworks proliferated because the problem space is genuinely multi-dimensional. You need governance at the organisational level (who is accountable, how decisions are made), at the risk management level (how you identify and treat AI risks), and at the technical deployment level (what controls a system must have before it goes live). No single framework covers all three layers with equal depth.

The result is that most serious AI governance programmes borrow from multiple frameworks. The question is not "which framework should we use?" but "what does each framework contribute, and how do they fit together?"

Framework Profiles

ISO/IEC 42001:2023 — AI Management System Standard

ISO 42001 is a management system standard, structurally similar to ISO 27001 (information security) and ISO 9001 (quality management). It specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS). It is auditable, certifiable by accredited third parties, and carries recognised international credibility.

What it covers: Organisational context, leadership commitment, AI policy, planning (risk and opportunity), AI risk management process, AI impact assessment, AI system lifecycle requirements, documentation and records, internal audit, management review, and continual improvement. It also includes normative annexes covering AI system categories, potential AI use cases, and controls.

What it does not cover: Specific technical controls at the system level. ISO 42001 does not tell you how to validate an LLM output, structure a RAG pipeline, or implement a circuit breaker. It requires that you have a documented process for managing these concerns — it does not prescribe what those processes look like technically. This is by design: management system standards are intentionally technology-agnostic to remain durable across rapidly evolving fields.

Who it is for: Organisations seeking third-party certification of their AI governance maturity. Common in regulated industries (financial services, healthcare, critical infrastructure) where procurement teams or regulators want auditable evidence of governance. Also relevant for enterprises that want a board-level governance artefact.

Certification path: Yes — certification by accredited ISO conformity assessment bodies. Typically requires a Stage 1 documentation review and Stage 2 on-site audit, followed by annual surveillance audits.

NIST AI Risk Management Framework (AI RMF 1.0)

The NIST AI RMF, published in January 2023, is a voluntary guidance document from the US National Institute of Standards and Technology. It organises AI risk management around four core functions: GOVERN, MAP, MEASURE, and MANAGE. Unlike ISO 42001, it is not a certifiable standard — there is no accreditation body and no conformity assessment process.

What it covers: The GOVERN function addresses organisational policies, culture, and accountability structures. MAP identifies AI risks in context. MEASURE develops metrics for tracking risk. MANAGE implements risk treatments. The framework also includes a Playbook (AI RMF Playbook) with suggested actions for each subcategory.

What it does not cover: Like ISO 42001, it stops short of technical implementation specifics, and it has no certification mechanism. Some practitioners find the NIST AI RMF too abstract for operational use: it describes what to think about, not what to do. The framework is also US-centric in its regulatory references, though its core structure is internationally applicable.

Who it is for: US federal agencies and contractors (where it is increasingly expected), and US enterprises that want a government-aligned risk vocabulary. The GOVERN function is useful for organisations building their first AI governance committee — it provides a clear starting structure without the overhead of ISO certification.

Certification path: None. Some vendors offer NIST AI RMF alignment assessments, but these are not standardised and carry no official credential.

Production Safety Framework (PSF)

The PSF is a technical operational standard developed by the Production AI Institute. It specifies eight domains of controls that a production AI system must implement before and during deployment. Unlike the governance-layer frameworks above, the PSF operates at the system level — it tells you specifically what your LLM pipeline, agent, or AI feature must do, not how your organisation should govern it.

What it covers: Eight domains across the full production AI lifecycle:

- D1 Input Governance: prompt validation, injection prevention
- D2 Output Validation: schema validation, safety checks, fallback handling
- D3 Data Protection: PII handling, data minimisation, retention
- D4 Observability: logging, tracing, alerting
- D5 Deployment Safety: feature flags, gradual rollout, rollback
- D6 Human Oversight: escalation paths, confidence thresholds, HITL patterns
- D7 Security: authentication, authorisation, adversarial robustness
- D8 Vendor Resilience: dependency management, SLA monitoring, failover
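To make the "deployment gate" idea concrete, the eight domains can be evaluated as a simple pre-release checklist. The sketch below is purely illustrative — the PSF does not prescribe any code, and the check results here are placeholders a real pipeline would populate from its own test suites:

```python
# Hypothetical sketch of a PSF-style deployment gate.
# Domain names follow D1-D8 above; how each check is computed is up to the team.

DOMAINS = [
    "D1 Input Governance", "D2 Output Validation", "D3 Data Protection",
    "D4 Observability", "D5 Deployment Safety", "D6 Human Oversight",
    "D7 Security", "D8 Vendor Resilience",
]

def deployment_gate(results: dict) -> tuple:
    """Pass only if every domain check passed; report any failures."""
    failures = [d for d in DOMAINS if not results.get(d, False)]
    return (not failures, failures)

# Example: one failing domain blocks the release.
checks = {d: True for d in DOMAINS}
checks["D2 Output Validation"] = False
ok, failed = deployment_gate(checks)
print(ok, failed)  # False ['D2 Output Validation']
```

A gate like this is typically wired into CI/CD so that a release cannot proceed until every domain reports a pass.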

What it does not cover: Organisational governance structures, board-level accountability, or management system requirements. The PSF assumes you have a governance programme in place and focuses on what your systems must do.

Who it is for: Engineering teams, AI practitioners, and technical leads who need to know concretely what a production-ready AI system looks like. Also used as the basis for PAI certifications — the CAIA audit credential certifies the ability to audit AI systems against PAI-8 controls.

Certification path: Yes — PAI offers practitioner certifications aligned to the PSF, including the free AIDA (deployment fundamentals) and the specialist CAIG (governance) and CAIA (audit) credentials.

Side-by-Side Comparison

| Dimension | PSF | ISO 42001 | NIST AI RMF |
| --- | --- | --- | --- |
| Layer | Technical / operational | Organisational / management | Organisational / risk |
| Certifiable | Yes (PAI certs) | Yes (ISO accreditation) | No |
| Technical specificity | High (prescribes controls) | Low (process-agnostic) | Low (guidance only) |
| Regulatory alignment | EU AI Act (technical) | EU AI Act (governance) | US federal agencies |
| Implementation complexity | Medium (engineers implement) | High (requires documented AIMS) | Low (advisory) |
| Best for | Deployment gates, engineering teams | Board-level accountability, procurement | Risk vocabulary, US federal context |
| Freely available | Yes (PAI website) | No (ISO paywall) | Yes (NIST website) |
| Update cadence | Continuous (versioned) | ~5-year ISO review cycle | Ongoing (supplemental profiles) |

Where They Overlap

All three frameworks address risk management, but at different granularities. ISO 42001 requires an AI risk management process (Clause 6.1); NIST AI RMF provides a vocabulary for that process (MAP and MEASURE functions); the PSF provides the technical criteria against which system-level risks are assessed. They are complementary rather than competing.

On human oversight: all three frameworks emphasise that AI systems operating in high-stakes contexts need human review mechanisms. ISO 42001 requires documented escalation policies; NIST AI RMF includes human review in its MANAGE function; the PSF's D6 domain prescribes specific technical patterns (confidence thresholds, HITL queues, override logging).
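The confidence-threshold pattern referenced under the PSF's D6 domain can be sketched in a few lines: outputs below a threshold are routed to a human review queue, and reviewer overrides are logged. The class, field names, and threshold value below are hypothetical, not taken from any of the three frameworks:

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.85  # assumed value; set per system risk profile


@dataclass
class HITLQueue:
    """Holds low-confidence outputs awaiting human review, with an override log."""
    pending: list = field(default_factory=list)
    override_log: list = field(default_factory=list)

    def route(self, output: str, confidence: float) -> str:
        """Auto-approve high-confidence outputs; escalate the rest."""
        if confidence >= REVIEW_THRESHOLD:
            return output
        self.pending.append((output, confidence))
        return "ESCALATED_TO_HUMAN"

    def override(self, reviewer: str, output: str, decision: str) -> None:
        """Record a human reviewer's decision for the audit trail."""
        self.override_log.append(
            {"reviewer": reviewer, "output": output, "decision": decision}
        )


queue = HITLQueue()
print(queue.route("Refund approved", 0.97))  # high confidence, passes through
print(queue.route("Close account", 0.42))    # low confidence, escalated
```

The essential point all three frameworks share is that the escalation decision and the human override must both leave a record, not just the final output.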

On documentation: ISO 42001 is heavily documentation-driven (documented information requirements appear throughout). NIST AI RMF recommends documentation. The PSF requires documentation as evidence of controls — audit trail logs, validation results, incident records — rather than policy documents.
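Evidence-of-controls documentation can be as lightweight as structured, append-only audit records emitted at each control point. The sketch below shows one way to do this; the field names and control labels are illustrative, not mandated by any framework:

```python
import json
import time


def audit_record(control: str, outcome: str, detail: dict) -> str:
    """Emit one audit-trail entry as a JSON line.

    One line per control evaluation gives auditors machine-readable
    evidence (what ran, when, and with what result) rather than
    a policy document asserting that the control exists.
    """
    entry = {
        "ts": time.time(),     # when the control ran
        "control": control,    # e.g. "D2 schema validation" (illustrative label)
        "outcome": outcome,    # "pass" / "fail" / "fallback"
        "detail": detail,      # free-form context for the auditor
    }
    return json.dumps(entry)


line = audit_record(
    "D2 schema validation", "fail", {"error": "missing field 'amount'"}
)
print(line)
```

In practice these lines would be appended to tamper-evident log storage so that validation results and incident records survive as audit evidence.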

EU AI Act Alignment

The EU AI Act (applicable from August 2026 for high-risk systems) does not mandate any specific framework. However, conformity assessment for high-risk AI systems requires demonstrating governance, risk management, technical documentation, and operational controls — areas that map directly across all three frameworks.

ISO 42001 is widely cited as the most direct path to demonstrating the governance layer requirements. The PSF's D1-D8 controls map to the Act's technical requirements for high-risk systems (Articles 9-15). NIST AI RMF is less directly aligned to EU requirements but provides useful risk vocabulary.

For teams deploying into the EU market: ISO 42001 certification covers the organisational accountability layer; PSF compliance addresses the technical deployment layer. These are not redundant — you need both.

How to Choose

If your primary need is third-party credibility for procurement or regulation: ISO 42001 certification is the most recognised international credential. It requires the most investment but produces an auditable artefact that boards and procurement teams understand.

If your primary need is engineering deployment gates: The PSF gives your teams concrete criteria for what "production-ready" means. Start with the free PSF domain guides and use the PSF Compliance Analyzer to assess existing systems.

If your primary need is internal risk vocabulary for a new governance programme: NIST AI RMF is a good starting point. It is free, flexible, and widely understood in enterprise risk management contexts. Use it to build your GOVERN and MAP layers, then layer ISO 42001 if certification is needed.

If you need all three: A practical sequencing is: (1) use NIST AI RMF to build your initial risk identification and governance structure; (2) implement PSF controls at the technical layer as systems move toward production; (3) pursue ISO 42001 certification once your management system is mature enough to sustain an audit.

Certifying Your Team

Framework knowledge is a prerequisite for governance — teams that cannot read these frameworks cannot implement them. The PAI certification stack covers both the governance and technical layers:

Start with the free AIDA exam

The AIDA certification covers production AI deployment fundamentals including PSF framework awareness. It is free, takes 20 minutes, and earns a verifiable credential. A good first step for any practitioner working in this space.


Production AI Institute · productionai.institute · Licensed CC BY 4.0