Production AI Institute — vendor-neutral certification for AI practitioners
PAI-8 · Companion to PSF
Organisational Standard · 2026

PAI-8: The Organisational AI Safety Standard

Where the PSF defines what safe AI deployment looks like at the system level, PAI-8 defines what safe AI governance looks like at the organisational level. Eight controls. Four maturity levels. The governance standard boards and regulators are converging on.

Read the 8 controls
CAIAUD Certification →
← Back to PSF
PAI-8 and PSF are parallel standards, not replacements. PSF (D1–D8) governs technical deployment safety. PAI-8 (C1–C8) governs organisational governance maturity. Both are required for a complete AI safety posture.
Read the PSF →

The 8 PAI-8 Controls

Each control addresses a distinct category of organisational AI risk. Assessment scores each control L0–L3 against the maturity model.

C1
AI Governance

Formal accountability for AI risk at board and executive level, with documented policies, decision gates, and a register of all AI use cases.

Board-level AI risk accountability documented
AI ethics/safety policy published and operationally embedded
AI use case registry maintained
Governance decision gates applied to all new AI deployments
C2
Risk Assessment

Formal AI risk assessment before any new system deployment, with risk tiering, third-party AI inclusion, and regular reassessment as systems change.

Risk assessment required before deployment
Low/medium/high/critical risk tiers documented
Third-party AI included in procurement risk process
Reassessment triggered by material system changes
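A documented tier scheme like the one C2 requires is commonly implemented as a likelihood × impact matrix. The sketch below is one illustrative rubric: the 1–4 scoring scales and the tier boundaries are assumptions for this example, not values prescribed by PAI-8.

```python
def risk_tier(likelihood: int, impact: int) -> str:
    """Map a likelihood x impact score to a documented risk tier.

    Both inputs are scored 1 (lowest) to 4 (highest). Scales and
    boundaries are illustrative, not mandated by PAI-8.
    """
    if not (1 <= likelihood <= 4 and 1 <= impact <= 4):
        raise ValueError("likelihood and impact must each be in 1..4")
    score = likelihood * impact  # 1..16
    if score >= 12:
        return "critical"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"
```

In practice the matrix itself, its scales, and the deployment gates attached to each tier would all be recorded as C2 evidence.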
C3
Data Stewardship

Provenance, consent, and lifecycle management for all data used to train or operate AI systems, including vector databases and fine-tuning datasets.

Training data provenance documented
Consent or lawful basis established for AI data use
PII handling procedures for AI pipelines
Data retention and deletion policies cover AI artefacts
C4
Model Validation

Pre-deployment evaluation against production-representative benchmarks, including bias testing and independent review for high-risk use cases.

Pre-deployment evaluation report required before go-live
Bias and fairness evaluation for high-risk use cases
Independent review for critical AI deployments
Post-deployment performance monitoring in place
C5
Human Oversight

Defined autonomy limits, operational override mechanisms, escalation paths, and documented human intervention capability for all AI systems.

Autonomy scope defined for each AI system
Override mechanism documented and staff trained on use
Escalation path for anomalous AI behaviour
Override events logged and reviewed
C6
Incident Response

AI-specific incident classification taxonomy, severity tiers for AI harms, response runbooks, and post-incident review processes.

AI incident taxonomy separate from generic IT incidents
Response SLAs calibrated to AI harm severity
Post-incident review required for significant AI incidents
Regulatory notification assessment for AI incidents
C7
Audit Trail

Decision-level logging for AI outputs, log retention aligned to challenge windows, and explainability artefacts for high-risk decisions.

Decision-level logging schema captures inputs, model, output, timestamp
Log retention policy covers legal challenge window
Explainability artefacts for high-risk decisions
Immutable audit trail — logs cannot be altered post-hoc
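The C7 criteria above can be sketched as a minimal append-only decision log: each entry captures inputs, model, output, and timestamp, and chains to the hash of the previous entry so that any post-hoc alteration is detectable on verification. The field names and the hash-chain mechanism are illustrative choices for this sketch, not a schema mandated by PAI-8.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionLogEntry:
    """One AI decision record. Field names are illustrative."""
    inputs: dict
    model: str
    output: str
    timestamp: str
    prev_hash: str  # hash of the preceding entry, chaining the trail

    def entry_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class AuditTrail:
    """Append-only trail; editing any past entry breaks the hash chain."""
    def __init__(self) -> None:
        self.entries: list[DecisionLogEntry] = []

    def append(self, inputs: dict, model: str, output: str) -> DecisionLogEntry:
        prev = self.entries[-1].entry_hash() if self.entries else "genesis"
        entry = DecisionLogEntry(
            inputs=inputs,
            model=model,
            output=output,
            timestamp=datetime.now(timezone.utc).isoformat(),
            prev_hash=prev,
        )
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered post-hoc."""
        prev = "genesis"
        for entry in self.entries:
            if entry.prev_hash != prev:
                return False
            prev = entry.entry_hash()
        return True
```

A production trail would additionally write entries to tamper-evident storage and retain them for the legal challenge window, per the retention criterion above.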
C8
Vendor & Supply Chain

Inventory of all third-party AI dependencies, vendor risk assessment, contractual AI continuity protections, and tested fallback capability.

Inventory of all third-party AI components maintained
Vendor risk assessment for all AI service providers
AI safety and continuity requirements in vendor contracts
Continuity plan for critical AI dependencies tested annually

Maturity Levels

Each of the 8 controls is scored L0–L3. A PAI-8 assessment produces a per-control maturity score and an overall governance posture rating.

L0
Unprepared
No formal AI safety controls. AI is deployed without governance, assessment, or documentation. Material risk of regulatory, reputational, or operational harm.
L1
Basic
Controls are documented but inconsistently operational. Policies exist on paper but are not embedded in decisions. Evidence is sparse or absent.
L2
Managed
Controls are operational with documented process, regular cadence, and evidenced application to real decisions. Suitable for most regulated environments.
L3
Optimised
Controls are continuously improved with metrics, benchmarking against sector peers, and board-level governance review. Industry-leading posture.
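The per-control scoring above can be sketched as a small aggregation helper. The weakest-link rule used here, where the lowest-scoring control caps the overall posture rating, is an assumption for this sketch; PAI-8's published rating method may differ.

```python
LEVEL_NAMES = {0: "Unprepared", 1: "Basic", 2: "Managed", 3: "Optimised"}
CONTROLS = [f"C{i}" for i in range(1, 9)]  # C1..C8

def overall_posture(scores: dict[str, int]) -> str:
    """Return an overall rating from per-control L0-L3 scores.

    Assumes a weakest-link aggregation: the lowest control score
    determines the overall governance posture.
    """
    missing = sorted(set(CONTROLS) - set(scores))
    if missing:
        raise ValueError(f"unscored controls: {missing}")
    if any(s not in LEVEL_NAMES for s in scores.values()):
        raise ValueError("each score must be 0, 1, 2 or 3")
    level = min(scores[c] for c in CONTROLS)  # weakest control caps the rating
    return f"L{level} {LEVEL_NAMES[level]}"
```

Under this rule, an organisation with seven controls at L2 and one at L1 rates L1 overall, which mirrors how a single ungoverned control can undermine an otherwise mature posture.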
Not sure where your organisation sits?
Commission a PAI-8 Assessment →

PAI-8 and PSF: Parallel standards

The two frameworks address different layers of AI safety. Neither replaces the other — a complete AI safety posture requires both.

PSF — Production Safety Framework
Technical deployment layer
  • What the system does — inputs, outputs, data, monitoring
  • D1–D8 technical domains with 40 criteria
  • The framework that tools and stacks are assessed against
  • Certifications: AIDA, AIMA, CPAP, CPAA
Read the PSF →
PAI-8 — Organisational Governance Standard
Organisational governance layer
  • What the organisation does — policies, oversight, audit
  • C1–C8 governance controls with L0–L3 maturity scoring
  • The framework that organisations are audited against
  • Certification: CAIAUD (Certified AI Auditor)
CAIAUD Certification →
Domain mapping — PSF ↔ PAI-8
PSF D1 Input Governance → PAI-8 C2 Risk Assessment
PSF D2 Output Validation → PAI-8 C4 Model Validation
PSF D3 Data Protection → PAI-8 C3 Data Stewardship
PSF D4 Observability → PAI-8 C7 Audit Trail
PSF D5 Deployment Safety → PAI-8 C4 Model Validation
PSF D6 Human Oversight → PAI-8 C5 Human Oversight
PSF D7 Security → PAI-8 C6 Incident Response
PSF D8 Vendor Resilience → PAI-8 C8 Vendor & Supply Chain

Work with PAI-8

Certify as an AI auditor, commission an independent assessment, or study the standard before the exam.

CAIAUD — Certified AI Auditor →
Commission an Assessment
Study Materials