PAI publishes original research on production AI deployment safety — incident analysis, regulatory mapping, framework evolution, and practitioner survey data. All publications are freely available.
A survey of production AI deployment practices across 200+ practitioner organisations. Covers incident rates, guardrail adoption, human oversight patterns, and PSF compliance self-assessment. To be published Q1 2026.
Maps PSF domains to EU AI Act obligations for high-risk AI system deployers. Covers conformity assessment requirements, technical documentation standards, and human oversight obligations under Article 14.
Analysis of 47 documented production incidents involving large language models. Identifies common failure modes, root causes across PSF domains, and intervention patterns. Anonymised case data from PAI community members.
Examines what constitutes meaningful human oversight — as distinct from rubber-stamping — in high-stakes AI-assisted decisions. Includes design patterns for effective human checkpoints and common failure modes.
Documents the reasoning behind each PSF domain, the alternatives considered, and how practitioner feedback shaped the framework during the public comment period.
PAI maintains an anonymised incident registry built from contributions by certified practitioners and CAI-recognised organisations. This data informs framework evolution and is the basis for our annual incident pattern analysis.
The PSF is a living standard. Research findings directly inform version updates. Domain weightings, assessment criteria, and coverage areas are reviewed annually against practitioner-reported incident data.
As AI regulation matures globally — EU AI Act, UK AI Safety Institute, US executive orders — PAI publishes guidance mapping PSF compliance to regulatory obligations, so practitioners do not need to do this mapping themselves.
Annual surveys of the PAI practitioner community capture deployment patterns, tool adoption, organisational maturity, and self-assessed PSF compliance. Findings are published openly.
PAI's incident registry is built on anonymised case data contributed by certified practitioners and CAI-recognised organisations. If you have experienced a production AI incident and are willing to share it in anonymised form, we welcome your contribution. All contributors are acknowledged in published analysis.