Production AI Institute · Independent certification for production AI practice
CAIG · Specialist

Study Guide: Certified AI Governance Professional

This guide covers all domains tested in the CAIG examination. Each domain includes key concepts, a worked scenario, and the reasoning approach examiners expect.

Take the exam — $79 →

Exam at a glance

Questions
25 drawn from a 30-question bank
Pass mark
18 correct (72%)
Time limit
45 minutes
Retake cooldown
72 hours
Fee
$79
Credential
Digital certificate + registry listing

Domain 1: EU AI Act & Regulatory Frameworks

~25% of exam

Key Concepts

  • EU AI Act risk tiers (unacceptable, high, limited, minimal)
  • Prohibited AI practices (social scoring, real-time remote biometric identification)
  • High-risk AI system requirements (Annex III)
  • Conformity assessment procedures
  • GPAI (General Purpose AI) model obligations
  • CE marking and EU AI database registration
WORKED SCENARIO 1.1

Classifying a job screening AI under the EU AI Act

Your organisation has built an AI system that scores job applicants and ranks them for human recruiter review. Legal has asked whether this is a 'high-risk AI system' under the EU AI Act. How do you assess this?

Expert Analysis
  • Annex III of the EU AI Act explicitly lists AI systems used in employment and worker management — including CV screening, interview analysis, and candidate ranking — as high-risk.
  • High-risk classification triggers: conformity assessment, technical documentation, data governance requirements, human oversight mechanisms, transparency obligations to candidates, and registration in the EU AI database.
  • The fact that a human recruiter reviews the output does not remove the high-risk classification — the AI is still making consequential recommendations in an employment context.
  • The organisation must conduct a conformity assessment (which can be self-assessed for most employment AI), maintain technical documentation, and implement a risk management system.
Key Lesson: Under the EU AI Act, classification is determined by use case and domain, not by autonomy level. An AI that merely ranks candidates for human review is still high-risk if it operates in an Annex III category.
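The classification logic above can be sketched as a simple triage function. This is an illustrative simplification, not legal text: the category labels and function name are assumptions for the example, and any real classification still needs legal review.

```python
# Illustrative triage sketch of EU AI Act risk-tier classification.
# Category labels are simplified stand-ins, not the Act's wording.

ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "border_control",
    "judicial_administration",
}

PROHIBITED_PRACTICES = {"social_scoring", "realtime_remote_biometric_id"}

def classify(use_case: str, domain: str) -> str:
    """First-pass risk tier for triage; legal review is still required."""
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable"        # absolute ban: no conformity pathway
    if domain in ANNEX_III_DOMAINS:
        return "high"                # conformity assessment, docs, registration
    return "limited_or_minimal"      # transparency duties may still apply

# A CV-ranking tool is high-risk regardless of human review downstream:
print(classify("candidate_ranking", "employment"))  # high
```

Note the order of checks: prohibited practices are tested first because they sit outside the risk-tier pathway entirely, mirroring the point in the exam tips below that no conformity assessment can legalise them.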
📋 Exam Tips for This Domain
  • Memorise the Annex III categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, border control, judicial administration.
  • Know that prohibited practices are absolute — no conformity assessment pathway exists for real-time biometric surveillance or social scoring.
  • GPAI models (like foundation models) carry obligations distinct from those of downstream deployers — expect questions testing this distinction.

Domain 2: NIST AI RMF & ISO 42001

~20% of exam

Key Concepts

  • NIST AI RMF: GOVERN, MAP, MEASURE, MANAGE
  • The role of GOVERN as the foundational function
  • AI Risk Measurement: quantitative vs qualitative approaches
  • ISO/IEC 42001 as a management system standard
  • Risk tolerance and risk appetite in AI governance
  • Continuous improvement in AI governance frameworks
WORKED SCENARIO 2.1

Implementing NIST AI RMF in a financial services firm

A regional bank is deploying an AI credit scoring model and has been asked to demonstrate alignment with the NIST AI RMF. The CTO asks: 'Where do we start?' Walk through the correct implementation sequence.

Expert Analysis
  • GOVERN first: establish the organisational policies, accountability structures, and risk tolerance before any specific system assessment. Who owns AI risk? What is the board's risk appetite?
  • MAP second: contextualise the credit scoring AI — who are the affected populations? What harms could occur? What biases exist in the training data? How does this interact with fair lending regulations (ECOA, Fair Housing Act)?
  • MEASURE third: implement bias testing across protected characteristics (race, gender, age, national origin), accuracy testing, calibration testing, and explainability assessment.
  • MANAGE fourth: document risks, prioritise mitigations, implement monitoring, create an incident response plan. Ongoing: track metrics against risk tolerance.
Key Lesson: The NIST AI RMF is not a checklist — it is a lifecycle process. GOVERN must precede all other functions because without organisational accountability and risk tolerance, measurement and management lack direction.
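The MEASURE step above can be sketched as a disaggregated-metrics pass over outcome records. A minimal sketch, assuming hypothetical record fields (`group`, `approved`) — real bias testing would cover multiple protected characteristics and statistical significance, not just raw rates.

```python
from collections import defaultdict

def disaggregated_rates(records, group_key, outcome_key="approved"):
    """Approval rate per subgroup — the kind of per-group metric the
    MEASURE function calls for (illustrative, not a compliance tool)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        positives[g] += 1 if r[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

applicants = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
]
print(disaggregated_rates(applicants, "group"))  # {'A': 1.0, 'B': 0.5}
```

Tracking these per-group rates over time is also what MANAGE's monitoring step consumes: the metrics are compared against the risk tolerance set in GOVERN.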
📋 Exam Tips for This Domain
  • GOVERN is the most important function in the NIST AI RMF — it sets the foundation for all others. Expect questions about what GOVERN does that MAP/MEASURE/MANAGE do not.
  • ISO 42001 is a management system standard (like ISO 27001 for security) — it specifies how to structure governance processes, not what the technical AI must do.
  • The exam tests application, not memorisation — given a scenario, you should be able to identify which NIST AI RMF function applies.

Domain 3: Model Documentation & Algorithmic Accountability

~20% of exam

Key Concepts

  • Model cards: intended use, performance, limitations, ethical considerations
  • Data sheets for datasets
  • Algorithmic impact assessment (AIA)
  • Transparency obligations to affected individuals
  • Explainability (XAI) in regulated contexts
  • Accountability structures: who is responsible for AI outcomes
WORKED SCENARIO 3.1

Incomplete model card creates liability during regulatory audit

During a regulatory audit of your AI hiring system, the auditor asks for the model card. It exists, but only documents accuracy on the aggregate test set. The auditor asks: 'What is the false positive rate for candidates with disability-related employment gaps?' You do not have this data. What happens next and what should have been done?

Expert Analysis
  • An incomplete model card that only documents aggregate performance without disaggregated subgroup metrics is insufficient for regulatory purposes in high-risk domains.
  • The auditor will likely issue a finding of non-compliance with transparency and documentation requirements.
  • What should have happened: the model card should have included performance disaggregated across all relevant subgroups — including those with non-linear career histories, employment gaps, and protected characteristics.
  • Algorithmic impact assessments should have been conducted during development, not retrospectively.
Key Lesson: Model documentation is not a formality — it is the primary evidence of due diligence. In high-risk AI, documentation gaps become compliance gaps. Performance must be documented disaggregated, not just in aggregate.
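The auditor's question — a subgroup false positive rate — is exactly the kind of figure a disaggregated model card row should contain. A minimal sketch, assuming hypothetical fields (`actual`, `predicted`, `group`); it is not a model-card generator, just the underlying calculation.

```python
from collections import defaultdict

def subgroup_fpr(rows, group_key):
    """FPR = FP / (FP + TN), computed per subgroup — the disaggregated
    figure a model card for a high-risk system should report (sketch)."""
    fp = defaultdict(int)
    tn = defaultdict(int)
    for r in rows:
        if not r["actual"]:              # ground-truth negative cases only
            if r["predicted"]:
                fp[r[group_key]] += 1    # model wrongly flagged
            else:
                tn[r[group_key]] += 1    # model correctly passed
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}

rows = [
    {"group": "A", "actual": False, "predicted": True},
    {"group": "A", "actual": False, "predicted": False},
    {"group": "B", "actual": False, "predicted": False},
    {"group": "B", "actual": True,  "predicted": True},
]
print(subgroup_fpr(rows, "group")["A"])  # 0.5
```

The point of the scenario is that this computation must be run per subgroup during development, when labelled data is at hand, not reconstructed under audit pressure.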
📋 Exam Tips for This Domain
  • Model cards document a specific model. Data sheets document a specific dataset. Know which is which.
  • AIA (Algorithmic Impact Assessment) happens before deployment. Post-deployment audits review whether the AIA was adequate.
  • Explainability questions often ask: 'What would GDPR Article 22 require here?' — know that it requires the right to human intervention, to express a view, and to contest the decision.

Domain 4: Data Governance for AI

~20% of exam

Key Concepts

  • Data lineage: tracking data from source to model output
  • Data minimisation under GDPR
  • Legal basis for training data processing
  • Synthetic data: uses and limitations
  • Data quality standards for AI training
  • Retention policies for AI training data
WORKED SCENARIO 4.1

Training data provenance dispute — public web scraping

Your company has trained a model on web-scraped data. A rights holder claims their copyrighted content is in the training set and demands the model be retrained. Your legal team asks the AI governance lead: 'Do we know what was in the training data?' You do not have comprehensive data lineage records. What are the governance implications?

Expert Analysis
  • Absence of data lineage records is a critical governance failure. You cannot defend against copyright claims, cannot demonstrate GDPR compliance, and cannot assess whether training data contained PII or biased content.
  • Immediate actions: preserve all available metadata about data collection, begin a retrospective lineage reconstruction where possible, consult legal about the copyright claim.
  • Future prevention: implement a data governance policy that requires complete provenance documentation before any dataset can be used for training.
  • This scenario also highlights the GPAI (General Purpose AI) transparency requirements under the EU AI Act — foundation model providers must publish training data summaries.
Key Lesson: Data lineage is not optional for AI governance — it is the evidentiary chain that lets you defend against legal, regulatory, and ethical challenges. Without it, every training decision is unauditable.
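A provenance record of the kind the prevention step calls for can be sketched as a small structured entry logged before any dataset enters training. The field names here are illustrative assumptions, not a standard schema; real lineage systems add far more (transformations, splits, approvals).

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Minimal provenance entry captured before a dataset is used for
    training. Field names are illustrative, not a standard schema."""
    dataset_id: str
    source_url: str
    collected_at: str         # ISO timestamp of collection
    licence: str
    legal_basis: str          # e.g. consent, contract, legitimate interests
    contains_pii: bool
    checksum: str             # ties the record to the exact bytes used

record = LineageRecord(
    dataset_id="crawl-2024-01",
    source_url="https://example.org/corpus",
    collected_at=datetime.now(timezone.utc).isoformat(),
    licence="CC-BY-4.0",
    legal_basis="legitimate_interests",
    contains_pii=False,
    checksum="sha256:deadbeef",   # placeholder digest for illustration
)
print(asdict(record)["dataset_id"])  # crawl-2024-01
```

With records like this on file, the copyright dispute in the scenario becomes answerable: you can say what was collected, from where, under what licence and legal basis.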
📋 Exam Tips for This Domain
  • Expect questions testing whether GDPR's 'legitimate interests' basis can be used alone for training data — it cannot for special category data (Article 9 requires additional conditions).
  • Data minimisation means collecting only what is adequate and relevant — exam questions often present a scenario where more data was collected than needed.
  • Know the difference between the data controller (determines purpose) and the data processor (processes on behalf of controller) — AI vendors are often processors, deployers are controllers.

Domain 5: AI Incident Response & Ethics Governance

~15% of exam

Key Concepts

  • AI incident categories: model failure, bias event, security breach, harmful output at scale
  • Incident response plan components: roles, detection, containment, communication
  • Ethics committee composition and authority
  • Board-level AI risk governance
  • Whistleblower protection in AI governance
  • Regulatory sandbox purpose and scope
WORKED SCENARIO 5.1

Bias incident discovered by internal whistleblower

An employee reports to HR that your AI hiring tool has been systematically rejecting candidates from a particular university — which correlates with a protected characteristic. HR escalates to the AI governance lead. What is the correct sequence of actions?

Expert Analysis
  • First: protect the whistleblower. Governance failures fester in organisations where raising concerns has career consequences.
  • Second: begin an immediate investigation. Pull outcome data disaggregated by the reported characteristic. This should take hours, not weeks.
  • Third: if the data confirms bias, suspend the tool immediately. Do not wait for a remediation plan before suspending.
  • Fourth: notify affected candidates (depending on jurisdiction this may be legally required), notify the board, assess regulatory reporting obligations.
  • Fifth: investigate root cause — was this in the training data, the feature engineering, or the outcome labels? Fix the root cause, not just the symptom.
Key Lesson: Bias incidents are governance incidents first, technical incidents second. The right response sequence is: protect the reporter, investigate promptly, suspend if confirmed, notify, fix root cause.
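The 'investigate promptly' step can be sketched as an adverse-impact ratio check on selection rates. A minimal sketch: the 0.8 threshold is the informal 'four-fifths' screen used in US employment practice, a triage heuristic rather than a legal determination, and the group names are invented for the example.

```python
def adverse_impact_ratio(rates, reference_group):
    """Selection rate of each group divided by the reference group's rate.
    A ratio below 0.8 (the informal 'four-fifths' screen) flags possible
    disparate impact — a triage heuristic, not a legal determination."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical selection rates from the scenario's disaggregated pull:
rates = {"University X": 0.12, "All others": 0.48}
print(adverse_impact_ratio(rates, "All others"))
# {'University X': 0.25, 'All others': 1.0}
```

A ratio this far below 0.8 would confirm the whistleblower's report and trigger the suspension and notification steps above; the root-cause investigation then determines where the disparity entered the pipeline.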
📋 Exam Tips for This Domain
  • Know that the EU AI Act requires high-risk AI providers to report serious incidents to national market surveillance authorities.
  • Ethics committees must have genuine authority and diverse composition — exam questions often present rubber-stamp committees and ask what is wrong with the governance structure.
  • Regulatory sandboxes exist to allow innovation under supervision — they do not waive compliance requirements, they provide structured flexibility.

Ready to sit the examination?

You now have the conceptual foundation. The exam tests applied reasoning — read the scenario carefully and eliminate wrong answers by spotting the flawed assumption.

Purchase Exam Access — $79 →