Production AI Institute — vendor-neutral certification for AI practitioners
AI Incident Registry
High · Technology · 2018 · Amazon

Amazon Recruiting AI Discriminated Against Women

Amazon built an AI recruiting tool trained on a decade of historical hiring data. Because most CVs submitted during that period came from men, the model learned to penalise CVs that contained words like 'women's' (as in 'women's chess club') and downgraded graduates of all-women's colleges. Amazon scrapped the tool.

D3 · Data Protection · D6 · Human Oversight

What happened

Amazon built a machine learning system to automate CV screening, trained on 10 years of submitted CVs and hiring outcomes. The dataset reflected a male-dominated industry: the majority of applications and hires were male. The model generalised this pattern into a bias against female applicants. Amazon discovered in 2015 that the tool was penalising CVs that included the word 'women's' and down-ranking graduates of all-women's colleges. The tool was retrained multiple times but new biases kept emerging. Amazon ultimately scrapped it.

PSF Analysis

How the Production Safety Framework maps to this failure

A textbook D3 failure: the training data encoded historical discrimination, and no pre-deployment bias audit was performed. The D6 gap compounded the harm — automated ranking without human oversight meant the bias propagated at scale before it was detected internally. This case is foundational to understanding that data provenance (D3) is not just about privacy: it includes the fairness and representativeness of data used to train models that affect people.

Controls that would have prevented this

Specific PSF controls mapped to each failure point

1. D3 · Data Protection: Conduct a bias audit on training data before model deployment; specifically, test for protected-attribute proxies.
2. D6 · Human Oversight: Require human review of model outputs for hiring decisions rather than relying on automated ranking alone.
3. D3 · Data Protection: Implement fairness constraints during training, such as equal opportunity or demographic parity checks.
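A protected-attribute proxy audit (control 1) can be as simple as measuring the association between each binary training feature and the protected attribute, and flagging features that correlate strongly. A minimal sketch, in which all record fields, the `phi_coefficient` helper, and the 0.3 threshold are illustrative assumptions rather than anything from the Amazon case:

```python
# Hypothetical sketch: flag training-data features that act as proxies
# for a protected attribute. Field names and threshold are illustrative.
from math import sqrt

def phi_coefficient(x, y):
    """Phi (Pearson) correlation between two binary sequences."""
    n = len(x)
    n11 = sum(1 for a, b in zip(x, y) if a and b)
    n10 = sum(1 for a, b in zip(x, y) if a and not b)
    n01 = sum(1 for a, b in zip(x, y) if not a and b)
    n00 = n - n11 - n10 - n01
    denom = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return 0.0 if denom == 0 else (n11 * n00 - n10 * n01) / denom

def audit_proxies(records, protected_key, feature_keys, threshold=0.3):
    """Return features whose |phi| with the protected attribute meets the threshold."""
    protected = [r[protected_key] for r in records]
    flagged = {}
    for key in feature_keys:
        feature = [r[key] for r in records]
        phi = phi_coefficient(feature, protected)
        if abs(phi) >= threshold:
            flagged[key] = round(phi, 3)
    return flagged
```

In this incident, a feature such as membership in a women's chess club would correlate almost perfectly with gender and be flagged before the model ever saw it.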
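The human-oversight control (control 2) amounts to a design rule: the model may advance candidates on its own, but it may never reject them without a human decision. One way that rule could be sketched, with the `Candidate` type, `triage` function, and threshold all hypothetical:

```python
# Hypothetical sketch of a human-oversight gate: the model can only
# *advance* candidates; everyone else goes to a human review queue
# rather than being auto-rejected. Names and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    model_score: float

def triage(candidates, shortlist_threshold=0.8):
    """Split candidates into an auto-shortlist and a human-review queue."""
    shortlist, review_queue = [], []
    for c in candidates:
        if c.model_score >= shortlist_threshold:
            shortlist.append(c)
        else:
            review_queue.append(c)  # no automated rejection path exists
    return shortlist, review_queue
```

The design choice matters more than the code: because no code path discards a candidate, a biased score can delay a CV but cannot silently remove it, which is exactly the failure mode here.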
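The demographic parity check (control 3) can be monitored on model outputs with per-group selection rates, for example against the widely used "four-fifths" rule of thumb. A minimal sketch, assuming hypothetical group labels and decision records:

```python
# Hypothetical sketch of a demographic-parity check on screening outputs,
# using the four-fifths rule of thumb. All names are illustrative.

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> per-group selection rate."""
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if picked else 0)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions):
    """True if every group's selection rate is at least 80% of the highest rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())
```

Run periodically in production, a check like this would have surfaced the skewed ranking long before a manual discovery in 2015.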

Outcome

System scrapped. Reuters reported the story in October 2018, generating significant regulatory attention and public scrutiny of AI in hiring. The EU AI Act now classifies AI systems used in hiring as high-risk.

bias · hiring · training-data · discrimination · fairness

Related incidents

Critical · 2018
Uber Self-Driving Car Kills Pedestrian in Arizona
D6 · D5
Medium · 2022
GitHub Copilot Reproduced Licensed Code Verbatim
D2 · D3
High · 2023
Italy Bans ChatGPT Over GDPR Violations
D3
NEXT STEP

Prove you understand how to prevent failures like this

The AIDA exam tests PSF knowledge across all 8 domains. Free to take, immediately verifiable.

Take the AIDA exam →
← All incidents