AI Incident Registry
Critical · Healthcare · 2019 · Optum (UnitedHealth)

Optum Healthcare Algorithm Systematically Underprovided Care to Black Patients

A widely used healthcare risk algorithm sold by Optum used healthcare spending as a proxy for health need. Because structural inequalities mean Black patients have historically spent less on healthcare than equally sick white patients, the algorithm systematically underestimated the health needs of Black patients, directing care resources toward white patients who were less sick. The study estimated that 11.5 million Black patients in the US were affected.

D3 · Data Protection
D6 · Human Oversight

What happened

Researchers at UC Berkeley published a study in Science in 2019 demonstrating that a commercial risk-stratification algorithm used by US health systems to identify patients needing complex care management contained systematic racial bias. The algorithm used healthcare cost as a proxy for health need. Because of systemic barriers to healthcare access, Black patients historically spent less on healthcare than equally sick white patients. The algorithm therefore concluded that Black patients were healthier than they actually were, allocating care management resources away from them. The algorithm was found to be in use across health systems covering an estimated 200 million people.
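
This failure mode is easy to reproduce. The sketch below is a synthetic simulation, not Optum's actual model: two groups have identical true health need, one group converts less of that need into spending (the 30% suppression is an illustrative assumption), and a cost-based risk score then under-selects that group at any enrollment cutoff.

```python
# Synthetic sketch (not Optum's model): why a cost label under-serves
# a group facing access barriers, even when true health need is equal.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True health need is identically distributed in both groups.
need = rng.gamma(shape=2.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)  # 1 = faces access barriers

# Observed spending tracks need, but the barrier group converts less
# of its need into cost (30% suppression, an illustrative assumption).
access = np.where(group == 1, 0.7, 1.0)
cost = need * access * rng.lognormal(0.0, 0.25, size=n)

# A "risk score" trained on cost reproduces cost; select the top 3%
# for care management (hypothetical cutoff).
selected = cost >= np.quantile(cost, 0.97)

for g in (0, 1):
    in_g = group == g
    sel = selected & in_g
    print(f"group {g}: {sel.sum() / in_g.sum():.2%} selected; "
          f"mean true need of those selected: {need[sel].mean():.2f}")
# Group 1 is selected less often, and those who are selected are
# sicker: the label carries the bias before any model is trained.
```

The within-selection gap in true need mirrors the study's central finding: at the same risk score, Black patients were substantially sicker than white patients.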

PSF Analysis

How the Production Safety Framework maps to this failure

This was a D3 failure at the most foundational level: the training label (healthcare cost) was a biased proxy for the intended outcome (health need). That is a data provenance problem; the label was not fit for purpose, and no bias audit caught the disparity before deployment at scale. D6 compounded the failure. Clinical decisions were made on the basis of algorithm output without structured human review, even though the algorithm operated in a safety-critical domain (healthcare resource allocation).

Controls that would have prevented this

Specific PSF controls mapped to each failure point

1. D3 · Data Protection: Conduct a bias audit of the target variable (healthcare cost) before using it as a proxy for health need; test for correlation with protected attributes (a sketch of such an audit follows this list).
2. D6 · Human Oversight: Require clinical review of algorithmic care recommendations, particularly for populations known to face access barriers.
3. D3 · Data Protection: Replace cost proxies with direct measures of health need where available; document the proxy decision and its known limitations.
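
The audit in control 1 can be approximated with the test the researchers themselves used: hold the score fixed and ask whether a direct measure of health (the study used counts of active chronic conditions) differs by group. A minimal sketch, assuming a patient table with hypothetical column names:

```python
# Sketch of a target-variable bias audit. Column names are hypothetical;
# the test follows the Science study: at equal score, equal need?
import pandas as pd

def audit_proxy_label(df: pd.DataFrame,
                      score_col: str = "risk_score",
                      group_col: str = "group",
                      need_col: str = "active_chronic_conditions") -> pd.DataFrame:
    """Bin patients into risk-score deciles, then compare the mean of a
    direct health-need measure across groups within each decile.
    Persistent within-decile gaps mean the label is a biased proxy."""
    deciles = pd.qcut(df[score_col], 10, labels=False, duplicates="drop")
    return (df.assign(score_decile=deciles)
              .groupby(["score_decile", group_col])[need_col]
              .mean()
              .unstack(group_col))

# Usage (patients_df is assumed to exist):
#   print(audit_proxy_label(patients_df))
# In the 2019 study, this kind of comparison showed Black patients
# carrying more chronic conditions than white patients at the same
# risk-score level.
```

Running this before deployment, rather than after a third-party study, is the difference control 1 is meant to make.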

Outcome

Optum acknowledged the flaw and committed to updating the algorithm after the Science paper was published. The study prompted major scrutiny of algorithmic bias in healthcare and contributed to proposed US federal guidance on AI in clinical settings.

Tags: healthcare · racial-bias · proxy-variables · fairness · algorithmic-harm

Related incidents

High · 2018 · Amazon Recruiting AI Discriminated Against Women (D3, D6)
Critical · 2018 · Uber Self-Driving Car Kills Pedestrian in Arizona (D6, D5)
Medium · 2022 · GitHub Copilot Reproduced Licensed Code Verbatim (D2, D3)
NEXT STEP

Prove you understand how to prevent failures like this

The AIDA exam tests PSF knowledge across all 8 domains. Free to take, immediately verifiable.

Take the AIDA exam →