Google's Gemini image generation feature produced racially diverse images for prompts where historical accuracy required specific demographics, including Black Nazi-era German soldiers and female American Founding Fathers. An overcorrecting diversity-promotion safety layer produced historically inaccurate and offensive outputs, and Google suspended the feature.
Following criticism that AI image generators systematically underrepresented non-white people, Google added a diversity-promotion layer to Gemini's image generation, configured to introduce racial and gender diversity into generated images. For general prompts this worked acceptably. The same system, however, applied diversity promotion to historical prompts where specific demographics were factually required, generating images of racially diverse Nazi-era German soldiers and female American Founding Fathers. The feature went viral as an example of AI overcorrection.
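To make the mechanism concrete, here is a minimal Python sketch of an unconditional prompt-rewriting layer of this kind. Everything in it (the `promote_diversity` function, the trigger pattern, the attribute list) is an illustrative assumption, not Google's actual implementation.

```python
import random
import re

# Illustrative assumptions only; a real system would use a far richer
# person-detection step and attribute model.
PERSON_PATTERN = re.compile(
    r"\b(person|people|man|woman|men|women|soldier|doctor|king|father)s?\b",
    re.IGNORECASE,
)
DIVERSITY_ATTRIBUTES = (
    "a Black woman", "an East Asian man", "a South Asian woman",
    "a Hispanic man", "a white woman",
)

def promote_diversity(prompt: str) -> str:
    """Unconditionally append a demographic attribute when a person is depicted.

    This mirrors the failure mode: the rewrite has no notion of whether the
    prompt demands a specific, historically accurate demographic.
    """
    if PERSON_PATTERN.search(prompt):
        return f"{prompt}, depicted as {random.choice(DIVERSITY_ATTRIBUTES)}"
    return prompt

print(promote_diversity("a German soldier in 1943"))
# e.g. "a German soldier in 1943, depicted as a Black woman"
```

The rewrite fires on any prompt that depicts a person, which is exactly the property that makes it unsafe for historically anchored prompts.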
How the Production Safety Framework maps to this failure
This was a D2 failure: a safety layer applied without sufficient contextualisation. The diversity-promotion control was a valid response to a real problem (underrepresentation in AI image outputs) but shipped without a context classifier that could distinguish contemporary and fictional prompts from historical prompts requiring demographic accuracy. D5 also failed: no red-team exercise appears to have tested the intersection of the diversity layer with historical prompts before launch.
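A sketch of the missing D2 control, under the same assumptions as above: a context-classification step that gates the rewrite. The keyword regex stands in for what would realistically be a trained classifier, and `safe_generate_prompt` reuses `promote_diversity` from the previous sketch; all names and patterns are hypothetical.

```python
import re

# Keyword markers stand in for a trained context classifier; the list is
# illustrative and deliberately incomplete.
HISTORICAL_MARKERS = re.compile(
    r"\b(1[0-9]{3}s?|founding fathers|nazi|wehrmacht|medieval|victorian|ancient)\b",
    re.IGNORECASE,
)

def requires_demographic_accuracy(prompt: str) -> bool:
    """Flag prompts anchored to a specific historical period or named group."""
    return bool(HISTORICAL_MARKERS.search(prompt))

def safe_generate_prompt(prompt: str) -> str:
    """Apply diversity promotion only to contemporary or fictional prompts."""
    if requires_demographic_accuracy(prompt):
        return prompt  # historical accuracy takes precedence over the rewrite
    return promote_diversity(prompt)  # promote_diversity from the sketch above
```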
Specific PSF controls mapped to each failure point
- D2: gate the diversity-promotion rewrite behind a context classifier that distinguishes contemporary and fictional prompts from historical prompts requiring demographic accuracy.
- D5: run a pre-launch red-team exercise that explicitly tests the intersection of the diversity layer with historical prompts (see the test sketch below).
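The D5 control can be expressed as a pre-launch red-team regression test over the gated pipeline sketched above. The prompts and test are illustrative; a real exercise would be far broader and would inspect generated images, not just rewritten prompts.

```python
# Illustrative red-team cases: assert that the gated pipeline leaves
# historically anchored prompts untouched.
RED_TEAM_PROMPTS = [
    "a German soldier in 1943",
    "the Founding Fathers signing the Declaration of Independence",
    "a medieval English king",
]

def test_historical_prompts_bypass_diversity_layer():
    for prompt in RED_TEAM_PROMPTS:
        rewritten = safe_generate_prompt(prompt)
        assert rewritten == prompt, (
            f"diversity rewrite leaked into a historical prompt: {rewritten!r}"
        )
```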
Outcome
Google suspended Gemini's generation of images of people in February 2024. The incident drew significant press coverage and reputational damage, and the feature returned with revised handling in mid-2024.