Production AI Institute — vendor-neutral certification for AI practitioners
AI Incident Registry
High · Technology · 2022 · Prisma Labs (Lensa AI)

Lensa AI Generated Sexualised Images of Women Without Consent

Lensa AI's 'Magic Avatars' feature, which trained a personalised AI model on user-uploaded selfies, disproportionately generated sexualised imagery of women. Researchers found that women were significantly more likely than men to receive sexualised outputs, even from professional headshots. The app offered no mechanism to prevent sexualised outputs and did not disclose the risk before processing users' photos.

D3 · Data Protection
D2 · Output Validation

What happened

Lensa AI's Magic Avatars feature fine-tuned a Stable Diffusion model on approximately 20 user-uploaded photos to generate artistic portraits. Researchers and journalists documented that outputs for female subjects were significantly more likely to include bare shoulders, sexualised poses, and exposed skin, even when the source photos were professional headshots. The underlying model had been trained on internet data that systematically sexualised female subjects. Users were not informed this could occur, and no content control allowed them to opt out of sexualised outputs.

PSF Analysis

How the Production Safety Framework maps to this failure

This was a D2 failure enabled by a D3 training-data problem. The model had learned sexualisation patterns from its training data and applied them disproportionately to female subjects, a known failure mode of image generation models trained on unfiltered internet data. The critical D2 failure was the absence of any output safety layer: a basic NSFW classifier applied to outputs before display would have caught the majority of these cases. The D3 failure at training time (no bias audit, no filtered training set) was the root cause.

Controls that would have prevented this

Specific PSF controls mapped to each failure point

1. D2 · Output Validation: Apply an NSFW classifier to all generated outputs and suppress sexualised content, or warn users, before display.
2. D3 · Data Protection: Audit training data for sexualisation bias against female subjects; apply debiasing techniques or filtered training sets.
3. D3 · Data Protection: Obtain informed consent from users by disclosing the risk of sexualised output generation before processing their biometric photos.
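The first control above can be sketched as a simple gate between the image generator and the user. This is a minimal illustration only: it assumes an upstream NSFW classifier that returns a score per generated image, and the `gate_output` helper and 0.3 threshold are hypothetical, not Lensa's actual pipeline or any specific library's API.

```python
# Sketch of a D2 output-safety gate: score each generated image with an
# NSFW classifier BEFORE display, and suppress anything over a threshold.
# The scores and threshold below are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class GateResult:
    allowed: bool
    reason: str


def gate_output(nsfw_score: float, threshold: float = 0.3) -> GateResult:
    """Decide whether a generated image may be shown to the user.

    `nsfw_score` would come from an NSFW classifier run on the output
    image; scores at or above `threshold` are suppressed.
    """
    if nsfw_score >= threshold:
        return GateResult(False, f"suppressed: NSFW score {nsfw_score:.2f} >= {threshold}")
    return GateResult(True, "ok")


# Filter a batch of generated avatars down to the displayable ones.
scores = [0.05, 0.62, 0.18]
displayable = [s for s in scores if gate_output(s).allowed]
```

The point of the sketch is architectural rather than algorithmic: the classifier sits on the output path, so even a biased generator cannot surface sexualised images to the user unchecked.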

Outcome

The incident drew significant critical press coverage in late 2022. Prisma Labs subsequently implemented some content filtering. The case contributed to a broader discussion of NSFW safety in image generation and of non-consensual intimate image (NCII) protections for AI-generated content.

image-generation · sexualisation · consent · bias · NSFW

Related incidents

High · 2018
Amazon Recruiting AI Discriminated Against Women
D3 · D6

Critical · 2016
Microsoft Tay Chatbot Taught to Produce Hate Speech
D1 · D2

Medium · 2022
GitHub Copilot Reproduced Licensed Code Verbatim
D2 · D3
NEXT STEP

Prove you understand how to prevent failures like this

The AIDA exam tests PSF knowledge across all 8 domains. Free to take, immediately verifiable.

Take the AIDA exam →
← All incidents