Production AI Institute · PSF v1.1 open standard
Anthropic · Consumer service

Claude

User-controlled

Anthropic describes a model-improvement setting for consumer chats and says incognito chats are not used to improve Claude.

Training use

The cited help page explains what happens when users allow their chats or coding sessions to be used to improve Claude, and points to privacy settings for control.

How to opt out

Adjust the privacy and model-improvement settings in Claude's privacy controls; incognito chats are excluded from model improvement.

Private content

Anthropic advises users to be thoughtful about highly sensitive information and describes privacy protections when chats are used for improvement.

Retention

The cited article describes privacy protections but does not give ordinary users a single, concise answer on how long their data is retained.

Human review

When users allow their chats to be used for model improvement, Anthropic says access is limited to a small number of personnel involved in model training.