This is a transparency index, not legal advice. It records what is publicly stated today and where the public answer is still incomplete.
OpenAI says ChatGPT conversations may be used to improve models unless the user turns off model improvement in Data Controls.
Google says that when Gemini Apps Activity is on, Gemini data is used to improve Google AI with help from human reviewers.
Microsoft says files, communications, prompts, responses, and Microsoft Graph data used with Microsoft 365 Copilot are not used to train foundation models.
Meta says it uses public information from adult accounts and interactions with AI at Meta features to develop and improve generative AI models, with a right to object.
X says public X data plus interactions, inputs, and results with Grok may be shared with xAI to train and fine-tune Grok and other generative AI models.
Perplexity says AI data retention is enabled by default for Free, Pro, and Max users, and that users can turn it off in account settings.
Anthropic describes a model-improvement setting for consumer chats and says incognito chats are not used to improve Claude.
GitHub says that by default it, its affiliates, and third parties do not use individual-subscriber Copilot data, including prompts, suggestions, and code snippets, for AI model training.
Cursor says code is not used for training when Privacy Mode is enabled; when Privacy Mode is off, Cursor may use stored codebase data, prompts, editor actions, and snippets to improve AI features and train models.
Notion says it does not use Customer Data (including user content under personal terms), or permit others to use it, to train the machine-learning models used to provide Notion AI.
Slack says Customer Data is not used to train generative AI models unless the customer gives affirmative opt-in consent.
Zoom says it does not use customer audio, video, chat, screen sharing, attachments, or other communications-like customer content to train Zoom or third-party AI models.
Grammarly says individual-account Product Improvement and Training is on by default, while enterprise and certain sales-led accounts have it off by default.
Canva says privacy settings control whether general usage data and User Content can improve AI-powered features, and that Canva Education User Content is not used for AI training.
Adobe says Firefly does not train on customer data and that Firefly uses commercially safe datasets such as licensed content and public-domain material.