Anthropic is making major changes to how it handles user data. Claude users must decide by Sept. 28 whether they want data from their conversations to be used for training AI models. This is a departure from the company's previous policy, which ensured consumer chat data wasn't used for AI training.
Previously, users of Anthropic’s products were told their prompts and conversation outputs would be automatically deleted from Anthropic’s back end within 30 days “unless legally or policy‑required to keep them longer.” If a user’s input was flagged as violating Anthropic’s policies, the inputs and outputs could be retained for up to two years.
In its blog post, Anthropic said the update will “help us deliver even more capable, useful AI models.” The new terms apply to users on Claude Free, Pro, and Max plans, including when they use Claude Code from accounts associated with those plans. They do not apply to services under Anthropic’s commercial terms, including Claude for Work, Claude Gov, Claude for Education, or API use via third parties such as Amazon Bedrock and Google Cloud’s Vertex AI.
Anthropic says that by not opting out, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” Users will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.” Another likely motivation, however, is that access to millions of Claude interactions gives Anthropic training data that can strengthen its competitive position against rivals like OpenAI and Google.
These changes also reflect broader shifts in industry policy. Companies like OpenAI and Anthropic have faced increased scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.
There have been concerns that these policy changes might create confusion and that users might not fully understand what they are agreeing to. A Verge report noted that Anthropic’s new policy raises concerns that users might quickly click “Accept” without noticing they’re agreeing to data sharing. Meanwhile, privacy experts have long warned that the complexity surrounding AI makes meaningful user consent practically unattainable.

