AI company Anthropic, the maker of the Claude chatbot, has alleged that three Chinese artificial intelligence firms, DeepSeek, Moonshot AI, and MiniMax, misused its platform by drawing on Claude's responses to train their own models.
According to reports, the companies collectively set up more than 24,000 fake accounts and generated over 16 million interactions with Claude. The scale of activity has raised serious concerns about possible violations of Anthropic’s terms of service and broader questions around how AI systems are being trained and monitored.
In a press note, Anthropic said the companies relied on a technique known as “distillation,” where a smaller or less capable model improves itself by studying the outputs of a more advanced system. The company pointed out that while distillation is widely used within organizations as a legitimate training approach, it can be misused by competitors seeking to extract capabilities from other labs.
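The idea behind distillation can be illustrated with a toy example. The sketch below is purely illustrative and is not Anthropic's or any lab's actual method: a stand-in "teacher" function plays the role of a large model being queried, and a small "student" model is fitted to the teacher's recorded outputs rather than to original training data.

```python
import random

def teacher(x):
    """Stand-in for a capable model being queried; here just a fixed linear function."""
    return 3.0 * x + 1.0

def distill(num_queries=200, lr=0.01, epochs=500, seed=0):
    """Toy distillation: fit a student (w, b) to the teacher's outputs.

    Step 1: query the teacher many times and record its responses.
    Step 2: train the student by gradient descent on those responses,
            never touching any original training data.
    """
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(num_queries)]
    ys = [teacher(x) for x in xs]  # the "extracted" outputs

    w, b = 0.0, 0.0  # student parameters
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b
```

After training, the student's parameters closely match the teacher's behavior, which is the point of the technique: capability is copied from outputs alone, without access to the teacher's internals or data.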
Anthropic argued that such practices allow rival firms to sharpen their own models while cutting down the time, effort and cost it would otherwise take to build those capabilities independently.
Anthropic also wrote on X: “We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.”
DeepSeek, which recently drew attention for its R1 model, has been accused of tapping into Claude more than 150,000 times to gather its responses, according to data shared by Anthropic.
Moonshot and MiniMax are said to have logged more than 3.4 million and 13 million interactions with Claude, respectively. Anthropic stated, “We attributed the campaign through request metadata, which matched the public profiles of senior Moonshot staff. In a later phase, Moonshot used a more targeted approach, attempting to extract and reconstruct Claude’s reasoning traces.”
The details suggest that Claude was being probed at scale to better understand how it processes and structures its answers, an insight that could be used to build more capable AI systems. The allegations come at a sensitive moment, as the United States debates whether to further tighten restrictions on the export of advanced AI chips to China.
Anthropic is now urging what it describes as “a coordinated response across the AI industry, cloud providers, and policymakers” to push back against distillation-based attacks. The company says it is investing significant resources in building safeguards that make such activity more difficult to carry out and easier to detect.
The issue is not limited to one lab. OpenAI has previously raised similar concerns about DeepSeek, alleging that it sought to train its systems by distilling outputs from U.S.-developed models.
Reuters reported that in a memo to the U.S. House Select Committee on Strategic Competition, OpenAI claimed certain users believed to be linked to DeepSeek tried to access models through third-party routing services and extract outputs for distillation purposes.