Anthropic says three of China’s largest AI labs — DeepSeek, MiniMax, and Moonshot AI — have been using its Claude models without authorization to strengthen their own systems.
In a detailed statement released Monday, Anthropic alleged the companies ran “industrial-scale” distillation campaigns designed to extract capabilities from Claude. The startup said the activity involved roughly 24,000 fraudulent accounts generating more than 16 million interactions, violating its terms of service and regional access restrictions.
Claude is not available for commercial use in China, according to Anthropic, though the company said the rival labs found workarounds.
Distillation—the process of training a smaller or less capable model on the outputs of a more advanced one—is a legitimate and widely used technique in AI development. Many US firms rely on it internally. However, American companies have increasingly argued that some Chinese competitors are using it to replicate advanced capabilities without independently building them.
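In its simplest form, distillation amounts to fitting a smaller model to a larger model's outputs rather than to original training data. The toy sketch below is purely illustrative (a linear "student" fit to a stand-in "teacher" function); real distillation trains a neural network on a frontier model's responses or logits at far greater scale.

```python
# Toy sketch of distillation: a "student" is fit to a "teacher"
# model's outputs. All names here are illustrative stand-ins,
# not the method of any lab mentioned in this article.

def teacher(x):
    # Stand-in for a large, capable model's scalar output.
    return 2.0 * x + 1.0

def distill(queries):
    # Collect teacher outputs on a set of queries, then fit a linear
    # student y = a*x + b by ordinary least squares on those pairs.
    xs = list(queries)
    ys = [teacher(x) for x in xs]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return lambda x: a * x + b

# The student never sees the teacher's internals, only its outputs,
# yet it reproduces the teacher's behavior on new inputs.
student = distill(range(10))
print(abs(student(100.0) - teacher(100.0)) < 1e-6)
```

The point of contention in the article is not the technique itself, which is standard, but whose model supplies the teacher outputs and whether the querying party is authorized to collect them.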
Anthropic said these campaigns are becoming more intense and sophisticated, warning that the issue extends beyond one company. It called for coordinated action among industry leaders, policymakers, and the broader AI community.
The accusations mirror earlier claims from OpenAI, which said in January 2025 that DeepSeek may have “inappropriately” used OpenAI outputs for training. More recently, Google reported an increase in “model extraction attempts,” also known as distillation attacks.
Anthropic shared unusually specific details about the alleged behavior. It said DeepSeek appeared to be developing “censorship-safe alternatives” to politically sensitive queries. In MiniMax’s case, Anthropic claimed it detected the campaign while it was still active, allowing the company to monitor the competitor’s tactics in real time.
“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours,” Anthropic said, adding that nearly half of MiniMax’s traffic shifted toward extracting capabilities from the new system.
Representatives for DeepSeek, MiniMax, and Moonshot AI did not immediately respond to requests for comment.
Anthropic argued that improper distillation is not only a competitive threat but also a security risk. Models trained through such methods may lack robust safeguards, including protections intended to prevent misuse in areas like bioweapons development.
In response, Anthropic said it has deployed behavioral fingerprinting systems, increased data-sharing with other AI labs, and developed additional countermeasures to detect and block large-scale extraction attempts.
The company’s CEO, Dario Amodei, has been outspoken about AI safety risks. He recently warned that leading AI models are approaching a point where, without strong guardrails, they could potentially assist in building biological weapons.
Amodei is also a vocal supporter of US export controls on advanced AI chips—a stance that has divided the tech sector. Jensen Huang, CEO of Nvidia, has repeatedly argued that restricting chip sales to China will not ultimately slow the country’s AI development.
Anthropic countered that the alleged distillation attacks reinforce the case for export restrictions. Limiting access to advanced chips, it said, constrains both direct model training and the scale of unauthorized capability extraction.
At the same time, Anthropic has faced scrutiny over its own training practices. In January, The Washington Post reported on an internal initiative known as Project Panama, described in company materials as an effort to “destructively scan all the books in the world.”
Last year, Anthropic reached a $1.5 billion settlement in a class-action lawsuit brought by authors and publishers. As part of the agreement, the company did not admit wrongdoing.