Anthropic alleges three Chinese firms misused its AI tools to train their models: ‘How the turn tables’

Anthropic has leveled accusations against three well-known Chinese artificial intelligence firms, claiming they extensively used its Claude chatbot to secretly train competing models, a surprising twist in a years-long global debate over where standard industry practice ends and fraud begins.

In a statement, San Francisco-headquartered Anthropic alleged that Chinese labs DeepSeek, Moonshot AI, and MiniMax violated its commercial terms by engaging with Claude, its flagship chatbot. “We’ve uncovered large-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to improperly extract Claude’s capabilities to enhance their own models,” the company stated. “These labs generated over 16 million interactions with Claude via roughly 24,000 fraudulent accounts, violating our terms of service and regional access limitations.”

According to Anthropic, the Chinese companies relied on a method called “distillation,” in which one model is trained on the outputs of another, often more advanced, system. The alleged campaigns targeted areas Anthropic views as Claude’s key selling points, including complex reasoning, coding support, and tool use.
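To make the technique concrete, here is a minimal, self-contained sketch of sequence-level distillation in the general sense described above: a smaller “student” model is fine-tuned on completions produced by a stronger “teacher.” Everything below is illustrative; the query_teacher stand-in, the toy byte-level student, and all hyperparameters are assumptions made for the sketch, not anything the named labs or Anthropic are known to use.

```python
import torch
import torch.nn as nn

VOCAB = 128  # toy byte-level vocabulary for the sketch

def query_teacher(prompt: str) -> str:
    """Stand-in for an API call to a stronger 'teacher' model.

    In real distillation pipelines this is where the teacher's outputs
    are collected, and where terms of service and access rules apply.
    """
    return prompt + " -> detailed step-by-step answer"

class ToyStudent(nn.Module):
    """A tiny next-byte predictor standing in for the student model."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.rnn(self.embed(ids))
        return self.head(hidden)  # logits over the next byte

student = ToyStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for prompt in ["explain recursion", "sort a list"]:
    text = query_teacher(prompt)  # teacher output becomes training data
    ids = torch.tensor([[min(ord(c), VOCAB - 1) for c in text]])
    logits = student(ids[:, :-1])   # predict each next byte
    targets = ids[:, 1:]            # targets shifted by one position
    loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design point is that the student never touches the teacher’s weights; it only ever sees the teacher’s generated text, which is why distillation can be attempted against any model reachable through an API.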

Anthropic argues that while distillation is a “widely used and lawful training method,” the Chinese firms’ application of it here may have been “for improper purposes.” Using extensive networks of fake accounts to replicate a competitor’s proprietary model violates its terms of service and undermines U.S. export controls designed to limit China’s access to cutting-edge AI, Anthropic noted, calling for “swift, coordinated action among industry players, policymakers, and the global AI community.”

Anthropic is no stranger to disputes over training data, though not involving distillation: thousands of authors accused the company of copyright infringement, alleging it downloaded books in bulk from shadow libraries to train its AI models instead of purchasing copies and scanning them itself. In September 2025, Anthropic settled that lawsuit for $1.5 billion, paying authors approximately $3,000 per book for around 500,000 works.

What the Chinese firms are accused of

The company claims the three labs circumvented geofencing and business restrictions that limit Claude’s commercial availability in China by routing traffic through proxy services that resell access to major Western AI models. Anthropic stated that one such “hydra cluster” operated tens of thousands of accounts simultaneously to distribute requests across different API keys and cloud providers.

Once those accounts were set up, the labs allegedly ran automated, lengthy, high-token conversations designed to extract detailed, step-by-step answers that could be fed back into their own systems as training data. According to Anthropic, the result was an unsanctioned pipeline that turned Claude into an unwitting instructor for models being developed in China’s increasingly competitive AI industry.

Anthropic has not yet announced specific legal action against the three companies, but it has indicated that it has blocked known access points and is pushing Washington to strengthen export controls on advanced chips and AI services to prevent similar attempts in the future.

‘How the turn tables’

If Anthropic was hoping for sympathy, the reaction from industry observers has been notably skeptical. Commentators were quick to point out that Anthropic itself has faced high-profile accusations of overreach in its data collection practices beyond the authors’ copyright case. “How the turn tables,” wrote one commenter, quoting a line from the TV mockumentary series The Office.

Beneath the criticism lies a larger battle over who sets the rules for an industry built on repurposing human work. U.S. firms such as Anthropic and OpenAI have increasingly advocated for strict enforcement against foreign competitors they accuse of copying proprietary systems, even as they defend their own extensive data collection under the banner of fair use.

Chinese labs, many of which release more open-source models, are racing to narrow the performance gap with Western rivals using any legal edge they can find. With Washington already debating stricter restrictions on exporting AI chips and cloud services to China, Anthropic’s allegations are likely to fuel calls for new safeguards—while giving critics one more chance to note the uncomfortable symmetry at the heart of modern AI.

For this story, journalists used generative AI as a research tool. An editor verified the accuracy of the information before publishing.