The accusations landed in a terse blog post, but their implications reach far beyond Silicon Valley. Anthropic, the American artificial intelligence company behind the Claude chatbot, says three fast-rising Chinese labs illegally siphoned off its technology to accelerate their own models, raising concerns that stretch from intellectual property to national security.
In the post, Anthropic accused DeepSeek, MiniMax and Moonshot AI of creating more than 24,000 fraudulent accounts and generating over 16 million exchanges with Claude in order to train competing systems through a technique known as distillation. The practice involves using the outputs of a powerful model to train a smaller or newer one.
Distillation is widely used in the artificial intelligence industry. Frontier labs often distill their own systems to create cheaper, more efficient versions for customers. But most leading proprietary model providers, including Anthropic, explicitly prohibit outside entities from using their models in that way. Claude is not available in China.
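The mechanics described above can be sketched in a few lines: a "student" model is trained to match the "teacher" model's full output distribution (its soft probabilities over answers), not just its final answers. The function names, temperature value and toy logits below are illustrative only, and are not drawn from any lab's actual training pipeline:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Turn raw model scores (logits) into probabilities.

    A temperature above 1 'softens' the distribution, exposing more of
    the teacher's relative preferences between wrong answers.
    """
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Training the student to minimize this loss makes it mimic the
    teacher's behavior -- the core idea behind distillation.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Toy example: a student that roughly agrees with the teacher incurs a
# lower loss than one that confidently disagrees.
teacher = np.array([[4.0, 1.0, 0.5]])
good_student = np.array([[3.5, 1.2, 0.6]])
bad_student = np.array([[0.5, 1.0, 4.0]])
assert distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student)
```

In the disputes described in this article, the "teacher" outputs would be chat responses collected from a proprietary model via its API, which is exactly the usage that providers' terms of service prohibit.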
The allegations follow similar claims from Anthropic’s chief rival, OpenAI. Earlier this month, OpenAI told the US House Select Committee on China that DeepSeek and other Chinese firms had been illegally distilling its ChatGPT models over the past year.
DeepSeek startled the industry last year when it unveiled a model that appeared to approach the capabilities of ChatGPT while relying on fewer computing resources. The breakthrough challenged the prevailing assumption that ever larger amounts of processing power were required to train cutting-edge systems. It also raised questions about the effectiveness of American export controls designed to limit China’s access to advanced chips.
OpenAI said at the time that it was reviewing evidence that DeepSeek “may have improperly distilled” its models. In its memo this month, the company argued that DeepSeek’s rapid progress stemmed from “its ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs.”
DeepSeek has not publicly responded to OpenAI’s allegations.
Anthropic warned that models created through illicit distillation may lack the safety guardrails that American companies say they build into their systems. Without those protections, the company argued, such tools could heighten national security risks if used for cybercrime or the development of biological weapons.
They could also allow “authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” Anthropic wrote. “The window to act is narrow.”
The dispute comes amid a broader debate over whether Washington’s export controls have slowed China’s advance in artificial intelligence. After DeepSeek’s emergence, some critics argued that the restrictions had failed.
Anthropic offered a different interpretation. The fact that Chinese companies may have relied on distillation, it said, underscores the rationale for limiting access to advanced American chips and models. Exposing such attempts, the company argued, shows that these labs cannot sustain cutting-edge AI development through innovation alone, without access either to high-end semiconductors or to the outputs of American models.
Besides DeepSeek, MiniMax and Moonshot AI, the developer of the Kimi model, have become prominent in China's crowded artificial intelligence market; the three have earned the nickname "AI tigers." All three companies have models ranking among the top 15 on the Artificial Analysis leaderboard, a closely watched industry benchmark.
“In reality, these advancements depend in significant part on capabilities extracted from American models, and executing this extraction at scale requires access to advanced chips,” Anthropic said.
The clash reflects a new phase in the technological rivalry between the United States and China. As companies race to build more capable systems, questions over who controls the underlying technology, and how it is used, are becoming as consequential as the software itself.