Did you know that there’s a way of using outputs from LLMs that may involve no hacking (essentially just collecting large quantities of text and repurposing them as training data) and that upsets AI companies a great deal?
In a blog post on Monday, Anthropic said that the China-based AI companies DeepSeek, Moonshot, and MiniMax broke Anthropic’s rules in order to “illicitly extract” the capabilities of its signature AI model, Claude.
Distillation is a normal practice used by AI companies in which a “teacher” model is prompted with specifically tailored inputs, and the answers provided allow a “student” model to rapidly improve. For example, Anthropic writes, “frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers.” So to distinguish the actions Anthropic is complaining about from uses of distillation perceived as legitimate, these actions are referred to as “distillation attacks.”
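The teacher-student loop described above can be sketched in a toy form. This is purely illustrative (the function names and canned answers are invented for the example): real distillation queries a large neural teacher model and fine-tunes a smaller student network on the resulting prompt-and-answer pairs, whereas the "student" here simply memorizes them.

```python
# Toy sketch of distillation: query a "teacher" on tailored prompts,
# then use its answers as training data for a "student."

def teacher_model(prompt: str) -> str:
    # Stand-in for a large teacher model (hypothetical canned answers).
    answers = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return answers.get(prompt, "I don't know")

def distill(prompts: list[str]) -> dict[str, str]:
    # Collect (prompt, answer) pairs from the teacher -- this corpus
    # becomes the student's training data.
    return {p: teacher_model(p) for p in prompts}

# A real student would be a smaller network trained to reproduce the
# teacher's behavior; this one just looks up the distilled pairs.
training_data = distill(["capital of France?", "2 + 2?"])
student_model = training_data.get

print(student_model("capital of France?"))  # Paris
```

The point of the sketch is the data flow: the student never sees the teacher's weights, only its outputs, which is why distillation at scale can be done by anyone with API access.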
Does Anthropic consider distillation attacks criminal offenses? No crime seems to be alleged here, but these acts were carried out, Anthropic says, “in violation of our terms of service and regional access restrictions.”
Anthropic, which is itself dealing with the threat of being labeled a “supply chain risk” by the Pentagon, strikes a patriotic note in the post. Circumventing regional use restrictions and breaking rules allows “foreign labs, including those subject to the control of the Chinese Communist Party, to close the competitive advantage that export controls are designed to preserve through other means,” it claims.
Among the three China-based companies mentioned, Shanghai-based MiniMax, creator of the viral character chat app Talkie, offended Anthropic the most with the scale of its distillation effort: over 13 million alleged exchanges. That’s compared to Moonshot with over 3.4 million, and the most famous company named in the post, DeepSeek, with only an estimated 150,000.
OpenAI, Anthropic’s main competitor, is also mad about distillation from at least one Chinese AI company, having sent a memo to the House of Representatives earlier this month accusing DeepSeek of “ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs.”
DeepSeek is expected to release its latest flagship model, DeepSeek V4, any day now, and CNBC has warned that this release could cause chaos on Wall Street, at a time when there’s already enough AI-related chaos on Wall Street to go around.

