What happened
Anthropic reported that three Chinese companies — DeepSeek, Moonshot and MiniMax — used responses from its chatbot Claude as training data for their own models. The mechanism is called distillation: a less powerful system "learns" from the answers of a more advanced one.
According to Anthropic, the volume of activity was large: over 13 million requests are attributed to MiniMax, more than 3.4 million to Moonshot, and about 150,000 to DeepSeek. In total, more than 16 million requests were recorded, made through roughly 24,000 fake accounts.
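The distillation mechanism described above can be sketched in a few lines: a "student" model is trained on (prompt, answer) pairs harvested from a "teacher". This is a minimal illustration only; the teacher here is a toy stand-in function, not any real model or API.

```python
# Toy sketch of distillation: harvest a teacher's answers as training data.
# All names are illustrative; no real model or API is involved.

def teacher(prompt: str) -> str:
    # Stand-in for a powerful model answering queries.
    answers = {
        "capital of France": "Paris",
        "2 + 2": "4",
    }
    return answers.get(prompt, "unknown")

def collect_distillation_data(prompts):
    # Each (prompt, teacher answer) pair becomes a training
    # example for a smaller student model.
    return [(p, teacher(p)) for p in prompts]

dataset = collect_distillation_data(["capital of France", "2 + 2"])
print(dataset)  # [('capital of France', 'Paris'), ('2 + 2', '4')]
```

At the scale Anthropic describes, the "prompts" list would contain millions of queries sent through thousands of accounts, which is what makes the activity detectable.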
Why it matters
At first glance, this is a matter of licensing and intellectual property. But the more significant risk is the circumvention of safety mechanisms and the reproduction of a model's behavior without quality control or restrictions. If the ecosystem allows knowledge to "leak" en masse from large models into smaller ones, the result is a rapid spread of capabilities that were previously controlled by a narrow group of developers.
For Ukraine, this means two practical threats: first, the accelerated spread of tools that can be used for disinformation or cyber operations; second, a threat to national digital sovereignty if key capabilities are reproduced without adherence to security standards.
How Anthropic reacted
The company stepped up monitoring of suspicious activity, tightened account verification, and introduced additional restrictions to make large-scale distillation harder. For now, Claude (including the Sonnet 4.6 model) remains available to users, but with strengthened protective filters.
"We believe this was a systematic attempt to gain access to the model's capabilities and use them as training data, which could bypass existing safety restrictions."
— Anthropic press office
What's next: regulation, technologies, partnerships
The AI industry is already discussing two avenues of protection: technical (for example, watermarking responses, API rate limits, and improved user verification) and regulatory (unified rules for access to large models and sanctions for abuse). Experts emphasize that solutions must be international: unilateral actions by individual companies will not solve the problem.
For Ukraine, this is a reminder of the need to invest in national AI capabilities, cybersecurity standards, and participation in international agreements on the responsible use of technologies.
Conclusion
The incident with Claude is not just a dispute over commercial rights. It demonstrates how quickly the knowledge embedded in large models can spread beyond anyone's control, and what the consequences for security and sovereignty might be. The ball is now in the court of regulators, the industry, and countries seeking to safeguard their digital interests. Whether there will be enough time and tools remains an open question.