In the fast-moving world of artificial intelligence, a major U.S. company is pointing fingers at some of China's leading AI developers. Anthropic, the firm behind the popular Claude AI model, claims that three prominent Chinese labs (DeepSeek, MiniMax, and Moonshot AI) are illegally copying its technology to improve their own systems. These companies, often called China's "AI tigers" for their strong performance, build models that rank among the best worldwide.
Anthropic made the accusations in a blog post released on February 23, 2026. The company says the Chinese labs created more than 24,000 fake accounts to evade its access restrictions, using them to generate over 16 million conversations with Claude. This process, known as distillation, uses the outputs of a stronger AI (like Claude) to train a weaker one. It is a common technique in the industry: many labs distill their own models to create cheaper, smaller versions for users.
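To make the idea concrete, here is a minimal toy sketch of distillation. It is purely illustrative and has nothing to do with any lab's actual pipeline: the "teacher" is a stand-in for a large model whose internals the student cannot see, and the "student" is a tiny two-parameter model trained only on the teacher's outputs.

```python
# Toy distillation sketch (illustrative assumption, not any real lab's method):
# query a "teacher" model, collect its outputs, and train a smaller
# "student" model to imitate them.
import random

random.seed(0)

def teacher(x):
    # Stand-in for a large model: the student never sees this function,
    # only the (input, output) pairs it produces.
    return 3.0 * x + 1.0

# Step 1: query the teacher to build a synthetic training set.
inputs = [random.uniform(-1, 1) for _ in range(200)]
dataset = [(x, teacher(x)) for x in inputs]

# Step 2: train the student (a 2-parameter linear model) on those
# outputs with plain per-sample gradient descent on squared error.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    for x, y in dataset:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # student has imitated the teacher
```

Real-world distillation of a chat model works at a vastly larger scale (millions of prompt/response pairs, billions of parameters), but the principle is the same: the teacher's outputs, not its weights, become the student's training data.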
However, Anthropic says this case is different and illegal. Claude is not officially available in China due to national security rules, and Anthropic’s terms of service ban using it to train rival models. The labs allegedly used fake accounts and other tricks to get around these restrictions, focusing on Claude’s advanced skills in reasoning, coding, tool use, and more.
This isn’t the first time U.S. AI companies have raised similar concerns. Earlier in February 2026, OpenAI sent a memo to the U.S. House Select Committee on China. In it, OpenAI accused DeepSeek and others of doing the same thing with ChatGPT—distilling its models over the past year to “free-ride” on American innovations. DeepSeek has not publicly responded to these claims from either company.
The controversy highlights ongoing tensions in the global AI race. Last year, DeepSeek surprised experts by releasing a powerful model that performed nearly as well as top systems like ChatGPT but used far less computing power. At the time, this raised doubts about whether U.S. export controls on advanced chips were working. Those controls aim to limit China’s access to the powerful hardware needed to train cutting-edge AI from scratch.
Anthropic argues that the new evidence actually supports keeping those controls in place. Without access to American models through distillation, Chinese labs would struggle to match U.S. advances quickly. The company warns that models built this way might skip important safety features that U.S. firms build in, such as protections against misuse. This could lead to risks like cybercrimes, bioweapons development, disinformation, or mass surveillance by authoritarian governments.
Anthropic calls the window for action “narrow” and urges faster cooperation among companies, governments, and the AI community to stop these practices.
So far, DeepSeek, MiniMax, and Moonshot AI have not commented publicly on Anthropic’s specific allegations. The claims add fuel to the debate over fairness, innovation, and security in AI. As both the U.S. and China push to lead in this technology, questions about how much “borrowing” is allowed—and what it means for global competition—continue to grow.