Are China’s “AI Tigers” Learning From U.S. Models Without Permission?

A dispute between leading artificial intelligence developers is adding new friction to the U.S.–China technology rivalry.

U.S.-based AI company Anthropic has accused several prominent Chinese labs — including DeepSeek, MiniMax and Moonshot AI — of extracting capabilities from its Claude model to advance their own systems.

Anthropic alleges the companies created thousands of accounts and used millions of interactions with Claude in a process known as distillation, a technique commonly used in the industry to transfer knowledge from larger models into smaller ones. While distillation itself is widely accepted, many proprietary AI providers prohibit using competitors’ models for that purpose.
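For readers unfamiliar with the technique: distillation trains a smaller "student" model to match the output distributions of a larger "teacher" model, rather than learning from raw data alone. The core idea can be sketched in a few lines (all numbers hypothetical; real pipelines do this over billions of tokens with deep-learning frameworks, not plain Python):

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw scores to probabilities; a higher temperature
    # yields a "softer" distribution that exposes more of the
    # teacher's relative preferences between tokens.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the temperature-softened teacher and
    # student distributions -- the quantity a student model is
    # trained to minimize during distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical next-token scores from a teacher and a student model.
teacher = [2.0, 1.0, 0.1]
student = [1.5, 1.2, 0.3]
loss = distillation_loss(teacher, student)  # positive until the student matches
```

The dispute is not over this mechanism itself, which is standard practice, but over whose model served as the teacher and whether its terms of service allowed it.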

The accusations follow similar concerns raised by OpenAI, which has said some Chinese firms may have used distillation on its models, including ChatGPT. The issue has drawn attention because of DeepSeek's rapid rise, which challenged assumptions about the computing resources required to build high-performance AI systems.

Anthropic warned that models trained through unauthorized distillation may lack safety safeguards and could create cybersecurity and national security risks if deployed at scale.

The claims also feed into broader debates over export controls, access to advanced chips and how governments regulate frontier AI development as competition between U.S. and Chinese companies intensifies.