China rode that momentum hard. A year after DeepSeek's launch, there's now a cohort of Chinese open-source giants following the same blueprint, including Z.ai (formerly Zhipu), Moonshot, Alibaba's Qwen, and MiniMax. They're all racing to release more capable models, and they're closing in on US rivals at a pace few expected.
That matters because AI hype is dying down and companies are shifting focus from buzzy pilots to deployment and integration, where cheaper and more customizable tools tend to win. Chinese pricing means developers with limited budgets can experiment more, and open weights mean they can adapt models without asking for permission.
A study by researchers at MIT and Hugging Face found that Chinese open-weight models accounted for 17.1% of global AI model downloads over the year ending in August 2025. That narrowly surpassed the US share of 15.86%, the first time China had led on this metric. And Hugging Face data from last month shows that Alibaba's models, including its Qwen family, now have the most user-generated variants, more than models from Google and Meta combined.
The open-source ideal, though, runs headlong into some hard realities. Chinese models carry the imprint of China's content moderation regime and are trained to avoid outputs that conflict with government policy. And in February, Anthropic accused several Chinese labs of illicitly extracting capabilities from Claude through distillation, a process in which one model's outputs are used to train another. That's a common industry practice, but top US companies like OpenAI and Anthropic claim that Chinese firms have used fraudulent methods to do it.
