On Monday, Chinese tech giant Alibaba introduced Qwen3, its latest family of AI models—and it’s making bold claims. According to the company, Qwen3 matches or even outperforms some of the most advanced models currently offered by AI leaders like Google and OpenAI.
Qwen3 comprises eight distinct models, ranging in size from 0.6 billion parameters to an enormous 235 billion. In AI terms, parameters are roughly the "brainpower" of a model: the more it has, the better it tends to perform. Several of these models are already available, or soon will be, through open platforms like Hugging Face and GitHub, giving developers a chance to explore and experiment.
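For developers who want to try one of the open checkpoints, a few lines with Hugging Face's transformers library are enough. The snippet below is a minimal sketch, assuming a recent transformers release (with accelerate installed for device placement) and the repository id Qwen/Qwen3-0.6B for the smallest model; the exact repo name is an assumption based on Alibaba's naming convention.

```python
# Minimal sketch: pulling one of the smaller open Qwen3 checkpoints from Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"  # assumed repo id; swap in any released Qwen3 size
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a chat prompt using the model's own chat template.
messages = [{"role": "user", "content": "Explain what a model parameter is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated continuation, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```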
What makes Qwen3 especially interesting is its “hybrid” design. These models are built to be flexible: they can solve simple problems quickly or take more time to carefully “reason through” complex ones. It’s a feature that puts them in the same category as OpenAI’s o3 model, which also uses a reasoning mode for more accurate answers, though it comes with some delay.
Alibaba’s team explained in a blog post:
“We’ve integrated both thinking and non-thinking modes, so users can control how much time and computing power a task uses. It gives people more control and customization for their needs.”
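In practice, that switch is exposed through the chat template on Qwen3's Hugging Face releases. The sketch below assumes the `enable_thinking` flag described on the model cards; the parameter name and repository id are assumptions drawn from that documentation, not something stated in Alibaba's post.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")  # assumed repo id
messages = [{"role": "user", "content": "How many prime numbers are there below 50?"}]

# Non-thinking mode: answer immediately, using less time and compute.
prompt_fast = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
    enable_thinking=False,  # assumed Qwen3 template flag
)

# Thinking mode: emit an internal reasoning trace before the final answer.
prompt_slow = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
    enable_thinking=True,
)
```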
Some versions of Qwen3 also use a Mixture of Experts (MoE) architecture. Instead of running every parameter on every input, the model routes each piece of input to a small set of specialized "expert" sub-networks, so only a fraction of the model is active at any time. That makes it far more efficient to run without giving up capability, as the toy sketch below illustrates.
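Here is a self-contained, illustrative routing sketch (a toy example of the general MoE idea, not Alibaba's actual implementation): a small router scores each token, and only the top-scoring experts do any work.

```python
import numpy as np

# Toy Mixture-of-Experts routing, for illustration only.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

router_w = rng.normal(size=(d_model, n_experts))  # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]  # expert weights

def moe_layer(x):
    scores = x @ router_w                         # how well each expert suits this token
    chosen = np.argsort(scores)[-top_k:]          # indices of the top-k experts
    gates = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()  # softmax over chosen experts
    # Weighted sum of only the chosen experts' outputs; the rest stay idle.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (8,) — same output size, but only 2 of 4 experts ran
```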
Read More: Alibaba launches new AI version ‘better’ than DeepSeek
Global Reach, Deep Training
The Qwen3 models are multilingual, supporting 119 languages, and were trained on a massive dataset of nearly 36 trillion tokens (the basic units of text that language models process; roughly, a million tokens is about 750,000 words). This training data included a mix of textbook-style content, code, Q&A pairs, and even data generated by other AI systems.
Alibaba says this new generation of models is significantly more capable than the previous one, Qwen2. While Qwen3 models may not outperform every high-end model on the market, they hold their own across various tests and benchmarks.
The top-tier model, Qwen3-235B-A22B (235 billion total parameters, with about 22 billion active per query thanks to its MoE design), outperforms Google's Gemini 2.5 Pro and OpenAI's o3-mini on Codeforces, a popular competitive-programming platform. It also surpasses o3-mini on the AIME math benchmark and on BFCL, a benchmark that evaluates how well a model calls external tools and functions. However, this largest model is not yet available to the public.

The most powerful model you can download right now is Qwen3-32B, and it still performs impressively. It even beats OpenAI’s o1 model on coding challenges like LiveCodeBench. According to Alibaba, Qwen3 also stands out for its ability to follow complex instructions and handle tool-calling tasks with precision.
Read More: Apple Partners with Alibaba to Launch AI in China
A Growing Challenge to the West
As Chinese-developed models like Qwen3 gain ground, U.S. companies face mounting pressure. The U.S. government has been tightening restrictions on exporting advanced AI chips to China, but that hasn’t slowed the country’s AI progress much, at least in terms of software.
Tuhin Srivastava, CEO of the AI cloud platform Baseten, put it this way:
“These open-source models like Qwen3 show that Chinese labs can still keep up. They’re state-of-the-art and open, so they’ll be used domestically even as global tensions rise.”
In addition to being downloadable, Qwen3 is available through cloud AI platforms such as Fireworks AI and Hyperbolic, giving more users access without needing huge compute power.
As Alibaba continues to push forward in AI, Qwen3 is a signal that open models from China are not just catching up—they’re becoming serious competitors in the global race for AI leadership.