By Nileena Sunil
Chinese e-commerce and tech giant Alibaba released its new AI model, Qwen 2.5-Max, on Jan. 29, claiming it outperforms DeepSeek, ChatGPT, and other AI models. The release comes in the wake of another Chinese AI model, DeepSeek, emerging as a major disruptor in AI over the past week.
With its efficiency and cost-effectiveness, DeepSeek has far surpassed leading Western models, reports said. Since the release of DeepSeek’s AI models V3 and R1—which the company claims to have developed in two months at a cost of under $6 million—the chatbot has risen to the top ranking of the U.S. Apple App Store and sparked a sell-off that rattled global markets.
READ: China disrupts AI market with DeepSeek: A better, cheaper version of ChatGPT? (January 27, 2025)
Amid DeepSeek’s surge past Western AI models, Alibaba’s latest AI model entered the race, offering competition to both DeepSeek and Western models such as OpenAI’s ChatGPT and Google’s Gemini.
“Qwen 2.5-Max outperforms […] almost across the board GPT-4o, DeepSeek-V3 and Llama-3.1-405B,” Alibaba’s Cloud unit said in an announcement posted on its official WeChat account, referring to OpenAI’s flagship model and to the most advanced open-source models from DeepSeek and Meta.
Qwen 2.5-Max was released on the first day of the Lunar New Year, when most Chinese people are away from work and with their families. The timing indicates the pressure DeepSeek’s meteoric rise has exerted on its competitors, not just globally but also in the domestic market. Other rivals, such as ByteDance, have likewise scrambled to release their own AI models.
READ: China’s AI DeepSeek-V3 stuns, disrupts and rattles Silicon Valley (January 27, 2025)
Hangzhou-based Alibaba Cloud also said Qwen 2.5-Max was pretrained on more than 20 trillion tokens. The model is available for developers and enterprises to access on its website. Qwen 2.5-Max’s strong performance, according to Alibaba Cloud, shows that scaling up data and model parameters can effectively improve an AI model’s intelligence.


