A fierce battle among Zhipu, StepAI, and Alibaba: large models are back in their 2023-style spotlight.
Text by | Zhou Xinyu
Edited by | Su Jianxun
After a long period of silence, the arena of large models has once again become a battlefield this summer.
The latest battle played out at the recently concluded "AI Spring Festival Gala," WAIC (the World Artificial Intelligence Conference). The three parties in closest contention: Jieyue Xingchen and Zhipu, two of the "Six Little Tigers," and Alibaba, which fields one of the strongest model teams among the big tech companies.
On July 25, Jieyue Xingchen open-sourced its latest multimodal reasoning model, Step-3. On the same day, Alibaba released a new Tongyi Qianwen 3 reasoning model.
On the 28th, Zhipu released its latest-generation foundational large model, GLM-4.5. Alibaba's offensive continued: that same day, it released a multimodal package, open-sourcing Tongyi Wanxiang 2.2, which covers three modalities: text-to-video, image-to-video, and unified video generation.
One especially intense moment: on July 25, Alibaba hailed the new Tongyi Qianwen 3 as the "world's strongest." Three days later, GLM-4.5 emerged as the new king, claiming "SOTA" (state-of-the-art) status among global open-source models. In the comprehensive performance rankings released by Zhipu, GLM-4.5 placed 3rd globally, while Tongyi Qianwen 3 placed 9th.
△ Zhipu's GLM-4.5 ranked 3rd in the comprehensive model performance rankings. Image source: Zhipu
An employee of Zhipu told Intelligent Emergence that nearly the entire algorithm team had been closely tracking Tongyi Qianwen's updates. "It was nerve-wracking," he said. "If the gap had been too big, our late arrival would have been a joke." Only when GLM-4.5 outperformed on multiple benchmarks, including Agentic ability, did he finally breathe a sigh of relief.
The smoke of war at WAIC is a microcosm of the melee among the Six Little Tigers' models in the past two months.
As early as June, MiniMax held a five-day release spree: its open-sourced reasoning model M1 led all open-weight models in context length and tool-use scenarios, and its video-generation model Hailuo 2 produced viral hits overseas such as "Kitten Diving."
Just one month later, Kimi's newly open-sourced foundational model K2 swept the field, claiming 24 SOTAs among open-source models.
Baichuan Intelligence and Lingyi Wanwu, both of which have pivoted, were absent from WAIC and from this new round of the model melee.
Chart by Intelligent Emergence.
After the release of DeepSeek's V3 and R1, the Six Little Tigers went quiet in the market for nearly half a year.
Executive departures and talent drain became the norm. A report from Maimai showed that as of early July 2025, 41.07% of employees at the Six Little Tigers had set their status to "open to opportunities."
The battles of the post-DeepSeek era are crucial to the Six Little Tigers' return to center stage, and even to their survival. This report card will heavily influence each company's subsequent capital operations and commercialization progress.
More importantly, after half a year of eroding market reputation and flagging internal morale, the Six Little Tigers urgently need a comeback to prove, both internally and externally, that they still have what it takes to stay in the large-model game.
However, the model battles of the post-DeepSeek era remain arduous. DeepSeek R1 proved that for a model to make a splash, strong performance is not enough; it must also ship early.
The sting of being preempted still unnerves many of the "little tigers." We learned that preparations for training K2 began at the end of 2024; it was a project Kimi had high confidence in, originally slated for release in mid-2025. But R1's early release snatched the glory that might have been Kimi's.
In defense, on the very day R1 launched, Kimi had to push out K1.5, a version it was not fully satisfied with. The market response was underwhelming.
The warm reception of K2's release soothed, to some extent, the pain of being preempted by DeepSeek. On the night of the launch, Zhang Yutao, Kimi's co-founder, posted on his WeChat Moments: "make kimi great again."
However, K2's early success also left Zhipu, which likewise focuses on Coding and Agentic abilities, feeling aggrieved.
We learned that to win the SOTA battle among reasoning models, Zhipu spent nearly three months training GLM-4.5. To perform better on multi-agent tasks, it even abandoned its long-standing Dense route and switched to an MoE (Mixture of Experts) architecture.
"GLM-4.5 was initially meant to be the first domestic model to go head-to-head with Claude 4," a practitioner told Intelligent Emergence. "Unfortunately, Kimi kept K2 under such tight wraps that we only learned the technical details on the day it was released."
Caught off guard by Kimi, Zhipu urgently stepped up training in the final month and managed to lift GLM-4.5's Coding and Agentic benchmark scores to a level only "slightly behind" K2, while its overall ranking stood highest among the Six Little Tigers.
△ Zhipu's booth at WAIC. Image source: photo by Intelligent Emergence
Now, with competition on the field this intense, these large-model companies have returned to center stage, just as in 2023.
But unlike the 2023 industry trend of "stacking parameters and staying closed-source," Chinese model makers today have mastered the art of building a technical reputation.
Since DeepSeek's rise, open-sourcing models and publishing technical reports have become standard practice for the Six Little Tigers at every release.
"A model's first users are always developers. If you don't win over developers, it's hard for a model to gain traction," an AI application developer at WAIC told us.
He likened open-sourcing to the door through which large models reach developers: "Finding developers is easy now; they're all gathered on Hugging Face and GitHub. All a model maker has to do is post the open-source links there."
Open-sourcing is the route in; the rest comes down to technical strength.
Even after a turbulent half-year, the Six Little Tigers still hold far greater reserves of capital and talent than ordinary startups.
Clearly, no one among the Six Little Tigers scored a crushing victory in this round of the model melee, but each had its highlights:
MiniMax has firmly established itself at the top of the video-generation field;
Kimi K2 and GLM-4.5 have successively claimed numerous model SOTAs;
Jieyue Xingchen's Step-3 leads the still-niche multimodal track.
There is no absolute winner, but after this round of the melee, nearly all of the Six Little Tigers have made a name for themselves on the global model battlefield.
Statistics from the model open platform OpenRouter show that on July 28, 2025, Kimi K2 ranked 6th globally in model call volume, and GLM-4.5 reached 20th on the day of its release.
After MiniMax released Hailuo 2, downloads of the Hailuo AI app soared: data from Didi Data show its single-day downloads reached 110,000 on July 22.
△ Global model call-volume ranking on July 28, 2025. Image source: OpenRouter
The four "little tigers" that made progress have earned the chance to stay in the game.
After five consecutive days of releasing models and products including M1, MiniMax was reported to be preparing for an IPO. Likewise, at Jieyue Xingchen's press conference, the release of Step-3 was followed by confirmation of over $500 million in financing from institutions including Shanghai Guotou, along with an annual revenue target of 1 billion yuan cited by CEO Jiang Daxin.
But the model war is far from over. Given the consensus that "China doesn't need this many foundational models," the Six Little Tigers are still far from being able to relax.
Nor is the competition confined to the Six Little Tigers themselves. Alibaba's successive strikes during WAIC sounded an alarm: in the model directions the Six Little Tigers cannot cover all at once, such as multimodal, Coding, and Agent, the big tech companies can easily go all in.
For the remaining four "little tigers", a new round of elimination has just begun.
We welcome your thoughts and discussion!