
ATH shows off its strength, Alibaba AI breaks through again

晓曦 (Xiaoxi) · 2026-04-03 15:25
"Three releases in four days", Alibaba's AI enters the "systematic" harvest period.

Just two weeks after the Alibaba Token Hub (ATH) business group was established, Alibaba released three major models in quick succession at an unprecedented pace: Qwen3.6-Plus, Qwen3.5-Omni, and Wan2.7-Image, achieving notable breakthroughs in core areas such as multimodality, programming, and text-to-image generation.

The significance of "three releases in four days" goes beyond routine performance iteration: it is also a clear show of strength by Alibaba AI following its organizational restructuring.

Previously, the departure of some core personnel led the capital market to question Alibaba's AI progress. This intensive run of releases is the most powerful possible response to those concerns: it showcases Alibaba's deep foundation and agile execution in the global AI field, and confirms with concrete results that the new ATH organizational structure collaborates efficiently.

01. Results of "Systematic" R&D

Looking back, the "three releases" that shocked the market officially kicked off at the end of March.

First, on March 30, Alibaba released Qwen3.5-Omni, a natively omni-modal large model with significant improvements in long-context, multilingual, and audio-video understanding. It also added real-time interaction capabilities such as semantic interruption, voice cloning, and voice control. The model set new SOTA records on 215 tasks, with many core metrics surpassing Google's Gemini-3.1 Pro.

Next, on April 1, the Wanxiang team under the Qianwen large-model family released Wan2.7-Image. As a unified model for image generation and editing, Wan2.7 excels in visual fidelity, lighting logic, and semantic compliance. It is the best domestic model in its category, approaching the global state of the art, and fills a key gap for domestic large models in ultra-high-quality visual generation.

While the market was still marveling at "Alibaba speed", Qwen3.6-Plus officially launched on April 2, focusing on agent intelligence, programming, and tool calling, with a comprehensive leap in capability over the previous generation. In many authoritative programming evaluations, Qwen3.6 outperformed models such as GLM-5 and Kimi-K2.5, whose parameter counts are two to three times its own. Achieving stronger performance with fewer parameters, it became the benchmark for programming capability among domestic models.

On the latest Code Arena leaderboard of LMArena, the globally known blind-test ranking focused on AI programming capability, Qwen3.6-Plus took second place worldwide, surpassing international giants such as OpenAI, Google, and xAI to become the highest-ranked Chinese large model on the list.

In just four days, three large models with completely non-overlapping directions were released, each reaching the global top tier. Meanwhile, applications such as Wukong and Qoder integrated the new models immediately. Such a wide-coverage, high-density release cadence is rare across the entire AI field.

The underlying logic of this multi-dimensional "Alibaba speed" is not a single team's single-point breakthrough, but the cluster effect of long-term multi-point layout and deep collaboration within the Tongyi Laboratory. Once triggered, this effect generates an insurmountable technological momentum that ultimately delivers across-the-board results.

The concentrated delivery of these results marks Alibaba AI's official entry into a more stable, more collaborative "systematic" era. It confirms the breadth and depth of the Tongyi Laboratory's technological foundation, and proves that with a complete talent pipeline and a standardized engineering paradigm, the departure of individual personnel does not disrupt Alibaba's core R&D rhythm.

02. Highly Consistent Strategic Steps

If the explosion on the model side confirms "systematic" R&D, the rapid follow-up on the application side demonstrates the organization's "strong collaboration".

A notable development after Qwen3.6's release was that AI applications such as Wukong and Qoder immediately announced integration. These applications all belong to Alibaba's newly established ATH business group, and this coordinated action in unison is the first concentrated demonstration of the group's efficient collaboration.

In the past, Alibaba's departments operated relatively independently: model R&D, platform support, and front-end applications were scattered across different business units. Cross-department collaboration required complex processes with high communication costs, and strategic priorities were hard to unify. After a new model's release, application teams often needed several weeks to complete adaptation, slowing the entire path from model to application.

Everything changed with the establishment of ATH. Its core goal is clearly defined as "creating tokens, delivering tokens, and applying tokens". This statement aligns with the underlying logic of "token economics": in the AI era, tokens are not only the technical unit for measuring model compute and output, but also the "general equivalent" driving circulation in the digital economy.

ATH has now achieved a high degree of resource concentration. It has integrated five core AI forces within the group: the Tongyi Laboratory, the MaaS business line, the Qianwen Division, the Wukong Division, and the AI Innovation Division. Led directly by group CEO Wu Yongming, it has organizationally connected the full chain from underlying technology to commercialization.

Looking at this release of Qwen3.6, its strong performance in application areas such as multimodality, text-to-image generation, and programming precisely serves ATH's core goal of increasing token consumption and commercial penetration. The establishment of ATH not only provides solid infrastructure for the large-scale application of large models, but also marks Alibaba AI's entry into an era of "strong collaboration" with highly consistent strategic steps.

03. Why Can ATH Succeed?

Through deep coupling of underlying models and upper-layer applications, the strong collaboration brought by the ATH business group is being converted into remarkable momentum for Alibaba in AI deployment.

Currently, the intelligent agent track represented by "Lobster" is in the spotlight, and Alibaba's offensive here is especially fierce. Thanks to the deep integration of underlying technology and front-end applications after ATH's establishment, products such as Wukong and Qoder were updated immediately after Qwen3.6's release. This shows both Alibaba's keen insight into market and user needs and its remarkable speed in turning large models into products.

The reason ATH can achieve this integration lies, at its core, in Alibaba's long-standing talent cultivation mechanism.

As early as 2019, Alibaba's DAMO Academy launched StructBERT, a pre-trained language model based on the BERT architecture, taking its first step in systematic exploration and making Alibaba one of the earliest domestic enterprises on the large-model track.

Through years of talent cultivation and recruitment, the Tongyi Laboratory has built a complete talent pipeline and deep technological accumulation, giving it rich reserves in fields such as pre-training, post-training, vision, and speech. In other words, the Qianwen large model's success is itself the result of the lab's adherence to "long-termism". Alibaba's confidence in the large-model competition has never rested on one or two geniuses, but on a well-organized corps of talent.

Another set of data underscores the point. Over the past year, despite frequent talent movement across the industry, the Tongyi Laboratory's update frequency in technical communities such as GitHub and Hugging Face has remained in China's first tier, with model iteration cycles at the monthly or even weekly level and model capabilities consistently at the global frontier. This high-frequency, high-quality output is the most direct evidence of the stability of Alibaba's AI talent pipeline and the maturity of its R&D system.

Ultimately, large-model competition is destined to be a marathon that tests foundations and organization. The "Alibaba speed" ATH demonstrated within just two weeks of its establishment sends a clear signal: with deep technological accumulation and an extreme collaboration mechanism, Alibaba AI has not been tripped up by external doubts. Instead, it is reshaping the competitive landscape of China's large-model track in an even more forceful way.