By winning over its "enemies", Google Cloud has taught Alibaba Cloud a lesson.
A US subsidiary of Haier has become a flagship case for Google Cloud's push to build its own AI paradigm.
That subsidiary is GE Appliances, acquired by Haier Group in 2016; its main business is the development and manufacturing of household appliances.
GE Appliances has now embedded Google's Gemini Enterprise agent platform into its smart-manufacturing data platform to track production lines, performance, and parts management.
How many agents does GE Appliances run? Google officially puts the number at 800.
This may be the largest publicly disclosed enterprise-level agent deployment in global manufacturing to date. The last deployment of comparable scale came at the end of 2025, when Volcengine worked with Hailiang Group to build more than 600 scenario-based agents in three months, with estimated annual token consumption of over 50 billion.
Creating AI benchmark cases to drive the accelerated scaling of agents has become the shared playbook of global cloud providers: the more agents, the more tokens consumed, and the more mature the customer's AI transformation.
As Google CEO Sundar Pichai put it, last year the AI industry was busy discussing how to build an agent; now the conversation has shifted to "how to manage thousands of agents."
Google has done the same. By strengthening its cloud-native enterprise agent platform, it enabled GE Appliances to use agents to communicate with and manage more than 700 suppliers, cutting the share of out-of-stock orders by 25%. GE Appliances also built an agent called "Quality Insights" to improve product design and accelerate iteration; in actual operations, it uncovered millions of dollars in business opportunities.
Google has long been a global benchmark for the full-stack AI closed loop, and Alibaba is the only domestic Internet giant that bears comparison with it.
Google and GE Appliances started their AI cooperation in 2023. Three years later, 800 agents are everywhere.
Last October, Alibaba also began comprehensive AI cooperation with Haier Group: Wu Yongming and Zhou Yunjie stood side by side to announce full-stack AI cooperation spanning scenarios, platforms, models, and computing power. When will Haier produce a benchmark case on the level of GE Appliances?
The story continues.
From a broader industry perspective, Google has not only created benchmark customer cases but also turned "enemies" into customers, concentrating the competition of the entire AI industry onto its own infrastructure.
Anthropic and Meta at the model layer, NVIDIA at the chip layer, and Apple's phones at the device layer are all Google's competitors, yet all of them are also options that Google Cloud offers to its many customers.
Winning the competition matters, but what matters more is making the competition take place inside your own ecosystem. That may be the most important lesson Google can teach Alibaba Cloud.
What is Google's differentiation?
"We believe that the future of artificial intelligence must be open, while others want to confine you in a walled garden," said Thomas Kurian, CEO of Google Cloud.
The subtext is clearly aimed at rivals Microsoft (Azure) and Amazon (AWS): competitors want to lock models, data, and agents inside their own ecosystems, whereas Google offers an integrated solution that lets enterprises independently control their AI architecture and data.
As one of the world's three major international cloud providers, Google Cloud holds roughly 14% of the global market, well behind AWS's 28% and Azure's 21%. But AI is accelerating the transformation of cloud services, and Google Cloud has become the fastest-growing of the three public cloud providers.
In the fourth quarter of last year, Google Cloud's revenue grew 48%, not only the fastest among all of parent company Alphabet's businesses but also at least 10 percentage points above many competitors' growth rates.
The latest figures Google Cloud disclosed in April show that direct API calls by customers to its in-house models now process more than 16 billion tokens per minute, up from 10 billion the previous quarter.
Alibaba Cloud has not disclosed comparable data. Coincidentally, in early April Volcengine disclosed that daily token usage of the Doubao large model had exceeded 120 trillion, doubling within three months. Assuming API calls account for 80% of that, the Doubao large model processes roughly 66.7 billion tokens per minute.
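That back-of-envelope estimate is easy to verify. Note that the 80% API share is the article's stated assumption, not a figure Volcengine has disclosed:

```python
# Back-of-envelope check of the per-minute token estimate for Doubao.
DAILY_TOKENS = 120e12      # 120 trillion tokens/day, as disclosed by Volcengine
API_SHARE = 0.80           # assumed share of usage attributable to API calls
MINUTES_PER_DAY = 24 * 60

per_minute = DAILY_TOKENS * API_SHARE / MINUTES_PER_DAY
print(f"{per_minute / 1e9:.1f} billion tokens per minute")  # 66.7
```

A 120-trillion daily total divided over 1,440 minutes gives about 83.3 billion tokens per minute overall; taking 80% of that yields the article's 66.7 billion figure.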
Of course, Google Cloud and Volcengine differ in how they count and price tokens, but it is undeniable that the three companies with the highest global token consumption are OpenAI, Google, and ByteDance.
At the recent Google Cloud Next 26 annual conference, the "multi-cloud, multi-chip, multi-model" strategy emerged as the core foundation of Google Cloud's rapid growth, and Google once again sharpened the narrative that differentiates it from AWS and Microsoft's cloud:
Chip freedom: use Google's self-developed eighth-generation TPU designed for the agent era, or choose the NVIDIA Vera Rubin NVL72 rack-scale system, which debuts on Google Cloud;
Model freedom: choose among Gemini, Anthropic's Claude, the open-source Gemma, and any third-party model;
Data freedom: data can stay wherever it lives (AWS, Azure, or on-premises), and agents run without data migration;
Governance freedom: products such as Agent Identity let enterprises control agent permissions, audits, and security themselves.
In other words, in the agent era Google is not just emphasizing how smart its models are; it is offering a full-stack, integrated system, from TPU chips at the core to the agent management layer for security defense, turning technical capability into standardized infrastructure that helps enterprises scale and systematically deploy their AI agent systems.
Notably, Google Cloud is simultaneously shoring up its two weak spots, the enterprise agent market and high-end computing chips, putting it in direct competition with Anthropic and NVIDIA.
The enterprise market Anthropic occupies is the envy of every competitor. The latest data shows Anthropic holds roughly 40% of the enterprise-level agent market, versus about 27% for OpenAI and 21% for Google. Anthropic's annualized revenue has surpassed $30 billion, pushing its valuation in the private secondary market past $1 trillion.
Therefore, Google Cloud uses the case of GE Appliances to promote its enterprise - level agent platform, aiming at Anthropic's dominant position.
At the chip layer, Google's eighth-generation TPU has split its architecture into two dedicated product lines, one for training and one for inference. TPU 8t (Sunfish) excels at large-scale, compute-intensive training tasks, while TPU 8i (Zebrafish) is designed for inference, especially large-scale agent-interaction scenarios. Both are slated to launch officially this year.
The more Jensen Huang stresses the "versatility" of NVIDIA GPUs, the harder Google pushes chip specialization. Training and inference are no longer served by a single chip; each line pursues higher energy efficiency within its own role, under constraints such as power. Computing power matters, but what matters more is how effectively it is used. That is Google's clear signal.
Of course, Google has not directly compared the performance of the eighth - generation TPU with NVIDIA chips, and there are still uncertainties in the chip supply chain and development cycle.
How far is Alibaba from Google?
Over the past year, no company has studied Google more closely than Alibaba.
Bringing top overseas AI talent onto the team has become a common move among domestic tech giants. In 2025, ByteDance recruited Wu Yonghui from Google DeepMind to lead its large-model project; Tencent poached Yao Shunyu from OpenAI as its chief AI scientist, and a few months later Tencent's Hunyuan Hy3 Preview launched as one of its core AI achievements. Alibaba, for its part, recruited Zhou Hao, a former senior researcher at Google DeepMind, who joined Tongyi Laboratory and reports directly to Zhou Jingren, CTO of Alibaba Cloud.
For talent to realize its potential, organizational restructuring is also required.
Just as Google merged DeepMind and Google Brain into a unified force through vertical organizational integration, Chinese giants have restructured too: Tencent disbanded its AI Lab and established an AI Infra Department, AI Data Department, and Data Computing Platform Department, while Alibaba created the ATH Business Group, vertically integrating all AI-related businesses and charging it with the independent commercialization of MaaS (Model as a Service), with Wu Yongming in charge.
On the model front, native multimodal models have become the focus of competition this year. ByteDance's Seedance 2.0 has caused a sensation in the film and television industry. Alibaba's Tongyi Qianwen Qwen3.5 clearly aims to compete with Google's full line of large models. And HappyOyster, the world-model product released in April that can be built and interacted with in real time, belongs to the same world-simulator genre as Google's Genie3.
Models drive product capability, which in turn shapes market perception; that is the mainstream direction of AI evolution. The difficulty is that MaaS has become the core track in the second half of the AI race: how do you build products for enterprise-level agent construction and governance?
ByteDance entrusts this mission to Volcano Ark under Volcengine. The core of Alibaba's MaaS is the Bailian platform: built on the Qianwen base model, it handles post-training for vertical application scenarios, B-end agent development, and enterprise delivery. DingTalk, meanwhile, has entered the market as an enterprise-grade AI-native work platform, both an independent application and the unified outlet for Alibaba's AI capabilities in the enterprise work scenario.
DingTalk + Bailian represents Alibaba's organization-level bet for the agent era, a direct analogue to Google's Vertex AI + Workspace.
In the same direction, Alibaba Cloud recently launched "JVS Crew", an enterprise-level agent-building platform that integrates an AI assistant (Clawbot) with an independent cloud environment (CloudSpace), giving enterprises production-grade agent-building capability.
Still, as Google's MaaS strategy keeps iterating, with its core evolving from Vertex AI toward an enterprise-level agent platform, Google's emphasis on an open model ecosystem plus full-stack agent governance offers a useful reference for domestic peers.
Using AI to combat AI security risks has become a high - priority strategy under Google's MaaS system.
This year's release of the Claude 4.6 model, with its stronger AI code-analysis ability, has significantly raised the automation level of security defense, but it has also stoked concern that it could accelerate cyber-attacks. The frontier security questions in AI are no longer limited to preventing the generation of harmful content; they now center on the operational security of agents: automatically identifying code vulnerabilities and producing fixes.
"To become a truly autonomous enterprise where employees can act independently and reliably like team members, an enterprise needs an underlying foundation that can maintain a long - term security and trust system," Pichai mentioned.
Although artificial intelligence may amplify security risks, Google Cloud customers can now use AI itself to protect their organizations: Google has released a series of new agent solutions for threat detection, including a new AI Application Protection Platform (AI-APP).
AI is both a productivity tool and a security defense weapon. Using AI to resist AI - derived risks is an important ability indicator for testing a cloud ecosystem in the future.
Friendship and rivalry will be the main theme
Going back to the initial question, what does it mean that GE Appliances has deployed 800 enterprise - level agents on Google Cloud?
The point is not just the quantity. These agents operate with fully controllable permissions and deliver real process-efficiency gains because they grew up inside Google's security-governance environment, on a trust foundation built through years of cooperation. Every agent must be registered in the system, every key operation leaves a cryptographic-grade audit log, and every cross-agent collaboration is subject to traffic and permission control by the underlying system.
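The article does not describe Google's implementation, but the idea behind a "cryptographic-grade audit log" can be sketched in miniature: a hash chain in which each entry commits to the previous one, so editing any recorded operation breaks verification. All names below are illustrative, not Google Cloud APIs:

```python
import hashlib
import json

def append_entry(log, agent_id, action):
    """Append an audit entry whose hash commits to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"agent": agent_id, "action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log):
    """Recompute every hash; any edited entry invalidates the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"agent": entry["agent"], "action": entry["action"], "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "quality-insights", "read:defect_reports")
append_entry(log, "supplier-agent-042", "write:purchase_order")
print(verify(log))              # True: chain is intact
log[0]["action"] = "tampered"
print(verify(log))              # False: tampering breaks the chain
```

This is only a toy model of tamper evidence; a production system would also need signed entries, agent identity, and the traffic and permission controls the article describes.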
Yet beyond these showcase agent deployments, Google's quieter narrative this time is that it is pulling the entire foundation of the AI model industry into its customer base, competitors and ecosystem partners alike.
Is NVIDIA's chip a competitor to Google's TPU?
Yes, but Google runs a dual-supply strategy. Alongside its self-developed TPUs, NVIDIA's next-generation rack system, the Vera Rubin NVL72, will debut on Google Cloud's architecture in the second half of this year, giving Google Cloud top-tier high-performance computing support. Moreover, Google Cloud and NVIDIA are jointly upgrading the Falcon network protocol, effectively widening the massive data highway and further reducing enterprise costs.
Are Anthropic, Meta, and OpenAI enemies of Google's Gemini at the model level?
Yes! But the Claude model is deeply integrated into Google Cloud, and the two companies' cooperation has expanded to a 3.5 GW AI computing cluster. Social-media giant Meta has likewise struck a multi-billion-dollar deal with Google to rent TPUs for model training. Google Cloud is offering customers differentiated computing options that NVIDIA cannot match, turning the training base of the entire foundation-model industry into its customers.
In effect, Google is digging for gold while selling shovels, and getting its fellow gold-diggers to buy its shovels too.
Then there is Apple's Siri. The new generation of Siri, due this year, is built on Google's Gemini technology. Although Apple bars Google from accessing iPhone user data and processing is completed on Apple's servers, it means that Android and Apple, long-time rivals in the handset market, have become AI partners, just as they once did in search.
Friendship and rivalry may be the main theme of the future AI industry.
For Chinese cloud providers, the important lesson from Google is that the all-or-nothing zero-sum game is a thing of the past. The real moat of the AI era may not be a better model or superior technology, but the ability to create a more complex, multi-dimensional game and keep converting industry competitors into customers of one's own ecosystem.
This article is from the WeChat official account “Bluehole Business” (ID: value_creation), author: Zhao Weiwei, published by 36Kr with authorization.