
The dramatic shift in computing power in China's intelligent driving industry

智见Time · 2025-12-30 18:35
The role of the cloud has become even more important.

In 2025, against the backdrop of the continued evolution of autonomous driving, China's intelligent driving industry witnessed an unprecedented shift in computing power.

The key backdrop for this shift is that, on the software side, after the whole industry completed its move to the "end-to-end" technical paradigm, the major players spent 2025 locked in a visible dispute over algorithm approaches. So far, however, at least in the Chinese market, neither VLA nor the world model has demonstrated an overwhelming advantage in end-user experience.

But the more crucial driver of this shift lies in the divergence in how high-order intelligent driving is being commercialized.

For instance, high-order intelligent driving centered on urban NOA has shown a K-shaped split across price segments. Two distinct value lines - democratizing intelligent driving on one side, pushing it upmarket on the other - are intertwined in market competition. This has inevitably fragmented the vehicle-side computing power configurations deployed for high-order intelligent driving across the industry.

It's also worth mentioning that, as intelligent driving technology keeps evolving, some players that started from L2 are now positioning themselves for Robotaxi, which sits in the L4 category. In this scenario, the role of cloud computing power is becoming increasingly prominent.

Of course, behind this seemingly chaotic series of changes, there exists a relatively clear underlying logic: computing power is becoming an increasingly core element in the development of intelligent driving.

In fact, viewed across technological evolution, commercial deployment, and industry differentiation, a near-certain conclusion emerges from the chaotic landscape: whether on the vehicle side or in the cloud, the Chinese intelligent driving industry's demand for computing power keeps rising.

Computing Power Migration under the End-to-End Framework

On August 26, 2023, Elon Musk hosted an autonomous driving livestream from Silicon Valley that was watched by millions. In it, Musk drove a Model S running the FSD Beta V12 software, built on Tesla's then-in-development end-to-end autonomous driving system.

This successful live broadcast can be regarded as a turning point for Tesla's autonomous driving to enter the end-to-end stage.

It's worth noting that the Model S Musk drove in this livestream was based on Tesla's HW3 computing hardware, so its computing power remained at 144 TOPS - even though the more powerful HW4 had by then already begun to be installed in vehicles.

In fact, on Tesla's timeline, this 144 TOPS remained its computing power standard from 2019 to 2023, even though its software algorithms went through several major changes over that period. Even in the first half of 2023, when Tesla was developing the end-to-end software and making the initial vehicle-side deployments, its priority was still the 144 TOPS HW3.

However, as Tesla continuously advanced the evolution of the FSD algorithm framework along the end-to-end path, the role of HW3 was gradually replaced by HW4.

In July 2024, Tesla released FSD V12.5, which had five times as many parameters as V12.4. Crucially, this version was optimized first for vehicles equipped with the newer HW4 hardware and only later for HW3 models. HW4 thus became the preferred deployment platform as FSD evolved.

In this way, under the end-to-end technological framework, Tesla successfully completed the switch of its computing power platform from HW3 to HW4.

It should be noted that even within the end-to-end architecture, Tesla has kept making significant updates to the FSD software: the parameter count increased fivefold from V12.4 to V12.5, and FSD V13, pushed at the end of 2024, underwent a large-scale code rewrite.

Moreover, in 2025, building on FSD V13, Tesla launched FSD V14 with ten times the parameter count. This order-of-magnitude increase in model parameters means the vehicle can understand more complex environmental information. Meanwhile, Musk has repeatedly said on social media that Tesla's self-developed next-generation AI5 computing platform will offer ten times the computing power of HW4 (since renamed AI4).

Looking across the ocean to China, intelligent driving players kept learning from Tesla but did not follow exactly the same path.

In fact, after Tesla committed to the end-to-end path in 2023, Chinese intelligent driving players, including new-force brands like XPeng, Li Auto, and NIO as well as intelligent driving solution providers like Huawei, Horizon Robotics, and DeepRoute.ai, all embraced the end-to-end approach in 2024, a move that itself carried an element of learning from Tesla.

From the perspective of vehicle-side computing power, these players still preferred to deploy their end-to-end solutions on existing computing platforms. The most mainstream choice was, without question, the NVIDIA dual Orin-X platform with a total computing power of 508 TOPS.

After entering 2025, however, as Tesla's FSD continued evolving toward V14, Chinese players - having collectively recognized the problems and limits of the end-to-end approach, such as poor interpretability and difficulty with long-tail scenarios - began exploring new algorithm paths, and disputes emerged.

Among them, players represented by Li Auto, XPeng, and DeepRoute.ai publicly chose the VLA (Vision-Language-Action) model approach originating in embodied intelligence, while Huawei, NIO, and others placed more emphasis on the world model.

Partly the companies' implementation paths genuinely differ, to a greater or lesser degree, and partly there are marketing considerations at play; the result is a dazzling variety of claims. Judged by actual experience, though, it is hard to say that any company holds a truly overwhelming advantage in intelligent driving.

Yet despite the differences in software algorithm choices, judging by the overall rhythm of industry development, technology switching, and commercial deployment, the major intelligent driving players collectively entered a vehicle-side computing power switching cycle in 2025. Along with this switch, the industry split into three distinct schools at the vehicle-side computing power level.

Three Schools, Each with Its Own Merits

The leap in computing power is actually an inevitable result of the technological evolution and product implementation of high-order intelligent driving.

For example, in 2022, at the critical juncture when intelligent driving scenarios shifted from highway NOA to urban NOA, players including Li Auto, NIO, and XPeng all adopted the NVIDIA dual Orin-X platform in their newly launched second-generation models (NIO fitted four Orin-X chips, but its models mainly ran on two of them).

In that upgrade, the computing power of these models rose several times over that of their predecessors, which supported the continuous iteration of their algorithms all the way into the end-to-end era. By 2025, as high-order intelligent driving advanced further toward VLA or the world model, intelligent driving computing power saw another collective leap.

But this time, there was an obvious divergence in the choices of each company.

Based on the situation in 2025, vehicle-side computing power choices for high-order intelligent driving fall mainly into three schools: ① self-developed by automakers; ② NVIDIA-based; ③ Huawei-related and others.

The first school consists of automakers developing their own chips, represented by the three new-force automakers NIO, XPeng, and Li Auto.

NIO first fitted two of its self-developed Shenji NX9031 chips in the ET9, delivered at the beginning of the year, and then installed a single Shenji NX9031 in models such as the refreshed "5566" lineup (ET5, ET5T, ES6, EC6) and the new ES8 in the first half of the year. According to NIO, the computing power of one Shenji NX9031 is equivalent to that of four Orin-X chips.
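NIO has not attached an official TOPS figure to this comparison, but a rough sense of scale can be taken from the numbers quoted earlier in this article: if a dual Orin-X setup totals 508 TOPS, then a single Orin-X is about 254 TOPS, and "four Orin-X chips" implies a part in roughly the 1,000 TOPS class. This is only an illustrative back-of-the-envelope estimate, not a vendor specification:

\[
\text{single Orin-X} \approx \frac{508\ \text{TOPS}}{2} = 254\ \text{TOPS},
\qquad
\text{NX9031} \approx 4 \times 254\ \text{TOPS} \approx 1000\ \text{TOPS}
\]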

XPeng's self-developed Turing AI chip delivers 750 TOPS of computing power. In July 2025, three of these chips went into the XPeng G7 Ultra. In the newly launched extended-range XPeng X9, the Max-version intelligent driving system also switched from dual Orin-X chips to a single Turing AI chip.

Li Auto's situation is a bit special. It has announced its self-developed M100 vehicle-side intelligent driving chip, but the chip has not yet gone into vehicles. Li Auto says that, compared with the most powerful chips on the market, the M100 can deliver twice the performance when running large models and three times the performance when running vision models. The chip is expected to be deployed in its flagship models and delivered to users next year.

For its models launched in 2025, therefore, Li Auto still chose NVIDIA's latest-generation Thor chip - which is the second school, the NVIDIA school.

As NVIDIA's latest-generation in-vehicle computing platform, Thor delivers several times the computing power of Orin-X and is tightly integrated with NVIDIA's DriveOS operating system, making it a popular choice in the market.

In terms of implementation, in addition to Li Auto's models, in the first half of 2025, Thor was installed in models such as the Lynk & Co 900 and the Xiaomi YU7. In the second half of the year, multiple models under the ZEEKR and IM brands were also launched with Thor installed.

It's worth noting that most of the above models adopted a single-Thor solution, while the ZEEKR 9X also offers a Qianli Haohan H9 intelligent driving option equipped with dual Thor chips.

Of course, beyond the automakers with in-house algorithm capabilities listed above, algorithm providers such as DeepRoute.ai and Zhuoyue are also developing high-order intelligent driving solutions on NVIDIA Thor, and their solutions may find their way into more models.

Even as automakers and solution providers all push toward platforms with higher computing power, NVIDIA remains, in today's deployment scenarios, the most important computing platform supplier for high-order intelligent driving in the Chinese market.

For example, XPeng Motors, despite having its own chips, stated clearly at a press conference that it will still use NVIDIA chips in some models. And NIO still uses the NVIDIA Orin-X platform in the L60 and L90 models under its LeDao brand.

In addition to the above two schools, there are also suppliers like Huawei and Horizon Robotics that have both software and hardware capabilities.

After its success with Hongmeng Intelligent Mobility, Huawei signed ADS solution cooperation agreements with a number of domestic automakers through its Huawei Qiankun brand. Huawei itself, however, is not inclined to disclose the computing power of its intelligent driving platform. On the technical side, Huawei therefore focuses on iterating its software algorithms and strengthening world-engine training on the cloud computing power side, then deploying the results in ADS 4 models.

One small change is that, although many models still use the MDC 610, models such as the Zunjie S800 have adopted the higher-compute MDC 810 platform.

Of course, apart from Huawei, Horizon Robotics also pushed its latest J6 chip series hard in 2025 through its one-stage end-to-end HSD intelligent driving solution. This solution has already landed in some Chery and Changan models.

Overall, from the vehicle-side computing power perspective, the market has become more fragmented along with the changes in the automotive industry itself. The variables are that Huawei has captured a meaningful share on the strength of its technology and brand, and that some new-force automakers have opted for self-developed chips out of their own strategic considerations - the industry landscape has thus seen an unprecedented reshuffle.

Computing Power Bottlenecks: From the Vehicle Side to the Cloud

From the perspective of the overall development of autonomous driving, both the vehicle side and the cloud are calling for ever stronger computing power support.

After all, in the "uncharted territory" of intelligent driving, the technical solutions may be uncertain, but one trend is very clear: on top of today's end-to-end perception ability, intelligent driving also needs a more efficient and general "world cognition" capability - whether the route is VLA or the world model, this is a threshold that cannot be bypassed.

And cognition is obviously a more complex project.

Meanwhile, as the intelligent driving industry advances collectively, players such as NVIDIA, Tesla, XPeng, Horizon Robotics, and DeepRoute.ai are all targeting Robotaxi, which falls under the L4 framework, while continuing to push L2.

In this situation, the computing power bottleneck is becoming increasingly obvious.

On the vehicle side, many industry insiders have said plainly that whether the approach is VLA or end-to-end, the biggest constraint on real-world improvement is still computing power. In other words, high computing power remains a necessary foundation for the industry's future leap from L2 to L4.

That's why Musk has publicly stated that Tesla's AI5 will deliver nearly ten times the computing power of HW4. Zhou Guang, CEO of DeepRoute.ai, has likewise said that next-generation chips will reach the 5,000 TOPS level, with 10,000 TOPS not far off. And NVIDIA's next-generation DRIVE vehicle-side computing platform will, without question, keep moving toward higher computing power.
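To put those figures in context using numbers quoted earlier in this article (a rough comparison, not a vendor specification): 5,000 TOPS is roughly ten times today's mainstream 508 TOPS dual Orin-X configuration, and about 35 times the 144 TOPS of Tesla's HW3.

\[
\frac{5000\ \text{TOPS}}{508\ \text{TOPS}} \approx 9.8,
\qquad
\frac{5000\ \text{TOPS}}{144\ \text{TOPS}} \approx 34.7
\]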

However, as intelligent driving players push the evolution from L2 toward L4, the more important competitive dimension, compared with the vehicle side, actually lies in the cloud.

In fact, during the evolution of the entire intelligent driving industry, the iteration and evolution of algorithms from generation to generation are first attempted, trained, and deployed in the cloud. Therefore, cloud computing power itself is the key cornerstone for the evolution of intelligent driving algorithms, and greater cloud computing power is also the core support for future algorithm iterations.

An investor who has long followed the intelligent driving industry told us that one important reason Tesla holds such a large advantage in the evolution of FSD is that, beyond its engineering team's strong algorithm capabilities, it also has more ample cloud computing power, which gives it a major edge in areas such as the data closed loop, data training, and simulation verification.

The real situation is that after entering the end - to - end technical system, the technological evolution of intelligent driving has put forward higher requirements for data processing and model training, thus triggering a computing power arms race in the cloud.

On the dependence of autonomous driving development on cloud computing power, Wu Xinzhou, NVIDIA's global vice president and head of its automotive business unit, also stated clearly at the 2024 Beijing Auto Show that the inevitable future is AI cars that will be far simpler to develop than today's autonomous driving cars, with the work more concentrated in the cloud.

In fact, this competition over cloud computing power began in 2023 and intensified in 2024. An industry insider told us that even in 2024, when finances were tight, the head of one new-force automaker decided to expand cloud computing power - and as a result, that automaker saw a leapfrog iteration of its intelligent driving algorithm in 2025.

In 2025, this cloud-based computing power arms race continued.

An engineer working in intelligent driving R&D said that in 2025 cloud computing power was definitely still insufficient. But when a company's intelligent driving department wants to add cloud computing power, it is still bound by the annual computing power budget the company allocates; within that budget, it buys computing resources from cloud service providers.