
Huawei strengthens its hardware, while XPeng breaks through with algorithms. Who will define the future of intelligent driving?

出行一客 · 2026-03-05 13:48
Intelligent driving competition: Technology takes the lead at the beginning of the year

In the first week of March 2026, the Chinese intelligent vehicle sector entered a period of high-density technological competition.

On March 2nd, XPeng Motors released its second-generation VLA (Vision-Language-Action) large model. On March 4th, Huawei Hongmeng Smart Mobility held a technology innovation press conference at which Huawei launched an 896-line lidar. On March 5th, BYD was preparing to unveil its Mega Flash Charge 2.0.

Against the backdrop of Tesla's FSD (Full Self-Driving) approaching the Chinese market and the global acceleration of autonomous-driving deployment, leading domestic companies are taking the technological lead, pushing L3- and L4-level autonomous driving from concept to the verge of large-scale application.

XPeng Motors calls this year a "watershed moment" for intelligent-driving technology, emphasizing that "even moms will love to drive." Huawei uses the 896-line lidar to improve nighttime recognition of low-lying obstacles, proving that the hardware-redundancy route still has technological depth. Li Auto and NIO are advancing their own in-house solutions in parallel, but an industry consensus is emerging: users no longer focus simply on spec stacking; their attention has shifted to the usability, safety, and everyday experience of intelligent driving.

1

Huawei's 896-line Lidar Enhances Recognition of Irregular Obstacles

On March 4th, at Huawei Hongmeng Smart Mobility's technology innovation press conference, the company introduced a new-generation dual-optical-path, image-level 896-line lidar. Following the 192-line lidar and the distributed 4D millimeter-wave radar matrix, this is another generational breakthrough in Hongmeng Smart Mobility's core perception layer, directly raising the bar for mass-production lidar specifications worldwide.

On the same day, Jin Yuzhi, CEO of Huawei's Intelligent Automotive Solution BU, and Yu Chengdong, Chairman of Huawei's Consumer BG, made a rare joint appearance, sending a clear signal that Huawei's technology-supply side and brand-facing side are working in concert. Whatever the internal division of labor, Huawei's automotive business competes in the market with a unified technology base and brand synergy.

"From start to finish, we don't just have lidar; it's a multi-sensor fusion system. By combining the strengths of these sensors, we can overcome the major weaknesses of pure vision in backlight, low-light, irregular-obstacle, and rain/fog/dust scenarios," said Jin Yuzhi.

While the industry mainstream is still at 192-line lidar, Huawei has pushed perception hardware straight to 896 lines, a leap from "point-cloud level" to "image level." The new-generation lidar adopts a dual-optical-path architecture, integrating wide-angle and long-focal-length receiving units into a dual-focal design that produces a high-definition "picture-in-picture" imaging effect, with four times the imaging resolution of the previous generation.

Actual test data shows that the minimum target height the radar can recognize has dropped from 30 cm to 14 cm, matching the chassis height of most passenger cars; the maximum recognition distance has risen from 100 meters to 162 meters; and in darkness without ambient light, the recognition distance for low-reflectivity targets has risen from 42 meters to 122 meters. Both perception accuracy and range are at the top of the industry.

(Small-obstacle scene recognition. Source: the company)

Reportedly, at a distance of 120 meters the system can accurately recognize traffic cones, gravel, and even small suspended obstacles. The recognition distance for irregular obstacles such as toppled traffic cones and fallen tires has increased by 77%, and the recognition distance for low-reflectivity targets such as black vehicles has increased by 190%. Dual-optical-path imaging approaches the effect of vision yet is unaffected by lighting, providing physical-layer redundancy for L3+ autonomous driving and significantly reducing the probability of false braking and missed detections.
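The percentage gains quoted here can be cross-checked against the range figures cited earlier: going from 42 m to 122 m for low-reflectivity targets is roughly a 190% increase, matching the article's claim. A minimal sketch of that arithmetic, with the article's figures hard-coded for illustration:

```python
# Sanity check of the perception gains reported for the 896-line lidar.
# Figures are those quoted in the article; the helper function is illustrative.

def pct_increase(before: float, after: float) -> float:
    """Percent increase going from `before` to `after`."""
    return (after - before) / before * 100

# (before, after) range pairs in meters, as cited in the article
reported = {
    "max recognition distance": (100, 162),
    "low-reflectivity target in darkness": (42, 122),
}

for name, (before, after) in reported.items():
    print(f"{name}: {before} m -> {after} m ({pct_increase(before, after):+.0f}%)")
# 42 m -> 122 m works out to roughly +190%, consistent with the article
```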

(On-site demonstration of 896-line lidar imaging. Source: the company)

In the on-site demonstration, the radar clearly recognized the dynamic details of pedestrians and small animals 55 meters away at night: a person walking three small dogs, with even the wagging of the dogs' tails visible.

"This was unimaginable with our previous lidar," said Jin Yuzhi. "The 192-line lidar we had already commercialized was the best in the industry, but the recognition accuracy for small moving objects like a dog's tail is on a completely different level now."

Jin Yuzhi cited an accident that occurred during the Spring Festival: a vehicle equipped with Qiankun Intelligent Driving was passing through a long stretch of thick fog in Anhui, and the system had reduced the speed to 60 km/h. The driver, believing conditions were safe, manually raised the speed to 100 km/h. By the time the driver saw a stationary vehicle ahead in the fog, it was too late to brake, resulting in a chain collision. "In fact, the sensors had reached their limit and were holding the speed within a safe range, but the driver's excessive speed introduced the risk," said Jin Yuzhi.

According to Yu Chengdong, as of March 3rd, Hongmeng Smart Mobility's cumulative deliveries had exceeded 1.28 million units, and it had ranked first among Chinese auto brands in average transaction price for 14 consecutive months. Of these, cumulative deliveries of the AITO M9 exceeded 280,000 units, and the Zunjie S800 exceeded 15,000 units in the 9 months since its launch.

Reportedly, Hongmeng Smart Mobility's active-safety system has cumulatively avoided more than 3.54 million potential collisions, and Qiankun Intelligent Driving has accumulated 8.76 billion kilometers of assisted-driving mileage. Safe-driving mileage in ADS assisted-driving and manual-driving modes has reached 3.95 times and 2.81 times the industry average respectively, lending data support to the hardware-redundancy route.

Two models form the first batch equipped with the new 896-line lidar: the Zunjie S800 and the AITO M9. Compared with versions carrying the 192-line lidar, the Zunjie S800 costs 20,000 yuan more, starting at 728,000 yuan; the AITO M9 costs 10,000 yuan more, starting at 479,800 yuan.

2

XPeng's VLA Skips L3 and Goes Directly from L2 to L4

(XPeng Motors releases the second-generation VLA. Source: the company)

In the same week that Huawei played its hardware card, XPeng offered a different answer. On March 2nd, XPeng released its second-generation VLA large model.

This is a native multimodal large model of the physical world that does not rely on high-line-count lidar. By fusing visual, auditory, and semantic information, it can make human-like decisions such as recognizing irregular vehicles, detouring around accident scenes, and yielding to animals at night. Real-world tests during Guangzhou's evening rush hour showed traffic efficiency better than traditional L2 intelligent driving and Robotaxi solutions, with overall driving efficiency up 23%, pushing past the ceiling of the intelligent-driving experience through algorithm iteration.

While Huawei piles hardware redundancy into the perception layer, XPeng is betting on the evolution of large models in the decision-making layer. The two routes appear to diverge, but both point to the same end goal: large-scale deployment of L4 autonomous driving.

Regarding the implementation path and schedule of L4, XPeng Motors has chosen to "skip L3 and go directly from L2 to L4".

He Xiaopeng put this judgment forward explicitly at the press conference and submitted related proposals to the 2026 National Two Sessions. The core logic: L4 fundamentally changes who bears responsibility, while L3 faces transitional difficulties at the hardware, software, and regulatory levels.

He Xiaopeng believes that "based on the current global technological development, basically the next step after L2 is L4. Adding an L3 in between actually poses challenges to hardware, software, and laws and regulations."

Currently, XPeng's second-generation VLA still falls within L2-level assisted driving, but its software capability already approaches L4. Robotaxis equipped with the system have begun public-road testing, with trial operation planned for 2026 and global delivery in 2027. Volkswagen has become the launch customer.

On computing power, XPeng has stepped out of the industry's "compute arms race." Through joint optimization of chips, compilers, and models, it has increased the base model's compilation efficiency 12-fold, unlocking a higher level of effective compute.

Liu Xianming, head of XPeng's General Intelligence Center, explained that the value of compute lies in how well it is matched to the information input and the capability of the large model, not in simple numerical stacking. Notably, the VLA base model has cross-domain extension capabilities and can simultaneously power embodied-intelligence terminals such as flying cars and humanoid robots. He Xiaopeng predicts that within the next 3-5 years, cars will become carriers of super intelligence.

Huawei's path is to guarantee the safety of assisted driving through hardware redundancy. The value of the 896-line lidar is that when visual sensors fail due to environmental interference, the system still has a reliable physical perception layer to fall back on. Jin Yuzhi has previously stressed that the core of fused perception is combining "seeing" with "seeing clearly," not choosing a single solution.

The divergence of the two routes ultimately reflects different judgments about the conditions for deploying L4. XPeng believes a qualitative leap in algorithms is enough to clear the hardware threshold; Huawei believes ultimate hardware redundancy is the safety baseline. But whether it is "image-level perception" or a "multimodal large model," both must ultimately pass the test of user experience.

In the long journey towards L4, the boundaries of technology and human responsibility still need to evolve together. During this year's Two Sessions, Lei Jun, the founder, chairman, and CEO of Xiaomi Group, reminded consumers: "Today's assisted driving still highly depends on human drivers, so when driving, you must hold the steering wheel, keep your eyes on the road ahead, and drive safely."

This article is from the WeChat official account "Financial Auto", written by Wang Xin, Song Liwei, and Zhao Cheng, and is published by 36Kr with authorization.