
The first year of mass-produced L3 autonomous driving brings the dream of L4 one step closer.

极智GeeTech · 2025-12-17 16:39
The era of "human-machine co-driving" has arrived.

Recently, the Ministry of Industry and Information Technology approved the commercial operation of Level 3 autonomous driving for the first time. The two vehicle models granted Level 3 access are the Changan Shenlan SL03 and the ArcFox Alpha S6, marking the first time in China that vehicles are allowed to let the system assume the driving task under specific conditions. It is foreseeable that 2026 will become the true "Year of Mass Production" for Level 3 autonomous driving.

It is worth noting that the division of rights and responsibilities for Level 3 has now been clarified: when a vehicle is driving autonomously on a designated road section at no more than 80 kilometers per hour and an accident occurs while the system is active, the automaker may bear primary responsibility. The access requirements also state that the sensing equipment of Level 3 vehicles must be factory-installed for mass production; retrofitted vehicles cannot obtain pilot qualification, ensuring the stability of the technology at the source.

The industry generally sees Level 3 as the key transition from "assisted driving" to "fully autonomous driving". Level 4 will go further: within a fixed area, vehicles can operate entirely without human intervention, achieving truly driverless operation.

This small step is in fact the result of a decade-long global technological contest. Germany passed its autonomous driving act as early as 2021, making the automaker liable for accidents that occur while a Level 3 system is active and requiring vehicles to carry a "black box" that records operating data. Mercedes-Benz's Drive Pilot system subsequently launched on German highways, becoming the world's first commercially available Level 3 product. China started later in this field, but it has addressed the core issue of responsibility head-on: instead of taking the old "testing" route, it has gone straight to conditional commercial operation.

The real challenge, however, lies in building trust in human-machine co-driving: when will the system disengage, and can the driver take over in time? Future traffic governance will have to redefine the boundary between machine compliance and the division of responsibility.

Level 3 Sees the Dawn of Scale-up

At the Shanghai Auto Show in April this year, Huawei, together with 11 automakers including Seres, Avita, Chery Automobile, BAIC New Energy, Voyah, Jianghuai Automobile Group, SAIC Group, and GAC Group, discussed Level 3 in front of live CCTV cameras. These automakers span the four major central state-owned enterprises and representative new-force brands, together accounting for almost "half the country" of China's new-energy vehicle industry.

Many automakers have set 2025 as the target for putting Level 3 conditional autonomous driving on the road. A person in charge at XPeng Motors recently stated that the company has obtained a Level 3 autonomous driving road-test license in Guangzhou and has begun regular Level 3 road tests. In 2026, XPeng plans to mass-produce models whose hardware and software both reach the Level 4 level.

Three automakers, Chery, GAC, and Zeekr, have disclosed mass-production schedules for Level 3 conditional autonomous driving. GAC Group has released "Xingling Intelligent Mobility" and announced that its first Level 3 model will go into mass production and sale in the fourth quarter of this year. Chery Automobile plans to mass-produce Level 3 vehicles in 2026 and has released its Falcon Intelligent Driving system; the Falcon 900 carries a new-generation intelligent driving system built on a VLA plus world model, with AI computing power of up to 1,000 TOPS, capable of Level 3 autonomous driving.

With Level 2 now standard and Level 4 still in technological exploration, Level 3, the path once called the "uncanny valley" crossing, has finally seen the dawn of a large-scale breakthrough.

The "Classification of Automobile Driving Automation" (GB/T 40429-2021) issued by the Ministry of Industry and Information Technology divides driving automation into six levels, L0 to L5. Level 3 is defined as conditional autonomous driving: under specific conditions, the vehicle can independently complete all driving tasks, and the driver becomes a supervisor who intervenes only when requested by the system.
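
The six-level scheme just described can be written down as a small lookup table. The sketch below paraphrases the level names and roles for illustration; it is not the standard's official wording:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    name: str
    driving_task: str   # who performs the dynamic driving task
    fallback: str       # who must respond when the system hits its limits

# Paraphrased summary of the GB/T 40429-2021 levels (not the official text).
LEVELS = {
    0: AutomationLevel(0, "emergency assistance", "driver", "driver"),
    1: AutomationLevel(1, "partial driver assistance", "driver + system", "driver"),
    2: AutomationLevel(2, "combined driver assistance", "driver + system", "driver"),
    3: AutomationLevel(3, "conditionally automated driving",
                       "system (within its design domain)",
                       "driver, after a takeover request"),
    4: AutomationLevel(4, "highly automated driving",
                       "system (within its design domain)", "system"),
    5: AutomationLevel(5, "fully automated driving", "system, everywhere", "system"),
}

def who_drives(level: int) -> str:
    """Return who performs the driving task at a given level."""
    return LEVELS[level].driving_task
```

The table makes the Level 2/Level 3 divide explicit: at Level 2 the fallback is always the driver, while at Level 3 the driver only steps in after a takeover request.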

During the Level 2 assisted-driving stage, the driver still firmly holds control, and the system only assists with some tasks in specific scenarios. Adaptive Cruise Control (ACC) automatically adjusts vehicle speed to match the vehicle ahead, enabling automatic following and relieving the fatigue of the driver's right foot on long drives; Lane Centering Control (LCC) keeps the vehicle steadily in the center of the lane, reducing the hazards of lane departure; Automatic Parking Assist (APA), a blessing for novice drivers, automatically plans the parking path and parks the vehicle with ease.
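
The car-following behavior of ACC described above can be illustrated with a toy constant-time-headway controller. This is a minimal sketch: the gains and margins are invented for illustration, and production ACC uses sensor fusion and far more elaborate control:

```python
def acc_acceleration(ego_speed: float,
                     lead_speed: float,
                     gap: float,
                     set_speed: float,
                     time_headway: float = 2.0,
                     k_gap: float = 0.2,
                     k_speed: float = 0.4,
                     max_accel: float = 2.0) -> float:
    """Toy adaptive-cruise controller: returns a commanded
    acceleration in m/s^2 (positive = speed up). All units SI."""
    # Desired gap grows with speed (constant time-headway policy),
    # plus a fixed standstill margin of 5 m.
    desired_gap = time_headway * ego_speed + 5.0
    # Blend a gap-error term with a relative-speed term.
    accel = k_gap * (gap - desired_gap) + k_speed * (lead_speed - ego_speed)
    # Never accelerate beyond the driver's set speed.
    if ego_speed >= set_speed and accel > 0:
        accel = 0.0
    # Clamp to comfortable limits (braking may be twice as strong).
    return max(-max_accel * 2, min(max_accel, accel))
```

For example, tailgating at 30 m/s with only a 10 m gap commands maximum comfortable braking, while driving at the desired gap with a matched lead speed commands zero acceleration.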

However, at Level 2 the driver must continuously monitor the vehicle, cannot keep their hands off the steering wheel for long, and must be ready to take over at any time. For example, when using adaptive cruise and lane-keeping on the highway, once the system meets complex conditions such as a traffic accident or road construction ahead, it cannot make a reasonable decision and the driver must intervene immediately.

Level 3 autonomous driving means that on specific roads, such as urban expressways and highways, the vehicle can drive itself conditionally, continuously executing all dynamic driving tasks within those conditions.

In terms of who bears the driving task, a Level 2 system only assists the driver with some tasks; the driver remains the primary operator, must watch road conditions constantly, and must be ready to take over at any time.

Under specific conditions in Level 3, the vehicle's autonomous driving system can independently complete all driving operations, and the driver's role changes from the main operator to a supervisor.

However, this does not mean that the driver can completely stay out of the picture. When the system detects complex situations that are difficult to handle, such as severe weather like heavy rain or snow that seriously obstructs visibility, or special scenarios such as road construction or traffic control, it will issue a takeover prompt in advance. At this time, the driver must respond quickly and regain control of the vehicle to ensure driving safety.

The key questions are: when will the system disengage, and can the driver take over in time? International data show that users over 50 need more than 6 seconds on average to regain control of the vehicle after being distracted, while the reaction window left after a takeover request is usually under 10 seconds. More soberingly, in low-frequency activation scenarios (some studies suggest Level 3 is available less than 23% of the time on urban roads), drivers tend to develop dependence or complacency, which actually amplifies risk.
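
The takeover handshake discussed here can be pictured as a small state machine: the system requests a takeover, counts down the reaction window, and degrades to a minimum-risk maneuver if the driver never responds. All timings and names below are invented for illustration, not any regulated values:

```python
import enum

class DrivingState(enum.Enum):
    L3_ACTIVE = "system driving"
    TAKEOVER_REQUESTED = "waiting for driver"
    DRIVER_DRIVING = "driver driving"
    MINIMUM_RISK = "minimum-risk maneuver"  # e.g. slow down and stop safely

class TakeoverManager:
    def __init__(self, takeover_window_s: float = 10.0):
        self.state = DrivingState.L3_ACTIVE
        self.window = takeover_window_s
        self.elapsed = 0.0

    def request_takeover(self) -> None:
        """System hits its operational limit (weather, construction, ...)."""
        if self.state is DrivingState.L3_ACTIVE:
            self.state = DrivingState.TAKEOVER_REQUESTED
            self.elapsed = 0.0

    def tick(self, dt: float, driver_hands_on: bool) -> DrivingState:
        """Advance the countdown by dt seconds."""
        if self.state is DrivingState.TAKEOVER_REQUESTED:
            if driver_hands_on:
                self.state = DrivingState.DRIVER_DRIVING
            else:
                self.elapsed += dt
                if self.elapsed >= self.window:
                    # Driver never responded: degrade safely.
                    self.state = DrivingState.MINIMUM_RISK
        return self.state
```

The point of the sketch is the third branch: unlike Level 2, a Level 3 system cannot simply hand back control and hope, so it must own a safe fallback when the reaction window expires.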

The Key Technological Springboard for Autonomous Driving

This year, the intelligent-driving battle in the automotive industry has been fiercer than ever. The moves of mainstream automakers, BYD's Tian Shen Zhi Yan, Geely's Qian Li Hao Han, Chery's Falcon Intelligent Driving, and GAC's autonomous driving plan, all point to one rule in today's automotive circle: "those who master intelligent driving will rule the world".

Since 2023, after the intelligent driving industry set off the waves of BEV and end-to-end technologies, automakers have gradually integrated AI neural networks into the perception, planning, and control pipeline. Compared with traditional rule-based solutions, the end-to-end approach driven by AI and data has a higher capability ceiling.

Beyond the end-to-end model, automakers have supplemented it with external models, such as large language models and vision-language models (VLMs), to provide stronger environmental understanding and thereby raise the ceiling of intelligent driving capability.

Meanwhile, VLA is becoming an important piece. The VLA (vision-language-action) model offers stronger scenario reasoning and generalization, which matters greatly for the evolution of intelligent assisted driving. In the long run, in the leap from Level 2 assisted driving to Level 4 autonomous driving, VLA is expected to be the key springboard.

New-force automakers are the most aggressive about enhancing vehicle intelligence. At the NVIDIA GTC 2025 conference, Li Auto released a new-generation autonomous driving architecture, MindVLA. By integrating spatial, language, and behavioral intelligence, it gives the autonomous driving system 3D spatial understanding, logical reasoning, and behavior-generation capabilities, with mass production planned for 2026.

Before VLA, "end-to-end + VLM" was the mainstream technical solution in the intelligent driving industry. Driving demands a multi-modal perception and interaction system: the user's vision and hearing, changes in the surrounding environment, and even personal emotional fluctuations all bear on driving behavior. In the "end-to-end + VLM" architecture, the end-to-end system handles the full perception-decision-execution pipeline, while the VLM serves as an auxiliary system providing understanding and semantic analysis of complex traffic scenes; the two remain relatively independent.

For example, Li Auto's "end-to-end + VLM" dual-system architecture is based on the theory of two human thinking systems proposed by Daniel Kahneman in "Thinking, Fast and Slow". It integrates an end-to-end system (equivalent to System 1) and a VLM model (equivalent to System 2) into the autonomous driving solution, giving the vehicle-side model a higher performance ceiling and more development potential.

System 1, the end-to-end model, is an intuitive, fast-reacting mechanism: it maps sensor inputs (such as camera and lidar data) directly to a driving trajectory with no intermediate steps, as a single integrated "One Model". System 2 is implemented by a 2.2-billion-parameter vision-language model, and its output is combined with System 1's to form the final driving decision.
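
One rough way to picture the dual-system split is a fast loop that always produces a trajectory, plus a slower VLM loop that occasionally contributes scene-level advice. The functions below are invented placeholders to show the control flow, not Li Auto's actual interfaces:

```python
def fast_end_to_end(sensor_frame: dict) -> list:
    """System 1: maps sensors straight to a short trajectory, runs every frame.
    Here a fixed placeholder list of (x, y) waypoints stands in for the model."""
    return [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]

def slow_vlm_advice(sensor_frame: dict) -> str:
    """System 2: slower scene-level reasoning; a canned string stands in
    for the VLM's semantic analysis of the scene."""
    return "construction zone ahead: reduce speed"

def drive_step(frame_idx: int, sensor_frame: dict, vlm_period: int = 10):
    """One control tick: System 1 always runs; System 2 only every Nth frame."""
    trajectory = fast_end_to_end(sensor_frame)   # always available
    advice = None
    if frame_idx % vlm_period == 0:              # System 2 runs at ~1/10 the rate
        advice = slow_vlm_advice(sensor_frame)
    # The final decision combines the fast trajectory with any slow advice.
    if advice and "reduce speed" in advice:
        # Toy slowdown: halve forward progress along the trajectory.
        trajectory = [(x * 0.5, y) for x, y in trajectory]
    return trajectory, advice
```

The design point this illustrates is the article's "relatively independent" remark: System 2 only modulates the plan when its slower output happens to be available, and the fast path never waits for it.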

XPeng Motors divides its cloud-based model factory into four workshops, handling, in sequence, pre-training, post-training, model distillation, and vehicle-side deployment. Li Auto instead first pre-trains the vision-language base model, then distills it, and finally performs post-training and reinforcement learning on driving-scenario data. The two routes carry different training costs and efficiencies, and that difference is what sets the two automakers so sharply apart in the market.

Although "end-to-end + VLM" significantly raised the level of intelligent driving, many problems remain: the end-to-end and VLM models are hard to train jointly, 3D-space understanding is insufficient, driving knowledge and memory bandwidth are lacking, and the multi-modality of human driving is hard to handle.

VLA, through a unified large-model architecture, seamlessly connects perception, decision-making, and execution into a closed loop of "image input → semantic understanding → human-like decision-making → action output", raising both the upper and lower bounds of intelligent driving and unifying space, behavior, and language.

In reasoning, the VLA model goes far beyond "end-to-end + VLM". VLA integrates the perception capabilities of the VLM with the decision-making capabilities of the end-to-end model and also introduces chain-of-thought technology. This gives it global context understanding and human-like reasoning, so that in special scenarios, such as complex traffic rules, tidal lanes, and long-horizon sequential reasoning, it can think and judge like a human driver.
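
The closed loop of image input, semantic understanding, human-like decision-making, and action output can be sketched as a single pass that keeps its intermediate reasoning as an explicit chain of thought. Everything below is a schematic invention (the scene logic is faked with rules so the flow is visible), not any vendor's model:

```python
from dataclasses import dataclass, field

@dataclass
class VLADecision:
    action: str
    chain_of_thought: list = field(default_factory=list)

def vla_step(image_tokens: list, instruction: str) -> VLADecision:
    """One unified pass: perception, reasoning, and action in a single model.
    Rule-based stand-ins replace the real network here."""
    cot = []
    # 1. Semantic understanding: what is in the scene?
    scene = ("tidal lane, arrow currently against us"
             if "tidal" in instruction else "clear road")
    cot.append(f"scene: {scene}")
    # 2. Human-like reasoning over traffic rules and longer horizons.
    if "against us" in scene:
        cot.append("tidal-lane direction is inbound now; entering is illegal")
        action = "stay in current lane"
    else:
        cot.append("no constraint; follow route")
        action = "proceed"
    # 3. Action output, with the reasoning trace attached for inspection.
    return VLADecision(action=action, chain_of_thought=cot)
```

The contrast with the dual-system setup is that here the reasoning trace and the action come out of one pass, rather than one model advising another.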

On reasoning horizon, for example, a traditional rule-based solution can only reason over about 1 second of road information before making a control decision; an end-to-end 1.0 system can reason over the next 7 seconds; a VLA model can reason over dozens of seconds, markedly improving the decision-making and adaptability of intelligent assisted driving.

For this reason, the industry regards VLA as the main technical form of end-to-end 2.0. VLA is still in development: besides DeepMind's RT-2, there are models such as OpenVLA, Waymo's EMMA, Wayve's LINGO-2, and NVIDIA's NaVILA. Of these, Waymo's EMMA and Wayve's LINGO-2 mainly target the vehicle field, while RT-2, OpenVLA, and NaVILA mainly target robotics.

The Increasingly High Threshold of Vehicle Intelligence

For automakers, self-developing as much of the vehicle-intelligence stack as possible, and thereby understanding every part of the system thoroughly, is what gives each of them confidence.

In the era of traditional automobiles, vehicle manufacturers did not develop software; they relied on suppliers for "black-box" solutions that bundled hardware and software to deliver fixed functions. With the AI era, however, centralized electronic-electrical architectures, high-computing-power chips, and large models have entered vehicles one after another. The automobile has turned from an "electromechanical product" into an "intelligent agent", and user needs and experience have been redefined.

Users' focus in intelligent driving has long shifted from "whether it can drive" to "whether it can drive safely". In complex scenarios such as yielding to pedestrians at crosswalks or large vehicles changing lanes at intersections, the vehicle can help users understand the system's decision process through real-time interaction and visualized actions; on abnormal operations, it can promptly explain the system's judgment and response; and in intelligent driving scenarios, it can automatically adjust its lights to the driving state, conveying its intent to surrounding vehicles and pedestrians and enhancing safety.

For different automakers, the projects best suited to self-development fall into three categories. First, core competitive technologies such as power batteries, electric drive systems, and autonomous driving algorithms, which bear directly on performance and safety and are key to brand competitiveness. Second, differentiating technologies, those that visibly set a maker apart from competitors, such as unique user-interface design and Internet-of-Vehicles services, which build consumer loyalty. Third, high-cost components such as batteries and high-performance autonomous driving chips, where self-development reduces dependence on external suppliers and lowers cost.

Although self-development is becoming a trend, the path is rarely smooth: it brings high R&D costs, long technological accumulation, and unknown market risk. For most automakers, the test is finding the best balance between self-development and outsourcing, keeping a technological lead while controlling cost. And as technology keeps iterating, sustaining the innovative capacity of self-developed technology and avoiding being eliminated by the market is another question automakers must face.

Since self-development projects require a large amount of time and capital investment, automakers must find a balance between independent R&D and technological cooperation. Therefore, automakers first need to make strategic plans and set priorities, clarify the