Luobo Kuaipao (Apollo Go) may have made a mistake with its "sensitive" settings.
A few days ago, several Apollo Go driverless cars in Wuhan suddenly "froze" on elevated roads and main thoroughfares, as if someone had hit a pause button: the cars stopped motionless in the middle of the road. Fortunately, no one was hurt, and the passengers inside got out safely without incident.
Yet a barrage of criticism followed: "Started early but arrived late", "Autonomous driving safety isn't up to standard", "Is this all Baidu's technology can do?"
To be honest, simply criticizing cutting-edge technology might be the safest business on the internet: it earns enormous traffic while enjoying the cover of the politically correct posture of "speaking up for consumers".
But this time may be different. On the technical question of autonomous-driving safety, little actually went wrong. What really caused this "stall" was most likely an overly "sensitive setting" in the safety strategy.
It's not a limitation of ability, but a safety choice
Let's start with a concept. If an L4-level autonomous vehicle breaks down, what is the best way to handle it?
The first thing to pin down is why the L4 designation matters here.
It matters because the liable party differs between conventional L3-and-below assisted driving and L4-level autonomous driving.
To relate this to daily life: the assisted-driving models currently on sale to consumers are intelligent assistance systems at L3 or below, and liability is assigned as follows: "For L3: when the system is activated, the operator/manufacturer is responsible; if the driver fails to take over as required, the driver bears corresponding responsibility." Setting aside the technical details, we only need to understand one thing: if a traffic accident occurs, the first person held responsible is the driver.
In fact, current manufacturers are quite shrewd: when the assisted-driving system encounters a situation it can't handle, it disengages, so the person responsible for the accident is still the driver.
For truly "autonomous" vehicles, that is, vehicles with no one in the driver's seat, the relevant policy draws the line clearly: liability belongs to the manufacturer.
The specific document is the "Implementation Guidelines for the Pilot Program of Access and Road-Going of Intelligent Connected Vehicles". Interested readers can look it up themselves.
Given this division of liability, we can see why a large-scale autonomous fleet must guarantee overall safety: if a fault occurs and a secondary accident then results from a follow-up maneuver such as pulling over, the consequences would be hard to assess.
Manufacturers can handle such incidents at only two levels: the cloud and the vehicle end.
Judging from the whole incident, it is quite clear that the parking instruction came from the cloud.
Many have asked: why not let the cars pull over? Wouldn't that be safer?
Not necessarily.
After the cloud issues the instruction, every vehicle in the fleet stops. To pull over instead, each vehicle would have to execute the maneuver on its own. But without a safety officer in the car, who takes responsibility for the passengers and for the driving as a whole?
Some commentators even claimed that a single vehicle lacks the technical redundancy to pull over. Let's look at what an Apollo Go vehicle actually carries:
• 4 Hesai AT128 lidars (128-line, 200-meter detection range)
• 12 cameras + 6 millimeter-wave radars + 12 ultrasonic radars
• 1200 TOPS of computing power (dual Orin X chips)
This hardware is more than sufficient for pulling over and fully capable of supporting single-vehicle intelligence: perceiving the surroundings, identifying an available parking area, and pulling over autonomously. However, Baidu's architecture has these devices only upload data; the final decision must come back from the cloud.
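The trade-off described above can be sketched as a tiny decision rule. This is an illustrative toy under stated assumptions, not Baidu's actual software; the names `Fallback` and `decide_fallback` are invented:

```python
from enum import Enum, auto

class Fallback(Enum):
    STOP_IN_LANE = auto()  # halt exactly where the vehicle is
    PULL_OVER = auto()     # plan and execute a maneuver to the roadside

def decide_fallback(vehicle_may_decide: bool) -> Fallback:
    """Hypothetical fallback policy for a cloud-gated robotaxi.

    If the vehicle is not authorized to plan its own maneuver, the only
    safe default is to stop in lane, no matter how capable the on-board
    sensors and compute are.
    """
    if vehicle_may_decide:
        # Single-vehicle intelligence: perceive, find a spot, pull over.
        return Fallback.PULL_OVER
    # Decision authority sits in the cloud; absent a cloud-issued maneuver,
    # the vehicle executes the conservative default.
    return Fallback.STOP_IN_LANE

# The Wuhan incident, in this toy model: capable hardware, no local authority.
print(decide_fallback(vehicle_may_decide=False).name)  # STOP_IN_LANE
```

The point of the sketch: the branch taken is determined by where decision authority lives, not by the sensor suite.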
So an emergency stop was in fact the safest operation available at that moment: not a technical limitation at all, but the last step of a playbook for extreme situations.
It also happened in San Francisco, for the same reason
The collective standstill of autonomous driving is not unique to China.
In December 2025, San Francisco suffered a large-scale power outage. Hundreds of Waymo driverless cars were stuck at intersections: some stopped in the middle of the road, some blocked crossroads, forcing human drivers to detour. Waymo explained afterwards that the vehicles could in fact treat dead traffic lights as four-way stops, but under the prudent strategy of early deployment, a vehicle would first send a "confirmation request" to the remote operations team before proceeding. The outage was so widespread that confirmation requests spiked, the remote system couldn't keep up, and the cars got stuck.
Waymo's autonomous vehicles stalled in the street when the traffic lights went out
Sound familiar? It's almost the same story as Apollo Go's: the remote link became the bottleneck, and a prudent strategy turned into systemic paralysis under extreme conditions.
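The bottleneck is easy to see in a toy queueing model. The rates below are made up for illustration; Waymo's real numbers are not public. The backlog of confirmation requests grows whenever they arrive faster than the remote team can clear them:

```python
def confirmation_backlog(seconds: float,
                         arrivals_per_s: float,
                         handled_per_s: float) -> float:
    """Backlog of pending remote-confirmation requests after `seconds`,
    assuming constant arrival and handling rates (a crude fluid model)."""
    return max(0.0, (arrivals_per_s - handled_per_s) * seconds)

# Normal operation: occasional requests, easily cleared -> no backlog.
print(confirmation_backlog(60, arrivals_per_s=0.5, handled_per_s=1.0))   # 0.0

# City-wide outage: hundreds of stuck cars ask at once -> the backlog
# explodes, and every car waits on a reply before it will move.
print(confirmation_backlog(60, arrivals_per_s=20.0, handled_per_s=1.0))  # 1140.0
```

Under normal load the queue drains instantly; in a correlated failure, every vehicle asks at once and the "prudent" confirmation step becomes the single point of congestion.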
Now look at the other extreme. From June 2025 to the present, Tesla's Robotaxis in Austin have had 14 collisions, an accident rate roughly 4 to 8 times that of human drivers. Tesla uses pure vision and an end-to-end neural network, with the vehicle making decisions fully autonomously. Its problems arise not because the cars "dare not move", but because they "dare too much".
Three companies, three strategies, three costs. Tesla gives the most power to the single vehicle, at the cost of a high accident rate; Waymo sits in between, and its conservative confirmation mechanism collapses under large-scale anomalies; Apollo Go is the most extreme: centralized cloud control, with the whole system entering "protective shock" at the slightest stimulus.
So the conclusion is clear. Apollo Go's safety strategy belongs to the "overly cautious" school, the so-called "sensitive setting": when a problem occurs, stop first, then solve it.
This does produce an unsightly result. Although the hardware redundancy of a single vehicle far exceeds that of Waymo and Tesla, decision-making power is locked in the cloud. It's like giving a soldier a gun whose trigger is remotely controlled by a commander thousands of miles away: not that the soldier can't shoot, but that he isn't allowed to shoot on his own.
That said, there is an upside. If I were a passenger in the back seat, I would rather the car stop smoothly when something like this happens, so that I can get out quickly. That is arguably the optimal outcome for both the passenger and overall safety.
Why isn't Apollo Go allowed to make mistakes?
Interestingly, autonomous driving is the most cutting-edge projection of AI capabilities onto the real world.
People still cheer for the emergence of intelligence in large models and tolerate their clumsiness and mistakes; after all, those mistakes have little impact on any individual.
Yet when autonomous driving actually starts serving people and changing how society travels, it gets criticized as clumsy precisely for executing the safest strategy.
It's easy and exciting to accuse Apollo Go of "poor technology", but that accusation might simply be wrong.
When Apollo Go, Pony.ai, and WeRide stand in the most prominent positions in this industry, people become hypersensitive to their slightest misstep. And the path these companies have chosen is actually the hardest one.
The hardest part of autonomous driving has never been making the car move; it has been making the car stop in the safest way.
This article is from the WeChat official account "Xiangxianzhi". Author: Aka. Republished by 36Kr with permission.