Robotaxi "Eyes" Revolution: Three Generations of Lidar Upgrades, Taking Driverless Cars from 0 to 100,000
The future of Robotaxi depends on an ongoing trend of upgrading perception hardware.
The DARPA Grand Challenge from 2004 to 2007 was the origin of today's exciting and booming autonomous driving and intelligent automotive industry.
The teams that dominated the competition, along with the entrepreneurs who emerged from them or were inspired by them, later established a new industry: autonomous driving. Their names are still well-known today: Waymo, Apollo, Pony.ai, WeRide, Momenta...
They created a new species, the Robotaxi. Through generation after generation of iteration on their AI technology stacks, they are closing in on the ultimate goal of "fully driverless" operation and rapidly approaching the tipping point of large-scale commercial deployment.
During this process, technology spillover occurred, directly influencing and kick-starting the intelligent upgrade of the entire automotive industry.
At the industrial-chain level, a new sensor, LiDAR, also made its debut at the DARPA Grand Challenge. Along with the continuous iteration of autonomous driving technology and the Robotaxi form itself, it evolved from an "unexpected surprise" at the beginning, through a "cost challenge" for a period, into an essential underlying component.
Robotaxi and LiDAR, two products springing from the same source, have always complemented each other and been indispensable to each other. Theirs is a cause-and-effect relationship, and they have advanced in synergy throughout.
Now, the LiDAR for the era of large-scale commercialization and popularization of autonomous driving and Robotaxi is on the horizon:
Whoever deploys the new generation of high-performance, high-reliability, low-cost LiDAR first will gain an insurmountable first-mover advantage in future data accumulation and operational efficiency.
The "Twin Growth" of Robotaxi and LiDAR
The DARPA Grand Challenge was held three times in total. In the first challenge, in 2004, limited by the software and hardware of the day, none of the 15 finalist teams completed the course. The best-performing team, CMU, drove only about 12 kilometers before crashing, less than a tenth of the course.
Although it ended in a fiasco, this competition led to the first cooperation between academia and automakers to solve the problem of driverless technology, directly triggering this round of research on autonomous driving. From this perspective, the competition was undoubtedly a success.
In the second challenge, in 2005, the Stanford Artificial Intelligence Laboratory collaborated with Volkswagen, leveraging Volkswagen's resources in Germany, and did something unprecedented: it mounted five single-line LiDARs from SICK on a Touareg SUV:
Although the SICK single-line LiDAR of the time had a maximum detection range of only 25 meters, it still helped the Stanford team complete the entire 212-kilometer course in 6 hours and 54 minutes and win the championship.
The SICK LiDAR traces back to a German startup named Ibeo. Thanks to the unexpected "popularity" of its products at the DARPA Challenge, Ibeo promptly shifted its business focus from traditional industrial surveying to the automotive field, beginning the "symbiotic" relationship among LiDAR, autonomous driving, and Robotaxi.
The results of the 2005 challenge also directly prompted Velodyne, then an audio equipment company, to shift all its resources to automotive LiDAR.
L4, which aims for fully driverless operation, is fundamentally different from L2 assisted driving: it requires absolute system safety with no human intervention throughout the journey. The perception system must therefore offer extremely high reliability, accuracy, and redundancy.
From a technical perspective, after a camera captures data, the pipeline must perform target segmentation, recognition, size estimation, and distance and speed estimation, then compare the results against the ego vehicle's own speed and trajectory before producing an output the planning module can use.
Beyond latency, this traditional modular autonomous driving pipeline also introduces noise and errors, and errors accumulated across several consecutive modules can significantly distort the final result.
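The compounding described above can be sketched numerically. Assuming, purely for illustration, that each module adds independent Gaussian noise to a distance estimate, the final output's error grows with every stage. The module names and noise levels below are hypothetical, not measurements of any real stack.

```python
import random

def modular_estimate(true_distance_m, module_noise_m):
    """Pass a measurement through several modules, each adding its own noise."""
    estimate = true_distance_m
    for sigma in module_noise_m:
        estimate += random.gauss(0.0, sigma)  # each stage corrupts the value
    return estimate

random.seed(0)
true_d = 50.0
# hypothetical per-module noise (meters): segmentation, recognition,
# size estimation, distance/speed estimation
noises = [0.5, 0.8, 0.6, 1.0]
samples = [modular_estimate(true_d, noises) for _ in range(10_000)]
mean_err = sum(abs(s - true_d) for s in samples) / len(samples)
print(f"mean absolute error after 4 modules: {mean_err:.2f} m")
```

Because independent variances add, the combined error exceeds that of any single module, which is exactly the accumulation problem the modular pipeline suffers from.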
Compared with the camera, which "passively" receives environmental information, LiDAR "actively" perceives the environment:
The infrared pulse emitted by a LiDAR generates an echo when it strikes an obstacle, and that echo registers directly in the point-cloud map, largely eliminating "missed detections" at the perception level.
Since the point-cloud map itself contains depth information, it can directly support 3D reconstruction of the environment, eliminating the step of reconstructing the scene from image data.
Meanwhile, after a pulse is sent and its return received, the system can read distance directly from the return time and relative speed from the signal modulation. There is no "recognition" step anywhere in this chain: it is pure measurement, with low noise and simple computation, repeatable hundreds of times per second.
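The "pure measurement" described above boils down to one formula: range is half the round-trip time multiplied by the speed of light. A minimal sketch follows; for simplicity, relative speed here is derived from two successive range readings rather than from signal modulation, so it illustrates the principle rather than any particular LiDAR's method.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Range from a pulse's round-trip time: the light travels out and back."""
    return C * round_trip_s / 2.0

def radial_speed_mps(d1_m: float, d2_m: float, dt_s: float) -> float:
    """Relative (radial) speed from two successive range measurements."""
    return (d2_m - d1_m) / dt_s

# A 1-microsecond round trip corresponds to roughly 150 m of range.
d = tof_distance_m(1e-6)
print(f"range: {d:.1f} m")

# Target closed 1.5 m between pings 0.1 s apart -> -15 m/s (negative = closing).
v_rel = radial_speed_mps(150.0, 148.5, 0.1)
print(f"relative speed: {v_rel:.1f} m/s")
```

No classifier, no scene reconstruction: two multiplications and a subtraction, which is why this loop can run hundreds of times per second with negligible noise.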
Based on its technical characteristics, LiDAR has obvious advantages over the pure vision solution in terms of perception, signal processing, latency, etc. in the autonomous driving system.
So, in the third DARPA Challenge in 2007, five of the six teams that completed the race used Velodyne's mechanical LiDAR, directly establishing LiDAR's position in L4 and higher-level autonomous driving.
Behind this interdependence, however, lay huge room for optimization in cost and reliability, foreshadowing the shared "growing pains" of the two tracks.
From then on, LiDAR and Robotaxi developed through a process of mutual success, dynamic balance, and co-evolution.
The Co-evolution of LiDAR and Autonomous Driving
Strictly speaking, Robotaxi's real origin was the third DARPA Challenge in 2007. Themed the "Urban Challenge" for the first time, it comprehensively tested an autonomous driving system's capabilities in real-world traffic scenarios, including perception, recognition, negotiation with other road users, planning, and control, establishing the basic form of today's Robotaxi:
That is, the hardware of a driverless vehicle consists of cameras, LiDARs, millimeter-wave radars, drive-by-wire systems, computing units, and so on, while the software consists of algorithms for sensor fusion, target localization, recognition, path planning, and behavior planning. Together, software and hardware form the autonomous driving system.
What later generations have done is, in essence, deeper and more refined technological iteration along this basic route.
Most members of the CMU and Stanford teams, champion and runner-up at the DARPA Challenge, later gathered at the Google self-driving car project, the predecessor of Waymo, led by Sebastian Thrun, launching the first commercial exploration of autonomous driving in history.
The period from the project's start in 2009 to 2015 can be regarded as the road-testing stage of autonomous driving and Robotaxi:
The main challenge for Robotaxi at this stage was verifying the reliability of the technical system in real-world road scenarios, which translated into strong demands on LiDAR performance.
For example, the earliest SICK LiDAR had a maximum detection range of only 25 meters and a single line. That limited the vehicle's top speed on the one hand, and forced many units to be used at once on the other... which is why the roughly 212-kilometer course took nearly 7 hours to complete.
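The speed limit imposed by a 25-meter detection range can be estimated from basic braking physics: the vehicle must be able to react and brake to a stop within the distance it can see. The reaction time and deceleration below are assumed round numbers, not measured values for any real vehicle.

```python
import math

def max_safe_speed_mps(range_m: float, react_s: float, decel_mps2: float) -> float:
    """Largest v satisfying v*react_s + v^2 / (2*decel) <= range_m.

    Solves the quadratic v^2/(2a) + t*v - range = 0 for its positive root.
    """
    a, t = decel_mps2, react_s
    return -t * a + math.sqrt((t * a) ** 2 + 2 * a * range_m)

# Assumed values: 0.5 s system reaction time, 6 m/s^2 deceleration.
v = max_safe_speed_mps(25.0, 0.5, 6.0)
print(f"max safe speed: {v:.1f} m/s ~= {v * 3.6:.0f} km/h")
```

Under these assumptions the answer comes out near 15 m/s (about 52 km/h), which helps explain the slow average pace of the 2005 race and why longer detection range was the next demand.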
The system reaction time, safety redundancy limit, installation form, riding experience, etc. were far from meeting the commercialization threshold.
Therefore, during the road-testing stage, what Robotaxi demanded of LiDAR was more lines (channels) and longer detection range.
During this period, Velodyne was the absolute king: its 64-line mechanical LiDAR became an essential sensor for every autonomous driving system in the world. The unit price reached a million yuan at its peak; even at its cheapest it still cost "10,000 yuan per line", and the product was often out of stock.
Baidu Apollo even invested directly in Velodyne at the time just to secure supply.
During this stage, however, Chinese startups also began to gain ground. Relying on the mature domestic supply chain, they started to challenge the leading players on both technology and cost.
For example, Hesai supplied the Pandar64 to Cruise in volume, and RoboSense supplied its 128-line Ruby LiDAR to customers such as Momenta and AutoX.
The LiDAR products of this stage solved the availability of the key sensor for autonomous driving, but they also exposed a pain point: LiDAR had become the single most expensive component in the autonomous driving system.
A driverless vehicle costing one or two million yuan could never replace a traditional taxi selling for only about 100,000 yuan. This was also the driving factor behind the first formation of the LiDAR market landscape.
Since 2016, the leading players in the autonomous driving field have started new attempts and explorations in technology and business with the goal of "commercial operation".
First, in technology, the field shifted from the earlier modular, rule-based, map-prior approach to a light-map, model-based, data-driven one. The system's generalization ability improved as never before, and early fusion of different sensors advanced further.
In business, with "commercial operation" as the goal, they partnered with automakers to build Robotaxi models that meet automotive-grade requirements and are factory pre-installed for mass production.
Moreover, the advanced concepts and technical systems of L4 began to be adopted by L2: mass-produced intelligent assisted driving scaled up rapidly, advancing in parallel with the Robotaxi route.
Automotive-grade qualification was in fact the most stringent requirement that Robotaxi and L2 assisted driving imposed on LiDAR during the trial-operation stage from 2016 to 2024: beyond stronger performance, a product had to clear the basic reliability threshold of "ten years without failure or replacement" and satisfy a series of motor-vehicle safety design specifications in size and shape.
Almost all of the once-prominent overseas star LiDAR players failed at this automotive-grade hurdle.
Domestic players RoboSense and Hesai took the lead, quickly meeting the demand with automotive-grade, semi-solid-state products offering higher precision and a larger field of view. Star products such as RoboSense's M1P and Hesai's AT128 gradually established the perception among ordinary users that "LiDAR = seat belt".