RoboSense showcases the world's first AI robot "Delivery Guy" and demonstrates autonomous packing and unpacking operations on-site | The Frontline
Author | Huang Nan
Editor | Yuan Silai
As AI moves from the digital world into the physical one, how to give machines perception and execution capabilities comparable to a human's has become the industry's central question. CES 2026, themed "Smarter AI for All," also marks the global technology race's entry into the era of embodied intelligence.
At CES 2026, the booth of RoboSense, a leading lidar company, offered a reference answer spanning low-level perception to high-level intelligence. RoboSense presented its breakthroughs in embodied intelligence, its full-stack product matrix, and its multi-scenario ecosystem partnerships, demonstrating a complete chain from underlying technology to end applications.
As the embodied-intelligence race approaches the tipping point of industrialization, the AI robot "delivery guy," making its global debut at the RoboSense booth, drew the attention of many exhibitors. Without human intervention, the robot independently completes nearly 20 complex operation steps, including gift packing, shelving, transporting, unpacking, delivering, and box recycling, smoothly closing the loop on a long-horizon task.
Can independently complete nearly 20 complex operation steps
Behind this capability is RoboSense's self-developed "hand-eye coordination" solution. Centered on the industry's first VTLA-3D operation large model, the solution fuses the force-tactile modality with 3D point-cloud information for the first time. Combined with the Active Camera sensor series and multi-degree-of-freedom dexterous hands, it forms a multimodal fusion perception-and-execution system. It not only markedly raises the robot's operation success rate in dynamic environments but also uses the structured information in 3D point clouds to reduce reliance on large-scale training data, improving the model's generalization.
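To make the fusion idea concrete, here is a minimal toy sketch, not RoboSense's actual VTLA-3D model: an order-invariant encoding of a 3D point cloud (PointNet-style max pooling) is concatenated with a flattened force-tactile reading, and a policy head maps the joint feature to an action. All dimensions, weights, and names are hypothetical.

```python
# Toy sketch of multimodal fusion (hypothetical, not RoboSense's model):
# point-cloud features + tactile features -> one vector -> action.
import numpy as np

rng = np.random.default_rng(0)

def encode_pointcloud(points: np.ndarray) -> np.ndarray:
    """Order-invariant encoding of an (N, 3) point cloud:
    per-point linear projection, ReLU, then max-pool over points."""
    w = rng.standard_normal((3, 32))                 # toy projection
    return np.maximum(points @ w, 0.0).max(axis=0)   # shape (32,)

def encode_tactile(taxels: np.ndarray) -> np.ndarray:
    """Flatten a force-tactile array (e.g. a 4x4 pressure grid)."""
    return taxels.ravel()                            # shape (16,)

def fused_action(points: np.ndarray, taxels: np.ndarray) -> np.ndarray:
    """Concatenate both modalities and map to a 7-DoF action (toy head)."""
    feat = np.concatenate([encode_pointcloud(points), encode_tactile(taxels)])
    head = rng.standard_normal((feat.size, 7))
    return np.tanh(feat @ head)                      # bounded, shape (7,)

action = fused_action(rng.standard_normal((256, 3)), rng.random((4, 4)))
```

The max-pooling step is what makes the point-cloud encoding insensitive to point ordering; the tactile channel supplies contact information the camera cannot see.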
Building on this, a trained task-planning AI breaks complex, abstract tasks down into atomic subtasks and schedules their execution, forming a "dual-system" (fast and slow) collaborative mechanism that covers both high-level task planning and low-level fine control. This is what lets the robot stably and smoothly complete a series of long-horizon, high-precision flexible operations in an environment as complex and real as the CES 2026 show floor.
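The fast/slow split described above can be sketched as follows. This is an illustrative skeleton under assumed structure, not RoboSense's implementation: a slow planner decomposes an abstract goal into an ordered list of atomic subtasks, and a fast controller (stubbed here) executes each one. The task names echo the demo's gift-delivery steps; everything else is invented for illustration.

```python
# Hypothetical fast/slow dual-system skeleton (not RoboSense's code).
from typing import Callable

# Slow system: maps an abstract goal to an ordered list of atomic subtasks.
TASK_LIBRARY: dict[str, list[str]] = {
    "deliver_gift": ["pack", "shelve", "transport",
                     "unpack", "deliver", "recycle_box"],
}

def plan(goal: str) -> list[str]:
    """Slow, deliberate decomposition of a goal into atomic subtasks."""
    if goal not in TASK_LIBRARY:
        raise ValueError(f"no plan for goal: {goal}")
    return TASK_LIBRARY[goal]

# Fast system: one low-level controller per atomic subtask (stubs here).
def make_controller(name: str) -> Callable[[], str]:
    return lambda: f"{name}:done"

def execute(goal: str) -> list[str]:
    """Schedule the atomic subtasks and run each fast controller in order."""
    return [make_controller(step)() for step in plan(goal)]

log = execute("deliver_gift")
```

The point of the split is that the slow planner only reasons about *which* subtasks to run and in what order, while each fast controller owns the millisecond-scale control loop for its own step.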
At the hardware level, the Active Camera sensor series serves as the robot's "eyes": the AC2 handles fine-grained perception for close-range manipulation, while the AC1 focuses on long-range navigation and obstacle avoidance. The multi-degree-of-freedom dexterous hand at the end effector carries multiple arrays of force-tactile sensors, which compensate for visual blind spots and keep operations smooth and accurate.
Currently, RoboSense's embodied-intelligence solution forms an end-to-end technology closed loop that can be applied across scenarios such as logistics, industry, and services.
End - to - end technology closed - loop
At the show, RoboSense also displayed several iterated new products, including the second-generation all-solid-state lidar E1 Gen2, the world's first 3D safety lidar Safety Airy, the ultra-mini digital lidar Airy Lite, and the in-vehicle digital lidars EM4 and EMX, covering full-scenario needs from consumer-grade robots to intelligent driving.
The rapid deployment and iteration of these products rest on RoboSense's distinctive technology stack: at the bottom, independently controllable full-stack chip capability; above it, a "three-track parallel" product matrix that flexibly serves diverse markets. As the first company in the world to roll one million lidars off the production line, RoboSense pairs VCSEL emitters with SPAD-SoC chips in its digital lidar solution, achieving high channel count, small size, and low power consumption together to meet the industry's core requirements for lidar: ultra-miniaturization, automotive-grade mass production, and high reliability.
According to a Yole report, global in-vehicle lidar shipments exceeded 850,000 units in 2024 and are expected to climb to 12 million by 2030. RoboSense's technology roadmap is positioned to match this explosive growth in demand.
Leveraging technology portability and large-scale delivery capability in the consumer robot field, RoboSense announced a partnership with Segway Navimow, a leading garden-robot maker, and jointly displayed the Navimow i2 LiDAR lawn-mower robot, which integrates a digital all-solid-state lidar.
Navimow i2 LiDAR lawn mower robot
In unmanned delivery, the Neolix L4-level unmanned logistics vehicle equipped with the digital high-precision lidar Fairy has reached large-scale mass production and delivery, while the Coco Robotics delivery vehicle equipped with the all-solid-state digital lidar E1R is widely deployed in North America, a flagship commercial case in overseas markets.
In intelligent driving, the flagship SUV Lynk & Co 900, displayed jointly with Geely, showed the technology's potential in high-level intelligent driving. The partnership tracks an industry shift: with L3-level autonomous driving arriving, lidar has moved from a high-end option to a core standard feature. RoboSense currently holds 26% of the global passenger-vehicle lidar market and has partnerships with 30 vehicle manufacturers worldwide, covering more than 100 designated models.