
The "pixel race" in solid-state lidar accelerates as RoboSense releases its VGA large-area-array SPAD-SoC | The Front Line

Huang Nan · 2026-04-23 17:37
LiDAR is evolving from a "functional component" to an "intelligent perception module".

Author | Huang Nan

Editor | Yuan Silai

Recently, RoboSense officially launched its new "Genesis" digital architecture and two flagship chips built on it. "Genesis" is a rapidly iterable SPAD-SoC chip-level platform that provides core support for lidar's large-scale production and high-performance iteration.

The Phoenix chip, built on this architecture, is the world's first automotive-grade SPAD-SoC with monolithic integration of a native 2,160 lines. It achieves a resolution of over 4 million pixels and a detection range of up to 600 meters, with vehicle mass production planned within 2026. The other, the Peacock chip, is the industry's first mass-producible 640×480 all-solid-state large-area-array SPAD-SoC, with mass production planned for the third quarter of 2026. The launch signals that lidar's core competition now centers on chips: by following Moore's Law through a digital architecture, performance can keep improving while costs are optimized.

Scene from the RoboSense Tech Day

While the industry still measures lidar performance by "line count," RoboSense sent a clear signal at its 2026 Tech Day: solid-state lidar is moving from "sparse point clouds" to "image-level perception."

Over the past few years, the pain points of solid-state lidar have concentrated in two dimensions. One is insufficient resolution: sparse point clouds struggle to meet the demands of fine close-range perception. The other is the difficulty of balancing field of view against blind spots; in blind-spot-compensation and robotics scenarios in particular, low-lying obstacles and suspended objects often become "perception black holes." The Peacock chip is RoboSense's solution-level response to these pain points.

The Peacock chip integrates a 640×480 ultra-high-density SPAD area array, delivering VGA-level resolution of about 300,000 pixels and outputting a dense three-dimensional depth image. Compared with the previous-generation 144×192 large-area-array product of about 27,600 pixels, that is a more than tenfold improvement.
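The figures above can be checked with a quick back-of-the-envelope calculation; the array sizes are those quoted in the article:

```python
# Pixel counts for the two SPAD area arrays quoted above.
peacock = 640 * 480    # Peacock chip, VGA-level array
prev_gen = 144 * 192   # previous-generation large-area array

print(peacock)                        # 307200 -> "about 300,000 pixels"
print(prev_gen)                       # 27648  -> "about 27,600 pixels"
print(round(peacock / prev_gen, 1))   # 11.1   -> "more than tenfold"
```

So "about 300,000 pixels" and "more than tenfold" are consistent with the stated array dimensions.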

Previously, mainstream blind-spot-compensation lidars generally topped out at QVGA (320×240) resolution or lower. The leap to VGA-level point-cloud density means the lidar no longer outputs a sparse outline but a depth image that can resolve the edges and structure of objects. At a distance of 10 meters, for example, a low-resolution unit captures a pedestrian only as a cluster of points, while a VGA-level unit can distinguish the relative positions of the head, shoulders, and limbs.

In the vehicle blind-spot-compensation scenario, this resolution gain translates directly into stable recognition of small targets such as children, pets, traffic cones, and tire debris. Combined with an ultra-wide 180°×135° field of view and a minimum detection distance under 5 cm, the sensor can cover both the near-field area around the vehicle body and the lateral far field during low-speed parking or at complex intersections, avoiding the trade-offs between detection distance and near-field blind spots, and between field of view and angular resolution, that traditional blind-spot-compensation lidars often face.
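The pedestrian example can be made concrete with a rough angular-resolution estimate. This is a simplified sketch that assumes pixels are spread evenly across the stated 180°×135° field of view and ignores lens distortion:

```python
import math

h_fov_deg = 180.0   # stated horizontal field of view
h_px = 640          # horizontal pixels of the VGA array

deg_per_px = h_fov_deg / h_px   # ~0.28 degrees per pixel

# Lateral spacing between neighbouring points on a target 10 m away
# (small-angle approximation: arc length = distance * angle in radians)
spacing_m = 10.0 * math.radians(deg_per_px)

print(round(deg_per_px, 3))        # 0.281 degrees per pixel
print(round(spacing_m * 100, 1))   # ~4.9 cm between points at 10 m
```

At roughly 5 cm point spacing, a pedestrian's shoulders (about 45 cm wide) span around nine points, which is why head, shoulders, and limbs become separable at that range.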

In robotics, the Peacock chip's capabilities go well beyond obstacle avoidance. Mobile robots have mostly used low-line-count lidar for navigation and mapping, while manipulation tasks such as grasping and assembly required additional high-precision vision or structured-light sensors; the two systems worked independently, making it hard to unify coordinate frames and time delays. The Peacock's VGA-level depth map has a spatial resolution close to an entry-level depth camera's and comes with millimeter-level ranging accuracy, letting a robot handle both localization and manipulation perception from the same sensor data stream. Unifying the "movement + manipulation" data can significantly reduce system complexity and calibration error in scenarios such as industrial arms grasping irregular workpieces or service robots identifying objects on a desktop.

RoboSense Peacock chip

The Peacock chip also shows clear improvements in field of view and accuracy. Its 180°×135° field of view is among the widest of current all-solid-state lidars. Coupled with a minimum detection distance under 5 cm, it effectively eliminates perception blind spots close to the vehicle body or robot chassis, which matters particularly for scenarios such as spotting low-lying parking locks or working close to a wall.

On accuracy, the Peacock chip carries a high-precision TDC (time-to-digital converter) and a dedicated ranging engine, lifting ranging accuracy to the millimeter level, a six-fold improvement over the previous generation. Support for a variable 10-30 Hz frame rate also lets it align with the output frame rate of mainstream vehicle-mounted cameras, easing the engineering work of multi-sensor time synchronization.
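To see why millimeter-level ranging puts hard demands on the TDC, consider the basic time-of-flight relation. This is a generic illustration of the physics, not a description of RoboSense's implementation:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_distance(t_round_trip_s: float) -> float:
    """Convert a round-trip time-of-flight into a one-way distance."""
    return C * t_round_trip_s / 2

# Timing resolution needed to resolve 1 mm of distance:
# 1 mm one-way corresponds to 2 mm of extra round-trip path.
dt = 2 * 1e-3 / C
print(dt)  # ~6.7e-12 s, i.e. picosecond-scale TDC resolution
```

In other words, distinguishing millimeters requires the TDC to resolve time differences on the order of a few picoseconds, which is why the TDC is singled out as a precision-critical block.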

Currently, the lidar industry is evolving from "functional components" to "intelligent perception modules." VGA is only the starting point of image-level perception; as physical AI raises its demands on three-dimensional data accuracy, perception solutions with higher pixel counts, higher frame rates, and deeper integration will keep evolving.

With the Peacock chip, RoboSense has been first to complete the commercial picture for VGA large-area arrays on the solid-state route, providing a hardware base with clear performance and replicable scaling that helps it build a first-mover advantage in applications such as vehicle blind-spot compensation and robotics.

RoboSense is also advancing fusion sensors in parallel. Its AC1 and AC2 active-camera series, launched in 2025, combine high-resolution CMOS with self-developed SPAD, offering the industry diverse RGBD options. At the Tech Day event, RoboSense CEO Qiu Chunchao demonstrated a set of 2K near-infrared images sensed directly by the Phoenix chip and generated by real-time scanning; the grayscale information and three-dimensional distance information are output synchronously from the same source, at a resolution of 2160×1900.

2K near-infrared images sensed directly by the Phoenix chip and generated by real-time scanning

RoboSense revealed that its true RGBD sensor will launch by the end of 2027; the SPAD chip's pixel density and integration will be further improved, delivering pixel-level dual output of "color + depth."