
Why "verticalize" the eyes? Chenjing Technology's press conference revealed a truth about robot perception.

晓曦 | 2026-02-03 16:49
The upper limit of perception ability will determine the ceiling of robots.

On January 28, 2026, Chenjing Technology held a highly anticipated product launch in Hangzhou. There was no crowd-pleasing "skill show" by robots, no flashy backflips, no precision needle-threading demonstrations. Instead, the event felt more like an unveiling ceremony for the "infrastructure" of embodied intelligence.

Going beyond a single-function demo, Chenjing Technology presented the industry with a standardized perception solution verified at the industrial level, and formally launched the LooperRobotics brand system along with a full-stack technology matrix: the Insight autonomous spatial intelligence camera, the TinyNav neural navigation algorithm library, and the RoboSpatial spatial perception software platform.

As the integration of embodied intelligence enters deep water, the upper limit of perception will determine the ceiling of robots. Chenjing Technology is attempting to push the boundaries of spatial intelligence and move robot perception from "customization" to "generalization". In this framing, a robot's "eyes" and "brain" are no longer mere components but the core foundation that empowers industries and drives the transformation of the physical world.

I. Hardware Deconstruction: The Counter-Intuitive "Engineering Aesthetics"

Traditional camera and sensor design follows a common convention: horizontal first, with cameras arranged side by side. This sounds reasonable: human eyes are arranged horizontally, we are used to scanning a broad horizon, and autonomous cars need a wide horizontal field of view to monitor the road.

The Insight autonomous spatial intelligence camera that Chenjing Technology launched, however, is a genuine maverick. The R&D team broke the industry's established rule and, counter-intuitively, mounted its three cameras vertically, stretching the lenses' maximum field of view to an astonishing 188°.

This design is not contrarianism for its own sake; it stems from a deep understanding of the pain points of deploying embodied intelligence. Yan Qinrui, co-founder and COO of Chenjing Technology, told a real story: after in-depth conversations with partners, the team found that when star robotic dogs or humanoid robots appeared at large exhibitions such as WAIC, they would "go blind" the moment enthusiastic crowds surrounded them. Why? Because through a traditional horizontal camera, all they could see were shaking human legs and a moving crowd.

For the robot's perception system this is a severe test. It is like blindfolding a person and dropping them into a bustling downtown with no fixed reference (a road sign, a building). The robot instantly falls into an "information island": it no longer knows where it is, and it may get lost or bump into people.

Chenjing Technology's solution is ingenious. They borrowed an engineering trick from drones: if you cannot see around you, look up and down. Thanks to the vertically mounted ultra-wide-angle lens with its 188° field of view, even when the robot is hemmed in by a crowd, the Insight camera can still see the ceiling through the gaps between people's heads and scan the floor beneath its feet. Ceiling texture and floor structure are relatively fixed, so the robot can still localize itself stably in a chaotic crowd.
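As a rough illustration of why a 188° vertical field of view keeps both ceiling and floor in sight, here is a minimal geometric sketch. The mounting height, room height, and the simple two-dimensional model are assumptions for illustration only, not Chenjing's actual optics:

```python
import math

def in_fov(point, fov_deg=188.0):
    """Return True if a 2-D point (x forward, z up, metres, relative to
    the camera) falls inside a vertically oriented field of view of
    fov_deg degrees centred on the forward (+x) axis."""
    x, z = point
    # elevation angle of the point relative to the forward axis
    elevation = math.degrees(math.atan2(z, x))
    return abs(elevation) <= fov_deg / 2.0

# Camera mounted 0.5 m above the floor; ceiling assumed at 3.0 m.
ceiling_point = (0.2, 3.0 - 0.5)   # slightly ahead, near the ceiling
floor_point = (0.2, -0.5)          # slightly ahead, on the floor
print(in_fov(ceiling_point))                # True: ceiling stays visible
print(in_fov(floor_point))                  # True: floor under its feet
print(in_fov(ceiling_point, fov_deg=90.0))  # False for a narrower lens
```

With a conventional 90° lens the same overhead point falls outside the view, which is exactly the "all it sees is legs" failure the paragraph describes.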

Another significant breakthrough of the Insight camera is its ability to withstand severe vibration of up to 24 g. Commonly used robot camera modules can typically only measure accelerations within 8 g. For wheeled robots or vacuum cleaners gliding over flat floors, that is indeed sufficient. But during development on the Unitree robotic dog, the team found a problem: when legged robots perform high-dynamic maneuvers (parkour, jumping, getting up after a fall), the instantaneous vertical impact can easily exceed 8 g. Once the vibration exceeds the sensor's limit, the sensors are "stunned" and the robot immediately becomes "confused" because it can no longer sense its own posture.
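A back-of-the-envelope calculation shows how quickly a landing impact blows past an 8 g sensor. Assuming constant deceleration, the peak load in g is roughly the drop height divided by the stopping distance. The drop height and leg compliance below are assumed values for illustration, not figures from Chenjing:

```python
def peak_landing_g(drop_height_m, stopping_distance_m):
    """Rough peak deceleration (in g) for a body falling drop_height_m
    and stopping over stopping_distance_m, assuming constant deceleration:
    a = v^2 / (2*d) with v^2 = 2*g*h, hence a/g = h/d."""
    return drop_height_m / stopping_distance_m

# A legged robot hopping down ~0.4 m with ~2 cm of leg compliance
print(round(peak_landing_g(0.4, 0.02), 1))  # 20.0 -- well above an 8 g limit
```

Even this modest jump saturates an 8 g sensor, while the harder impacts of parkour or falls push the requirement toward the 24 g range the article cites.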

Therefore, the Insight camera raises its vibration tolerance directly to 24 g. The extra measurement range exists so that future humanoid robots can truly take on dirty, tiring work and endure hardship. The aim is to turn the robot's eyes from "delicate flowers in a greenhouse" into "industrial organs".

II. Moving Computing Power Inward: Packing "Black Technology" into the Edge

On the question of how to process data, the Insight camera's R&D team made a bold decision: put the "brain" inside the "eyes". Onto a circuit board only 30 mm wide, smaller than a cookie, they packed a Digua robot intelligent-computing chip delivering 10 TOPS of AI compute. Why install a "mini-computer" with such strong computing power inside a camera?

First, to cure traditional machine vision's "white-wall phobia". Traditional binocular vision works much like human eyes: it computes distance from the differences between the images seen by the left and right cameras, which makes it highly dependent on surface texture. Faced with a white wall, transparent glass, or darkness, the camera detects no differences, cannot compute distance, and leaves holes in the data.
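This texture dependence follows directly from the classic stereo relation Z = f·B / d (depth from focal length, baseline, and disparity): with no texture there is no measurable disparity, hence no depth. A minimal sketch with illustrative camera parameters (the focal length and baseline below are assumed, not Insight's specifications):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Classic binocular depth: Z = f * B / d. Returns None when no
    disparity can be measured, e.g. on a textureless white wall where
    the matcher finds no left/right correspondence."""
    if disparity_px is None or disparity_px <= 0:
        return None  # no match found -> a hole in the depth map
    return focal_px * baseline_m / disparity_px

print(stereo_depth(700, 0.06, 21))    # 2.0 m on a well-textured surface
print(stereo_depth(700, 0.06, None))  # None: white wall, nothing to match
```

A learned depth network sidesteps the failure case by predicting depth from appearance priors instead of pure geometry, which is why it needs the on-board compute described next.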

The Insight camera instead uses deep neural-network computation: it not only "sees" with its eyes but also "fills in" with its brain. Backed by the chip's computing power, the algorithm learns a large amount of environmental knowledge and can compute precise distances even when facing a white wall. This, however, demands substantial compute, which the robot's main unit can hardly bear on its own.

The deeper significance is to give the robot a "reflex arc". A robot's operation involves an enormous amount of data. If every calculation runs on the main unit, it not only eats into the resources for the robot's logical reasoning but can also introduce transmission delays. With a delay of tens of milliseconds, a running robot may lose balance or collide because it cannot react in time.
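The cost of that delay is easy to quantify: the distance a robot covers "blind" while data is still in flight is simply speed × latency. The speed and latency figures below are assumed for illustration:

```python
def blind_distance_m(speed_m_s, latency_ms):
    """Distance a robot travels before it can react, while perception
    data is still in transit to the main unit and back."""
    return speed_m_s * latency_ms / 1000.0

# A robot trotting at 3 m/s with a 50 ms round trip to the main unit
print(round(blind_distance_m(3.0, 50), 3))  # 0.15 -> 15 cm of blind travel
```

Cutting the loop to a few milliseconds at the edge shrinks that blind distance to under a centimetre, which is the point of closing "perception-action" locally.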

Therefore, Chenjing Technology's R&D team lets the Insight camera complete the complex calculations at the edge and simply tell the main unit "where I am and how far away the thing ahead is". The "perception-action" loop closes within milliseconds, keeping the robot agile and able to reach its target accurately.

In addition, Chenjing Technology is attempting to launch an "equal-rights movement" for VSLAM. For a long time, VSLAM (Visual Simultaneous Localization and Mapping) has been "black magic" mastered only by the giants. For small and medium-sized robot companies, self-developing VSLAM means not just writing code but also solving camera calibration, microsecond-level time-synchronization errors, and even the thermal expansion and contraction of lenses caused by chip heating. Building a full-stack team for this typically takes two to three years and tens of millions of dollars in investment; if the result still falls short, the only fallback is the lidar solution.

Chenjing Technology has packaged this complex, rigorously verified system into a "plug-and-play" standard component. Developers can stop reinventing the wheel and focus their R&D on robot application scenarios rather than on how the robot walks.

III. Cognitive Evolution: The "Biological Instinct" of TinyNav

The Insight camera solves the problem of robots "seeing"; the TinyNav neural navigation algorithm library aims to solve "finding the way". This is more than a technical upgrade: it moves robots from "measuring geometry" to "understanding the environment".

"The goal of TinyNav is to create an honest, open-source, high-performance robot navigation library," said Yang Zhenfei, initiator of the TinyNav project. Traditional robot navigation is essentially "geometric measurement". But in a long corridor, or when the furniture in a room has been moved, pure measurement accumulates error and the robot "gets lost". To handle corner cases, developers typically bolt on large amounts of hand-written rule code, solving problems scenario by scenario; the most widely used navigation library today, ROS Nav2, comprises roughly 140,000 lines of code.

TinyNav chose a bionic path. The R&D team drew on a Nobel Prize-winning discovery: the "grid cells" and "place cells" of the biological brain. Humans do not navigate by counting steps; we rely on feel and features (a red sofa here, a TV ahead, so this must be the living room) to recognize the environment.

To achieve this, TinyNav will make full use of the latest advances in generative world models. It is like giving the robot an "Inception": from just a little real-world scene data, it can generate countless high-fidelity virtual scenes on its own. This ability to draw inferences from a single instance can greatly improve the navigation system's generalization. For the same reason, TinyNav caps its core code at a target of 3,000 lines.

Yan Qinrui explained: "Our team members have personally lived through the process of spending enormous manpower and time writing rules and tuning parameters for different scenarios during the mass production of two kinds of robotic products: autonomous cars and drones. With the rise of large AI models over the past two years, we believe TinyNav's data-over-rules architecture is a native-AI solution. It can achieve efficient software-hardware co-optimization with the Insight camera's powerful built-in AI compute, adapt to more open and general scenarios, and keep evolving as data and compute grow."

At the launch event, Zuo Xingxing, chief robot scientist of Chenjing Technology, made his first public appearance and presented the company's frontier research in multimodal perception, scene understanding, vision-language navigation (VLN), vision-language action (VLA), humanoid motion generation, and dexterous dual-arm manipulation. An expert in robot perception and a rising star in embodied-intelligence research, Zuo is currently an assistant professor in the Robotics Department of the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in the UAE, where he leads the Robotics Cognition and Learning Laboratory.

He previously served as a postdoctoral researcher at the California Institute of Technology (Caltech), a research scientist at Google, and a visiting scholar at ETH Zurich. His research interests span mobile robot perception, 3D computer vision, embodied intelligence, and multi-sensor fusion. For his contributions to and influence in robot perception, he has been invited to serve as an associate editor of the top robotics journals T-RO and RA-L and as an editor for leading robotics conferences such as RSS, IROS, and ICRA.

His arrival provides solid support for Chenjing Technology's work in robotics, spatial intelligence, and embodied intelligence.

To make the system easier to use, Chenjing Technology also developed the RoboSpatial spatial editing platform. It gives robot developers a "building-block game": on a visual interface they simply mark "this is a no-go area", "that is a slow zone", "here is a charging pile", and path planning follows. Operators with no background in advanced algorithms can deploy a complete robot workflow for a large port or industrial park within hours, greatly lowering the barrier to deploying embodied intelligence.
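To make the "mark zones on a visual interface" idea concrete, here is a hypothetical sketch of the kind of annotations such an editor might export and how a planner could consume them. The field names and schema are invented for illustration and are not RoboSpatial's actual format:

```python
# Hypothetical zone annotations, as a visual editor might export them.
site_zones = [
    {"name": "loading dock", "type": "no_go",
     "polygon": [(0, 0), (4, 0), (4, 6), (0, 6)]},
    {"name": "crosswalk", "type": "slow",
     "polygon": [(10, 0), (12, 0), (12, 20), (10, 20)],
     "max_speed_m_s": 0.5},
    {"name": "dock A", "type": "charging", "point": (25.0, 3.5)},
]

def rules_for(zone):
    """Translate one annotation into a planner constraint (illustrative)."""
    if zone["type"] == "no_go":
        return "exclude region from path search"
    if zone["type"] == "slow":
        return f"cap speed at {zone['max_speed_m_s']} m/s inside region"
    if zone["type"] == "charging":
        return "register docking target"

for z in site_zones:
    print(z["name"], "->", rules_for(z))
```

The point of such a scheme is that the operator only edits data, never code: the mapping from marked regions to planner behavior is fixed once by the platform.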

IV. Becoming the "Infrastructure Giant" of Embodied Intelligence

From a spatial-computing company known for algorithms and software to a maker of hardcore sensor hardware: this step by Chenjing Technology looks sudden but was in fact inevitable.

"Why does an algorithm company make hardware? Because spatial intelligence has never been a pure software algorithm; it is systems engineering built on deep software-hardware collaboration," Yan Qinrui said bluntly. "A good SLAM engineer must be full-stack: they need to understand not only the algorithms but also the optical characteristics of camera modules, the deformation effects of structural strength, and the underlying logic of embedded drivers. Software-hardware integration is the only way to break through the obstacles to deployment and guarantee industrial-grade stability."

At the launch event, Hu Wen, co-founder and CEO of Chenjing Technology, proposed a new concept: "spatial intelligence as a service". He believes embodied intelligence is forming a global "super-consensus", and that its market will rival the automotive and mobile-phone industries, reaching trillions of dollars. In this vast industrial chain, Chenjing Technology's role is not to build robots but to be the "infrastructure giant" of embodied intelligence, striving to become a tier-one supplier for the robot era.

"If cars are mobility infrastructure and mobile phones are information infrastructure, then robots will be the physical-task infrastructure of the future," Hu Wen said. Whether it is "labor-like" supplementation on the B side or public services on the G side, both demand a unified, standard, highly available perception base.

When LooperRobotics equips robots with industrial-grade "eyes", biological-grade "brains", and standardized "vestibular systems", embodied intelligence can finally step out of the laboratory's greenhouse and face every industry, and the challenges of the real physical world, with confidence.