
A doctoral team from Tianjin University has secured tens of millions in financing and brought the cost of "robot eyes" down to the thousand-yuan level | Exclusive from 36Kr

Zhang Zhuoqian · 2025-08-04 15:37
Omnidirectional 3D stereo vision perception technology with multi-camera fusion and hardware acceleration.

Text | Zhang Zhuoqian

Editor | Peng Xiaoqiu

36Kr has learned that Tianjin Huanshi Intelligent Technology Co., Ltd. (hereinafter "Huanshi Intelligent"), which focuses on omnidirectional 3D visual perception technology for robots, recently completed an angel round of financing worth tens of millions of RMB. The round was led by Tianrun Jiacheng, with Chengdu Gaotou participating, and Pareto Forest served as the exclusive financial advisor. The proceeds will be used for product mass production, development of industry-specific solutions, and construction of the company's market and production systems.

Founded in April 2024, "Huanshi Intelligent" is a technology company dedicated to robot perception, planning, and control technologies. Its standard modules for omnidirectional 3D stereo visual perception have been deployed in scenarios such as drones, unmanned vehicles, legged robots, and humanoid robots. The company also provides customized solutions for mass-produced robots.

Huanshi Intelligent Space Perception Module

As the robot industry develops, the demand for perception has shifted from low-level to high-level and upgraded from 3D vision to spatial intelligence, requiring environmental modeling, semantic understanding, situational understanding, and task planning. Sun Hang, founder of "Huanshi Intelligent", said, "Two years ago, OEMs focused on the number of lidars. Now, more and more manufacturers are turning to the pure-vision route." Thanks to advances in AI, pure vision can match or even surpass lidar in 99% of scenarios. Moreover, the robot and autonomous-driving industries share underlying technologies, so visual technology from autonomous driving can be carried over to robots.

However, the biggest bottleneck in moving robots from special-purpose to general-purpose lies precisely in perception. Lidar outputs point clouds with low information density, cannot distinguish colors or materials, and still costs upwards of 3,000 yuan. Depth cameras have a field of view of less than 90° and require hundreds of TOPS of back-end computing power, easily pushing the whole-machine BOM above 10,000 yuan. Pure-vision solutions can cut costs but are constrained by computing power, power consumption, and the need for large amounts of labeled data, making them hard to fit within the 10% cost budget of the whole machine. Multi-sensor fusion, meanwhile, brings superimposed noise, complex calibration, and uncontrollable after-sales costs. The industry urgently needs a general-purpose vision module that "sees comprehensively, computes quickly, and sells cheaply".

The answer from "Huanshi Intelligent" is a standard module 10 cm in diameter: four 200° wide-angle cameras are fused into an 800° omnidirectional field of view, giving dense spatial measurement in every direction and eliminating the "deadlock" blind spots that arise in texture-less scenes such as white walls and glass. The module carries a self-developed hardware 3D spatial computing engine that decouples 3D geometric operations from AI inference. With only 10-30 TOPS it completes global 3D information, semantic understanding, and intention prediction at the edge, with latency below 30 ms and power consumption under 5 W.
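To make the multi-camera fusion idea concrete, below is a minimal sketch of stitching several wide-angle fisheye views into one omnidirectional (equirectangular) panorama. It is not Huanshi Intelligent's implementation: the equidistant fisheye model, the four-camera ring layout, and all parameters are assumptions for illustration only.

```python
# Minimal sketch of multi-fisheye fusion into an equirectangular panorama.
# Not Huanshi Intelligent's implementation; the equidistant fisheye model
# and the four-camera ring layout below are illustrative assumptions.
import numpy as np
import cv2

FISHEYE_FOV = np.deg2rad(200.0)               # per-camera field of view (from the article)
YAWS = np.deg2rad([0.0, 90.0, 180.0, 270.0])  # assumed ring layout of the four cameras

def fuse_to_panorama(images, out_h=512, out_w=1024):
    """Back-project every panorama pixel to a 3D ray, then sample the
    fisheye camera whose optical axis is most aligned with that ray."""
    h, w = images[0].shape[:2]
    cx, cy = w / 2.0, h / 2.0
    focal = (w / 2.0) / (FISHEYE_FOV / 2.0)   # equidistant model: r = f * theta

    # Longitude/latitude grid of the output panorama.
    lon = (np.arange(out_w) / out_w) * 2 * np.pi - np.pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    rays = np.stack([np.cos(lat) * np.sin(lon),       # x
                     np.sin(lat),                     # y
                     np.cos(lat) * np.cos(lon)], -1)  # z

    panorama = np.zeros((out_h, out_w, 3), np.uint8)
    best = np.full((out_h, out_w), -np.inf)           # keep the most frontal camera per pixel
    for img, yaw in zip(images, YAWS):
        # Rotate world rays into this camera's frame (rotation about the y axis).
        c, s = np.cos(-yaw), np.sin(-yaw)
        x = c * rays[..., 0] + s * rays[..., 2]
        z = -s * rays[..., 0] + c * rays[..., 2]
        y = rays[..., 1]
        theta = np.arccos(np.clip(z, -1, 1))          # angle from the optical axis
        phi = np.arctan2(y, x)
        r = focal * theta
        map_x = (cx + r * np.cos(phi)).astype(np.float32)
        map_y = (cy + r * np.sin(phi)).astype(np.float32)
        sample = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
        visible = (theta < FISHEYE_FOV / 2.0) & (z > best)
        panorama[visible] = sample[visible]
        best[visible] = z[visible]
    return panorama
```

In overlapping regions the sketch simply keeps the camera viewing the ray most head-on; a production module would blend seams and run the depth and semantics stages on dedicated hardware rather than in NumPy.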

"In the past, when robots encountered a white wall or other low - texture situations, they were particularly prone to losing navigation and perception information because there were no textures for feature reference," Sun Hang told 36Kr. "The 800° omnidirectional field of view ensures that there is complete redundant information in any direction. Even if the front is all white walls, there is still information input from the back, allowing the robot to know where it is'stuck'."

On the technical path, "Huanshi Intelligent" proposes an omnidirectional 3D perception architecture of "multi-camera fusion + hardware acceleration", together with a self-developed 2D-3D data revival technology that can generate 3D training sets from a customer's historical 2D images without annotation, saving 80% of data costs. The company's unsupervised spatial learning framework lets robots keep evolving during operation, forming a continuously iterating general-purpose world model. Compared with solutions such as Tesla FSD and Intel RealSense, "Huanshi Intelligent" opens a generational gap in field of view, computing power requirements, and cost at the same time: it can be adapted to many scenarios at low cost, meeting the robot industry's demands for low cost and low computing power.
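As a rough illustration of how an unlabeled 2D image archive can in general be "revived" into 3D training data (the company's own pipeline is not public), the sketch below runs an off-the-shelf monocular depth estimator (MiDaS) over each image and back-projects it into a pseudo point cloud. The archive path and pinhole intrinsics are made-up placeholders.

```python
# Hedged illustration of generating pseudo-3D training data from plain 2D
# images, in the spirit of "2D-3D data revival". This is not Huanshi
# Intelligent's method; it uses the public MiDaS depth estimator, and the
# intrinsics/paths below are placeholders, not real calibration values.
import glob
import cv2
import numpy as np
import torch

model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
model.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

FX = FY = 500.0  # placeholder pinhole focal lengths (assumption)

def image_to_pseudo_pointcloud(path):
    """Estimate relative depth for one RGB image and lift its pixels to 3D."""
    bgr = cv2.imread(path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model(transform(rgb))
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False).squeeze().numpy()
    # MiDaS predicts relative inverse depth (larger = closer); flip the
    # ordering crudely so values grow with distance, for illustration only.
    depth = depth.max() - depth + 1e-6
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2.0) * depth / FX
    y = (v - h / 2.0) * depth / FY
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    for path in glob.glob("archive/*.jpg"):      # hypothetical 2D image archive
        cloud = image_to_pseudo_pointcloud(path)
        np.save(path + ".xyz.npy", cloud)        # pseudo-3D sample, no manual labels
```

The resulting clouds are only relative-scale pseudo-labels; the point is that no human annotation is required to turn existing 2D footage into 3D supervision.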

In the year since its founding, "Huanshi Intelligent" has generated millions of RMB in revenue, mainly from scenarios such as drones, garden robots, cleaning robots, and security robots, along with small-batch orders from university robotics projects. Next, the company will push its products into mass production and expand applications in drones, cleaning robots, and university research. Sun Hang revealed that "Huanshi Intelligent" will also upgrade its products toward a deeper end-to-end approach.

The core members of "Huanshi Intelligent" come from a doctoral team at Tianjin University. The team has long-term research and commercialization experience in robotics, combining technology R&D, industrialization, and go-to-market expertise. Founder Sun Hang studied under Professor Qi Juntong and, after graduation, served as technical director and chief architect at start-ups and large military-industrial units, participating in the design and R&D of environmental perception, intelligent control, and swarm algorithms for drones and robots, which are applied in drone swarm products and intelligent robot systems.