
From Tsinghua, Made by Tsinghua Alumni: A Pair of "Robot Eyes" Secures Funding

Friends of 36Kr | 2025-11-04 11:30
A financing wave rarely seen in recent years.

This round was led by Linge Venture Capital, with industry investor Hengdian Capital participating. Existing shareholder Yuanqiao Capital increased its investment beyond its original allocation, and Shenlan Capital served as the exclusive long-term financial advisor.

If we were to review the primary market in 2025, "embodied intelligence" would surely be the hottest investment theme in the minds of many.

A few months ago, in the article "Even 'Aunt Xu' Is Investing in Embodied Intelligence," I reviewed the financing landscape of the first seven months of 2025. Statistics showed 123 financing events in the embodied intelligence track during that period, 46 of which exceeded 100 million yuan. In the single week of July 7-13, six start-ups in the field officially announced financings of over 100 million yuan. My conclusion at the time was that in scale, frequency, and quantity, it amounted to a financing boom rarely seen in recent years.

Several months on, the overall trend still shows no obvious signs of cooling. According to CVSource data from Touzhong Jiachuan, the first ten months of 2025 saw 195 new financing events in the embodied intelligence track, 69 of which exceeded 100 million yuan, a significant increase over the seven-month figures. The description "urgent, hot, numerous, and fast" still applies.

However, continued popularity does not mean that the thinking of entrepreneurs and investors has stayed the same. The financing boom of the first seven months of 2025, especially after the Yizhuang Marathon in April, is usually read as a continuation of the "large-model investment boom." The basic logic: after 2025 began, "model capabilities improved greatly"; robots are the application that best reflects this "substantial improvement in model capabilities"; and "the speed and acceleration of the improvement exceed market expectations." Therefore, although embodied intelligence still lacks "specific quantitative indicators for valuation and pricing" and its main application scenarios remain concentrated in scientific research and demonstration, it is still "a foreseeable certainty" relative to other tracks and worth "laying out early."

As the heat rises and the financing rhythm grows even more intensive, a new situation has emerged: first-tier enterprises have accumulated enough capital, team strength, technology, and supply-chain capability to form barriers. Under this premise, it is very difficult for newcomers building complete robots (the "本体," or robot body, which demands more in datasets, compute, technology reserves, and scenario expansion) to find breakout opportunities. The investment focus will inevitably shift "upstream" and concentrate on key components such as robot joints and flexible materials. For example, Zhiyuan Robotics, a leading embodied-intelligence company, revealed in August that it had launched a CVC business targeting early-stage projects, one main line being upstream companies represented by Qianjue Robotics, Linghou Robotics, and Fuxing Motor.

Today, another promising company upstream in the robot industry chain has completed a new round of financing. According to Touzhongwang, Saigan Intelligence, which focuses on "robot perception," recently completed a Pre-A round. The round was led by Linge Venture Capital, with industry investor Hengdian Capital participating; existing shareholder Yuanqiao Capital increased its investment beyond its original allocation, and Shenlan Capital served as the exclusive long-term financial advisor. The proceeds will fund core technology iteration, strategic expansion of the product line, and large-scale commercialization, further consolidating the company's technical edge in multi-modal spatial sensors and deepening the application of its products in robot scenarios.

"Robot Eyes", Made by Tsinghua

For humans, eating, drinking, and doing housework smoothly depends largely on vision, which tells us where the rice bowl is, how far away it is, and how long it will take to reach our mouths. The same logic applies to robots. To let robots enter human production and living scenarios smoothly, researchers have spent decades iterating on robots' "eyes," trying to make robots perceive our physical world more efficiently, concretely, and accurately.

Judging from existing products, mainstream technical solutions fall into two types. One is "multi-sensor fusion": the robot gathers varied data through a combination of RGB cameras, lidars, and inertial measurement units (IMUs), then improves system accuracy by fusing that data. The other is teaching robots to "look actively": the robot moves its camera to observe objects or the environment from different angles (for example, when passing obstacles) to obtain more complete and clearer information.

However, both solutions have obvious drawbacks. "Multi-sensor fusion" depends heavily on mechanically stacking independent sensor subsystems, consuming large amounts of labor, code, and compute just to reach basic behavioral decision-making. The latter is no better: if a robot is to go beyond naively processing single static images, decisions must land within milliseconds to seconds and require more complex motion-planning algorithms, which again means a huge amount of computation.
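To give a loose sense of what "improving accuracy through data fusion" means in the simplest case, the sketch below combines two noisy distance estimates (say, one from a camera's depth estimate and one from a lidar return) by inverse-variance weighting. The sensor values and variances are hypothetical illustrations, not Saigan Intelligence's actual pipeline, which fuses far richer data:

```python
# Minimal sketch of inverse-variance sensor fusion.
# Sensor readings and variances below are hypothetical illustrations.

def fuse(estimates):
    """Fuse (value, variance) pairs into one estimate.

    Each estimate is weighted by the inverse of its variance, so the
    more trustworthy sensor dominates; the fused variance is smaller
    than any single sensor's variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused_value, fused_var

camera = (2.10, 0.04)  # camera depth estimate: 2.10 m, variance 0.04
lidar = (2.02, 0.01)   # lidar range estimate: 2.02 m, variance 0.01

fused_value, fused_var = fuse([camera, lidar])
print(fused_value, fused_var)  # fused estimate sits nearer the lidar reading
```

Even this toy version shows why naive fusion stacks up cost: every additional sensor needs calibration, a noise model, and synchronized data before its readings can enter the weighted sum.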

This is exactly the entrepreneurial opportunity that Saigan Intelligence has identified.

Saigan Intelligence was founded in 2024, and its founding team has a strong Tsinghua pedigree: founder and CEO Fu Chen earned a bachelor's degree in Measurement and Control Technology and Instruments and a doctorate in Optical Engineering, both from Tsinghua University. Before founding Saigan Intelligence, he co-founded Furi Optics (a leading company holding a 30% share of the 2D-lidar market for robots) and led its lidar R&D. After becoming CTO, Fu Chen built the product department, oversaw Furi Optics' overall business processes, product planning, R&D, and management, and established a complete rapid-response mechanism for products and technical services built around customer needs.

This experience convinced Fu Chen that solving "robot perception" requires a technical path of "multi-modal fusion + perception-decision integration": on top of "multi-sensor fusion" (multi-source data such as 3D lidar and surround-view cameras), pair a localized compute platform with streamlined VLA (vision-language-action) models and algorithms to build a "central nervous system" for robots, one that directly completes target feature generation, environmental perception, and basic behavioral decision-making.

Meanwhile, to optimize system performance, the team also needs to self-develop a new lidar architecture that generates wide-field-of-view, high-density point clouds at extremely low cost, laying a solid data foundation for accurate robot perception. Such a system can simultaneously meet requirements like obstacle avoidance, navigation, and target recognition; achieve high-precision positioning, obstacle recognition, and path planning in dynamic environments; and bring decision-response times down to the millisecond level, offloading the robot's "brain" so it can respond quickly to diverse, complex scenarios.

It is fair to say that if this idea lands, Saigan Intelligence will have the chance to build a "full-scenario perception ecosystem solution" quite different from today's "customized perception solutions," and better matched to the embodied-intelligence track's current need to enter more life-service scenarios. According to Touzhongwang, the completion of this new round of financing marks smooth progress along this technical route:

In full-scenario perception coverage, Saigan Intelligence's self-developed Octa product series provides accurate perception for a variety of industrial robots, pointing toward a future "full-scenario, long-term sustainable" perception solution.

(Product schematic diagram, source: Saigan Intelligence)

At the perception-decision-execution closed loop, Saigan Intelligence's newly launched multi-modal sensing central architecture delivers high-density point-cloud coverage, end-to-end visual capability, and structured target-information generation in an ultra-compact package, letting mobile robots connect "perception-decision-execution" seamlessly while combining high performance with low cost. Fu Chen said: "This is just the starting point for scenario implementation. As multi-modal fusion technology iterates, we will keep launching solutions suited to more industries."

A Vote of Confidence from the "Tsinghua Circle + Big Industrial Players"

The new investors in this round also carry a distinct Tsinghua imprint. Lead investor Linge Venture Capital, founded in 2021, is an early-stage-focused investment institution that uses top-tier universities and research institutes as its fulcrum, concentrating on systemic innovation and upgrading opportunities in new energy, new materials, and advanced manufacturing.

The same investment logic shows in the other participant, Hengdian Capital, the industrial investment platform of Hengdian Group. Drawing on the group's deep manufacturing background, it has invested in many advanced-manufacturing projects in recent years, such as Jiangsu Maizheng Intelligent Equipment and Juli Automation in industrial automation, and has also laid out considerably in smart healthcare, for example surgical-robot developer Agile Medical and pan-vascular interventional-device developer Zhongtian Medical. These existing portfolio lines all show strong demand for "robot vision" and "robot perception."

There is also an interesting fact: Yuanqiao Capital, the existing shareholder, which grew out of Supor's industrial capital, has been one of the low-key but active investors in embodied intelligence in recent years, with a particular preference for upstream robot components. In early 2024 it invested in Seer Robotics, which has ranked first in global robot-controller sales for two consecutive years and is currently preparing to list on the Hong Kong Stock Exchange. Yuanqiao Capital's official account once stated plainly in a research report:

"There are great opportunities for enterprises in robot core components. Not only is there a 4 - 20 - fold growth potential in the future terminal market, but more importantly, there is a need for technological innovation and cost reduction. Assuming that Musk's product expectations are to be met, the overall cost of robots needs to be reduced by 60 - 80%... There is a lot of innovation in technological routes, which in turn creates opportunities for the growth of many technology - innovative enterprises."

In short, the participation of Linge Venture Capital and Hengdian Capital in this round confirms the consensus noted at the outset that embodied-intelligence investment is moving upstream, and with it the necessity, foresight, and urgency of Saigan Intelligence's technical solution. The continued participation of Yuanqiao Capital, an existing shareholder and itself a large industrial player, suggests that progress along Saigan Intelligence's chosen technical route has met or even exceeded expectations.

According to Touzhongwang, beyond the announced self-developed lidar OCTA, Saigan Intelligence has made breakthroughs in several core technologies, significantly improving the integration efficiency and performance of its machine-vision solutions. Productization is advancing rapidly, and the company has entered small-batch delivery for leading customers. With this round completed and new products rolling out, Saigan Intelligence will keep pushing multi-modal perception technology forward, helping robots break out of single fields, dissolving the barrier between industrial and civilian robot perception technologies, and letting machines truly "see" the three-dimensional world and understand ever-changing scenarios.

This article is from the WeChat official account "Dongshisi Tiao Capital" (ID: DsstCapital), written by Pu Fan and published by 36Kr with authorization.