
Self-variable has raised nearly 1 billion yuan in Series A+ financing, with participation from Alibaba Cloud, Guoke, Guokai, Sequoia Capital, and Yingce.

南山战新投 · 2025-09-08 18:30

Self-variable Robot recently completed a Series A+ financing round of nearly 1 billion yuan. The round was led by Alibaba Cloud and Guoke Investment, with participation from Guokai Financial, Sequoia China, and Yingce Capital. Existing shareholder Meituan Strategic Investment increased its investment beyond its pro-rata share, while Lenovo Star and Legend Capital continued to invest.

This is Alibaba Cloud's first investment in an Embodied AI company. Alibaba Cloud speaks highly of Self-variable's technology and will support the company in a variety of ways. The funds will be used for the continued training of Self-variable's fully self-developed general-purpose Embodied AI foundation model and for the R&D of its hardware products.

Since its founding at the end of 2023, Self-variable has pursued a technological path to general Embodied AI through a unified end-to-end large model. It recently introduced the fully self-developed wheeled dual-arm humanoid robot Quanta X2, designed to be driven by multimodal large models. Self-variable's approach of developing software and hardware in parallel, together with its forward-looking technical vision and results, has also won recognition from state-backed investment platforms, world-leading investment institutions, and industrial capital.

As the first Chinese company to realize an end-to-end Embodied AI large model, Self-variable has built its self-developed "WALL-A" series of VLA (Vision-Language-Action) large manipulation models and a unified framework for cognition and action. Within a single unified representation space, the model handles perception, reasoning, and action together and performs cross-modal causal inference and action decisions directly, so that the robot can ultimately think and work like a human. The WALL-A model has already demonstrated zero-shot generalization on new task types it was never trained on.
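The article does not disclose WALL-A's internals, so the following is only a minimal sketch of the general VLA idea described above: visual and language tokens are projected into one shared representation space and processed by a single backbone that also emits an action. All module names, feature sizes, and the action parameterization are illustrative assumptions, not the actual WALL-A design.

```python
# Hypothetical sketch of a VLA-style policy: perception, language, and an
# action query share one representation space and one transformer backbone.
import torch
import torch.nn as nn

class ToyVLAPolicy(nn.Module):
    def __init__(self, d_model=256, n_actions=7, n_layers=4, n_heads=8):
        super().__init__()
        # Project image-patch features and text-token features into one shared space.
        self.vision_proj = nn.Linear(768, d_model)   # e.g. ViT patch features (assumed size)
        self.text_proj = nn.Linear(512, d_model)     # e.g. language-encoder features (assumed size)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # A learned query token whose output is decoded into a robot action.
        self.action_query = nn.Parameter(torch.zeros(1, 1, d_model))
        self.action_head = nn.Linear(d_model, n_actions)  # e.g. end-effector deltas + gripper

    def forward(self, image_feats, text_feats):
        # image_feats: (B, N_img, 768), text_feats: (B, N_txt, 512)
        b = image_feats.size(0)
        tokens = torch.cat([
            self.vision_proj(image_feats),
            self.text_proj(text_feats),
            self.action_query.expand(b, -1, -1),
        ], dim=1)
        # Perception, instruction, and the action query attend to one another
        # inside a single unified representation space.
        fused = self.backbone(tokens)
        return self.action_head(fused[:, -1])  # action decoded from the query token

# Usage: policy(torch.randn(1, 196, 768), torch.randn(1, 16, 512)) -> (1, 7) action
```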

At the same time, the company was the first to realize an end-to-end Embodied Chain-of-Thought reasoning framework. Starting from multimodal inputs, the model performs deep reasoning and generates multimodal outputs, forming a complete closed loop of autonomous decision-making, execution, exploration, and reflection. It tightly couples language understanding, visual perception, and action execution into a reasoning process closer to human thinking, breaking through the bottlenecks of long-horizon, multi-step tasks, significantly improving task completion rates, and greatly expanding the robot's ability to handle complex real-world scenarios.
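To make that closed loop of decision-making, execution, exploration, and reflection concrete, here is one possible shape of such a control loop. The interfaces (plan_step, observe, execute) are hypothetical placeholders, not Self-variable's actual API.

```python
# Sketch of a decide-execute-explore-reflect loop around an embodied
# chain-of-thought model; all interfaces are hypothetical.
from dataclasses import dataclass, field

@dataclass
class EpisodeState:
    instruction: str
    observations: list = field(default_factory=list)
    reasoning_trace: list = field(default_factory=list)

def run_task(instruction, model, robot, max_steps=50):
    state = EpisodeState(instruction=instruction)
    for _ in range(max_steps):
        obs = robot.observe()                  # multimodal input (images, proprioception)
        state.observations.append(obs)
        # The model reasons over language + perception and emits both a textual
        # "thought" and a low-level action (embodied chain of thought).
        thought, action, done = model.plan_step(state)
        state.reasoning_trace.append(thought)
        robot.execute(action)
        # On the next iteration the model sees the new observation and can
        # reflect on whether the previous step had the intended effect.
        if done:
            break
    return state
```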

In the middle of this year, the company achieved the first use of an Embodied AI large model to control a dexterous hand for complex manipulation. Self-variable had previously released a video in which its self-developed large model controls a dexterous hand to finely grasp and deal out elastic, deformable objects such as playing cards.

To promote research on and application of Embodied AI large models, Self-variable has now open-sourced its Embodied AI foundation model "Wall-OSS" for developers and released the accompanying training code, allowing developers around the world to quickly adapt and deploy it on their own devices.

Wall-OSS has strong generalization and reasoning capabilities and outperforms other foundation models on long-horizon manipulation tasks. As a multimodal foundation, it also shows solid causal reasoning, spatial understanding, and reflection capabilities.

Wall-OSS is an open-source Embodied AI foundation model trained on large amounts of real-world data. On the architecture side, the team developed an innovative "Shared Attention + Expert Routing (FFN)" design that transfers the VLM's knowledge losslessly into the manipulation model and deeply couples language and action. On the training side, a new three-stage paradigm of "discrete first, then continuous, then combined" ensures that the VLM's cognitive capabilities are stably and losslessly transferred and extended to physical action. In addition, a unified cross-level chain of thought enables arbitrary forward mappings across levels of abstraction, so the model can switch seamlessly between high-level decision-making and low-level execution within a single differentiable framework.
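The article gives only the name "Shared Attention + Expert Routing (FFN)"; one plausible reading, sketched below, is a transformer block whose self-attention is shared across all modalities while the feed-forward sub-layer routes each token to a modality-specific expert (e.g. language vs. action). This is an illustrative interpretation, not Wall-OSS's published implementation.

```python
# One possible reading of "Shared Attention + Expert Routing (FFN)":
# shared self-attention over all tokens, modality-routed feed-forward experts.
import torch
import torch.nn as nn

class SharedAttentionExpertFFNBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=8, d_ff=1024, n_experts=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # One FFN "expert" per modality; routing is by a per-token modality id.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x, modality_ids):
        # x: (B, T, d_model); modality_ids: (B, T) integer expert index per token.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)   # attention shared across all modalities
        x = x + attn_out
        h = self.norm2(x)
        out = torch.zeros_like(h)
        for i, expert in enumerate(self.experts):
            mask = modality_ids == i       # route each token to its modality's expert
            out[mask] = expert(h[mask])
        return x + out
```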

On the hardware side, Self-variable introduced the fully self-developed wheeled dual-arm humanoid robot Quanta X2 in August this year. In less than six months, the company completed full in-house development of the robot body, the dexterous hand, and the exoskeleton-based teleoperation data-acquisition device.

Quanta X2 is a general-purpose robot designed from the ground up for the model. Its design accounts not only for the requirements of model training and complex manipulation tasks but also comprehensively balances and optimizes core parameters such as payload, workspace, movement speed, and control precision.

Quanta X2's five-fingered dexterous hand has a bionic structure with 20 degrees of freedom per hand and can sense fine pressure changes. Building on arm-hand-integrated exoskeleton technology, Self-variable has also developed an industry-leading teleoperation scheme that integrates a humanoid robot arm with a dexterous hand. Quanta X2 can not only collect high-quality data to improve model training but also be deeply integrated with the self-developed model for deployment in real-world scenarios.
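As a rough illustration of how exoskeleton teleoperation can double as a data-collection pipeline, the sketch below mirrors operator motion onto the robot while logging synchronized commands, robot state, and camera frames as training data. All device interfaces and field names here are hypothetical, not Self-variable's actual tooling.

```python
# Hypothetical teleoperation data-collection loop: the operator's exoskeleton
# drives the robot, and each control cycle is logged for model training.
import time, json

def collect_demonstration(exoskeleton, robot, camera, out_path, hz=30, duration_s=60):
    samples = []
    period = 1.0 / hz
    t_end = time.time() + duration_s
    while time.time() < t_end:
        t0 = time.time()
        command = exoskeleton.read_joint_angles()   # arm + dexterous-hand pose from the operator
        robot.track(command)                        # robot mirrors the operator in real time
        samples.append({
            "timestamp": t0,
            "command": command,
            "robot_state": robot.read_state(),      # proprioception, fingertip pressure
            "image": camera.capture_path(),         # path to the saved camera frame
        })
        time.sleep(max(0.0, period - (time.time() - t0)))
    with open(out_path, "w") as f:
        json.dump(samples, f)
    return len(samples)
```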

As its integrated software and hardware capabilities mature, Self-variable has already established partnerships with leading service-sector and industrial customers, and its robots will be deployed in a variety of scenarios. Going forward, Self-variable will also build an open ecosystem around its model and hardware together with customers to drive the further development of Embodied AI.

This article is from the WeChat account "Self-variable Robot". Author: Towards General Embodied AI. Published by 36Kr with permission.