
Stop competing in industrial robotics: in the "second half" of embodied intelligence, do robots still need to learn to act coquettishly?

晓曦 · 2026-04-01 10:52
Technology should assist humans, not replace them. What Qingxin Yichuang aims to do is bring beloved characters like Olaf into real life in the form of embodied intelligence, offering genuine comfort in moments of loneliness and emptiness.

Text by | Jiang Liu

Edited by | Wang Xiaokun

Not long ago, at the NVIDIA GTC 2026 conference, "Olaf" from Disney's "Frozen" sparked heated discussion on Jensen Huang's stage.

Almost at the same time, a robot named Amoo quietly made its debut at AWE 2026. Soft and furry, with big eyes and a height of under 80 centimeters, it looks like an affectionate child. Pat its head and it squints contentedly; tease it and it "talks back" with expressions and gestures. Unsurprisingly, it was immediately hailed as an "ultra-cute robot". This one comes from the Chinese startup Qingxin Yichuang.

After the "Four Stars of the Spring Festival Gala" shared a stage, Olaf and Amoo are showing us that a new narrative of embodied intelligence is unfolding:

As AI and physical simulation technologies gradually mature, "materializing" beloved characters and IPs is becoming a new path for robots to enter consumer-level scenarios.

Recently, Qingxin Yichuang closed a Pre-A financing round worth hundreds of millions of yuan, jointly led by Houxue Capital and Tianji Capital, with existing shareholder Lepu Capital continuing to participate. On this occasion, we spoke with the company's core team and, through the thinking behind Amoo, tried to find the underlying logic of its popularity with audiences and its recognition by investors.

01 Not Just "Companionship", but "Character"

The AI companion hardware common today either offers only voice interaction or has limited interaction capabilities, making it hard for people to feel intuitively that "this is a companion".

What Qingxin Yichuang wants to create is a new robot paradigm: Embodied Character Intelligence. It connects the whole chain from "story origin" to "physical warmth" to "interaction experience" to "long-term bond". This is fundamentally different from the companion-type AI hardware on the market.

From the perspective of the embodied intelligence track, this choice lets the company avoid head-on competition with industrial robot giants. In a more approachable form, its products can be woven into daily life, achieving product-market fit (PMF) first and, through real-world use, breaking the long-standing data bottleneck in embodied intelligence.

Regarding the question of "whether the character can be established", Qingxin Yichuang divides its product into three layers.

The first layer is the physical layer. The character has to "stand firm" first.

In the home, it is a well-known industry consensus that a robot that is too large feels oppressive, while one that is too small struggles with stable locomotion, interaction, and body integration. Amoo's sub-80-centimeter height is not just about "cuteness"; it is a sensible scale given engineering constraints and scenario requirements.

Its two legs are not for show. A bipedal design makes it easier for people to regard it as a "character": it can approach, turn, and pause, it projects a stronger "human-like presence", and it is better adapted to a world built for the bipedal human form.

The plush covering is not just surface styling. For a companion robot, a user's first reaction is often not "how capable is it" but "am I willing to get close to it, touch it, and bring it into my life".

Qingxin Yichuang has developed a distinctive plush soft-shell structure made of skin-friendly materials, shedding as much as possible the cold, mechanical feel of traditional robots and inviting closeness through softness and safety.

The second layer is the interaction layer. The character shouldn't just stand there; it should be able to move and respond.

Most desktop robots and wheeled companion devices on the market interact in a mode that stays at "you say one thing, it replies with one thing", plus a few preset actions.

Amoo's "cuteness" comes not only from its appearance but also from its "eyes, ears, and skin". Ultra-wide-angle vision, accurate speech recognition, and tactile sensing let it capture users' expressions, movements, and words, and keenly perceive various life scenarios and emotional changes. It can look at you, listen to you, and even come looking for you.

Qingxin Yichuang wants Amoo to go a step further, connecting hearing, vision, touch, and multi-dimensional motion expression. This is not a simple "dialogue + action" combination but simultaneous expression through movement, facial expression, and posture. Touch it, and it not only reacts but turns to look at you; talk to it, and it not only answers but uses body language to show that it understands you, or that it is in a bit of a mood.

Another difference lies in "autonomy". Similar robots on the market only take instructions, yet "initiative" is a hallmark of a sense of life. Qingxin Yichuang envisions giving the robot full autonomy, moving from "passive response" to "active decision-making", like a proper "family member".

The third layer is the soul layer. The character should have its own personality and be able to change with the relationship, rather than remaining in the factory settings forever.

"We fall in love with a special soul."

A unique backstory and personality give the robot a premise for self-narrative. As it spends time with the user, long-term memory and an intimacy model gradually form a unique "bond" with each user: it is a "companion", not a "tool".

Qingxin Yichuang hopes to extend this set of capabilities to more characters, whether self-developed or brought in through cooperation with external IPs. For the company, the difficulty was never merely "creating" a character, but going further to give the robot mobility and a soul, which depends on continuously iterating model capabilities.

02 The Hard-Won Implementation: Endowing the Character with Embodied "Mobility"

Currently, the embodied intelligence companion market is not short of good ideas and stories, but it lacks implementation.

Experience in robotics, control, and AI does not automatically translate into a miniaturized, consumer-grade, lovable embodied character. The difficulty is that this is not a standard robot with a shell on top; everything has to be redone from the outside in, and each concrete implementation problem has no ready-made answer.

The hardware problems are interlocking: Is there a suitable joint? How do you dissipate motor heat? Which materials are elastic without adding too much resistance? Will the fabric jam the joints? Change one thing and everything else shifts.

To make the robot smaller while fitting the character's image, the joint modules had to be "small, light, high-torque, quiet, and well-cooled" all at once. Nothing off the shelf qualified. The team started from scratch, making breakthroughs in structural design, material selection, reducers, and electronic control, and ultimately built a self-developed library of compact joint modules.

The seemingly simple plush covering also posed many challenges. At first the team tried fitting the robot inside an existing plush toy, but once it moved, the outer plush and the inner stuffing would not stay together, producing a disjointed, unconvincing look.

The team spent nearly three months visiting more than 600 suppliers and trying almost every possible direction.

In the end, they developed a distinctive multi-layer all-soft covering structure that holds its shape, allows heat dissipation, and offers a highly elastic, skin-friendly touch. It lets Amoo keep the warm feel of plush without restricting joint movement.

At the model level, giving the robot both "embodiment" and "personality" takes more than bolting on a large language model.

Before Amoo, Qingxin Yichuang built Orca, a full-size, 1.45-meter-tall robot that was the first to achieve straight-knee walking and that validated bipedal motion control outdoors and on complex terrain. The team then launched what it calls the world's first large emotion-based gait model, which lets the robot learn and autonomously choose different gaits according to its emotional state and switch between them seamlessly: walking briskly when happy, slowly when sad. This gives the robot's actions a "human-like" modeling premise.
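The article does not disclose how the gait model works internally. Purely as an illustrative sketch, the idea of conditioning gait on emotional state could be reduced to a lookup of baseline gait parameters scaled by arousal; every name, emotion label, and number below is a hypothetical placeholder, not the company's actual model:

```python
from dataclasses import dataclass

@dataclass
class Gait:
    step_hz: float   # steps per second
    stride_m: float  # stride length in meters
    bounce: float    # vertical body oscillation, 0..1

# Hypothetical mapping from emotional state to baseline gait parameters.
GAIT_BY_EMOTION = {
    "happy": Gait(step_hz=2.2, stride_m=0.30, bounce=0.8),
    "calm":  Gait(step_hz=1.6, stride_m=0.25, bounce=0.3),
    "sad":   Gait(step_hz=1.0, stride_m=0.15, bounce=0.1),
}

def select_gait(emotion: str, arousal: float) -> Gait:
    """Pick the baseline gait for an emotion; scale tempo by arousal (0..1)."""
    base = GAIT_BY_EMOTION.get(emotion, GAIT_BY_EMOTION["calm"])  # fall back to calm
    return Gait(step_hz=base.step_hz * (0.8 + 0.4 * arousal),
                stride_m=base.stride_m,
                bounce=base.bounce)
```

In a learned system the table would be replaced by a policy network and the transitions would be blended rather than switched, but the interface — emotion in, gait parameters out — captures the "brisk when happy, slow when sad" behavior the article describes.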

In Amoo, Qingxin Yichuang goes further, applying full-stack capabilities such as cutting-edge embodied intelligence and the "big brain / little brain" software-hardware co-design from autonomous driving to the robot's architecture, building a "Three-Mode Integrated Agentic OS" that translates decision outputs into precise, synchronized, lifelike body movements, eye expressions, and intonation.

The team divides the cognitive system into a "cerebellum" and a "cerebrum". The cerebellum handles instinctive reactions: squinting when its head is patted, yielding when pulled, avoiding obstacles, keeping its balance — making the robot seem "alive" at the physical level. The team also deliberately adds a bit of "unpredictability" to the cerebellum, so it does not give the same feedback every time: sometimes it is delighted when you praise it, and sometimes it "acts coquettishly" and ignores you. That uncertainty is itself one source of the "sense of life". The cerebrum handles deeper thinking: understanding emotions and intentions, making decisions based on long-term memory and personality traits, and producing its own responses.
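None of the actual architecture is public, but the split described above — a fast, slightly randomized reflex layer and a slower deliberative layer that is consulted only when reflexes do not cover an event — is a classic pattern. Here is a minimal sketch under that assumption; all event names, behaviors, and function names are invented for illustration:

```python
import random

# Fast reflex table ("cerebellum"): immediate, slightly randomized reactions.
# Event names and behaviors are hypothetical placeholders.
REFLEXES = {
    "head_pat": ["squint happily", "lean into the hand", "playfully turn away"],
    "obstacle": ["stop and sidestep"],
}

def cerebellum(event: str, rng: random.Random):
    """Return an instinctive reaction, or None if the event needs deliberation.
    Choosing randomly among options gives the 'unpredictability' the team wants."""
    options = REFLEXES.get(event)
    return rng.choice(options) if options else None

def cerebrum(event: str, memory: list) -> str:
    """Slow deliberative layer. In a real system this would query a multimodal
    model plus long-term memory and a personality profile; here it is a stub
    that just records the event before responding."""
    memory.append(event)
    return f"considered response to '{event}' ({len(memory)} memories)"

def respond(event: str, memory: list, rng: random.Random) -> str:
    """Route each event: reflex if the cerebellum covers it, else deliberate."""
    reaction = cerebellum(event, rng)
    return reaction if reaction is not None else cerebrum(event, memory)
```

The design point is latency: reflexes fire without touching memory or a model, so balance and touch reactions stay instant, while only novel or ambiguous events pay the cost of deliberation.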

As a long-term foundation for technological evolution, Qingxin Yichuang also plans to develop in-house, driven by its own data, what it describes as the world's first "social world model".

Where the industry focuses on a world model's generalization ability, the social world model emphasizes the robot's grasp of real-world social cause and effect: it must not only know the literal meaning of a sentence, but also understand what the same raised hand may mean under different tones in a specific context, and then respond with appropriate facial expressions, gait, or voice.

The multimodal interaction data and high-quality immediate human-machine feedback accumulated in real family scenarios will help train the social world model, making it ever more refined — and harder to replicate — over long-term use.

03 Both Problem-Solving Ability and Taste Are Indispensable

To some extent, the team composition of Qingxin Yichuang explains why this company has reached this stage.

Founder Niu Tengyu earned a doctorate in artificial intelligence control from the University of Cambridge's Department of Engineering at 26. Born in the 1990s, he worked on autonomous driving R&D at Toyota and Huawei, creating the world's first autonomous-driving decision and planning network based on human-machine interaction. His cross-disciplinary background spanning autonomous driving, neuroscience, and brain science meant that, from the start, Qingxin Yichuang treated companion robots not as a simple hardware project but as a systems project spanning perception, cognition, behavior, and emotional feedback.

CTO Zeng Jun, who holds a doctorate from the University of California, Berkeley, is a former senior AI scientist at the leading US autonomous-driving company Cruise and a pioneer in safety-critical control. His joining was a major boost: he is one of the few experts worldwide with deep knowledge of full-stack L2/L3/L4 implementation, and he was deeply involved in the pre-research and development of cutting-edge autonomous-driving large models. His profound understanding of "safety" and "boundaries", and his ability to balance cutting-edge algorithms with engineering implementation, are exactly what companion robots need most.

On the business side, the team has attracted marketing and operations talent who have led intelligent companion system design at NIO, run content-IP commercialization at TikTok, and bring backgrounds in psychology. Several top entrepreneurs in the industry also serve as advisors alongside the shareholders.

A team of fewer than 50 people covers the full chain from algorithms, hardware, and supply chain to brand marketing and psychological research: diverse, but not redundant.

However, technology is just the foundation. As a consumer product for the general public, understanding users and having a unique product "taste" are equally crucial.

Niu Tengyu's personal experience gives him an unusual grounding here. Having worked on McLaren and Aston Martin projects, he is obsessed with the details of human-machine interaction in F1 cars. While studying in the UK, he frequented art galleries, developing a strong interest in, and commercial insight into, art and luxury design. He believes robots are not just a technology question but a question of taste, especially when they are about to become part of the family.

This pursuit of "taste" also explains the team's obsessive attention to detail and accountability for results during implementation.

At the AWE booth, one moment left a deep impression on the team. A woman who looked like a corporate executive stopped in front of Amoo for a long while, repeatedly stroking its head to feel the furry texture and warm body temperature. Finally she asked: "Can I pre-order? How much is it? My children have all moved away; I want one to keep me company."

This scene points to the growth potential of the business. Statistics show that China's population aged 60 and above has reached 325 million. Add 92 million single young adults and the rising popularity of "emotional-value consumption", and emotional companionship is becoming a necessity. As social loneliness intensifies and AI technology matures, the penetration of companion robots is expected to keep climbing.

Back to the woman's question at AWE: in an era when AI and embodied intelligence grow ever more "omnipotent", what experiences are people willing to pay for? The answer may not be just more accurate algorithms and faster execution, but a rare, warm connection.

When Amoo looks at you with its expressive eyes and responds to your emotions with body language, it offers not just a function but a possibility of "companionship". And the story of embodied character intelligence is just beginning.