From the Uncanny Valley to the Trust Valley: The Hidden Costs of Humanoid Robot Commercialization
Author: Tan Yinliang, Professor of Decision Sciences and Management Information Systems at China Europe International Business School
On February 2, 2026, Tesla announced that the third-generation Optimus was about to make its debut and set a mass-production target of one million units per year. Elon Musk confirmed that large-scale manufacturing would be centered on Texas, and he stated bluntly that China had become Tesla's most significant competitor.
Such statements easily fuel market imagination: after smartphones and new-energy vehicles, will humanoid robots become the next trillion-scale terminal category?
However, if we shift our gaze slightly away from engineering parameters, the problem becomes more complicated: even if robots are cheap enough, stable enough, and smart enough, are humans willing to let an entity that "resembles a human but is not one" enter the most private living spaces, such as bedrooms, children's rooms, bathrooms, and kitchens?
Investors paint a beautiful vision: in the future, these human-like machines will leave the factory and enter millions of households as nannies, caregivers, companions for the elderly, and members of the family.
Nevertheless, amid the bustle of the capital feast, an alarm bell rooted in human biological instinct is quietly ringing. While we cheer for machines that can walk and work like humans, few stop to think: are we really prepared to let an object that "looks human but isn't" stand beside our beds and watch us while we sleep?
What may hinder humanoid robots from entering families is perhaps not battery life, joint torque, or the reasoning ability of large models, but a defense line engraved in the genes of Homo sapiens for millions of years—the uncanny valley effect.
The uncanny valley is not a mystery: It directly affects trust and purchase
In 1970, Japanese robotics scholar Masahiro Mori proposed the "uncanny valley" hypothesis: as an object becomes more and more human-like, human favorability first rises. However, once it enters the range of "very human-like but not quite," the emotional response suddenly drops into discomfort, rejection, or even fear. Only when the object is almost indistinguishable from a real human does favorability recover.
The curve may look simple, but its real-world implications are harsh. When an object differs greatly from the human form, such as an industrial robotic arm, people tend to view it neutrally or even with curiosity. When it acquires certain human traits, as with toy robots or cartoon-style figurines, favorability rises accordingly.
However, once the degree of anthropomorphism crosses a critical point and reaches "very human-like but not perfect," the situation changes dramatically. Stiff facial expressions, delayed eye movements, unnatural skin tones: these minor flaws can make favorability plummet instantly into intense disgust, fear, and rejection. Only as the resemblance approaches that of a real, healthy human does favorability recover.
This psychological trough between "extremely human-like" and "actually human" is the famous uncanny valley.
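Mori's hypothesis is usually drawn as a curve of emotional affinity against human-likeness. A minimal sketch of that shape, with entirely made-up numbers chosen only to reproduce the pattern described above (a rise, a steep plunge into the valley, and a late recovery), might look like this:

```python
def affinity(likeness):
    """Stylized uncanny-valley curve (illustrative only; all numbers invented).

    likeness: 0.0 (clearly mechanical) to 1.0 (indistinguishable from human).
    Returns a notional affinity score.
    """
    if likeness < 0.7:
        # Industrial arm -> cartoon robot: affinity climbs with anthropomorphism
        return likeness / 0.7
    elif likeness < 0.95:
        # "Very human-like but not quite": a steep drop into negative affinity
        return 1.0 - 6.0 * (likeness - 0.7)
    else:
        # Near-perfect realism: affinity recovers sharply
        return -0.5 + 30.0 * (likeness - 0.95)

# Sampling the curve coarsely shows the characteristic shape:
# a peak near likeness 0.7, a trough near 0.95, recovery at 1.0.
```

The exact breakpoints and slopes carry no empirical meaning; the point is only that small increases in realism past the peak produce large drops in comfort.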
The theory has played out repeatedly in CGI (computer-generated imagery), from "The Polar Express" to early renders of "Alita: Battle Angel." What audiences saw were not endearing characters but eerie, zombie-like figures.
Yet attributing this purely to aesthetics is one-sided. Research in physiology and evolutionary psychology suggests that the uncanny valley is not just a psychological effect but a survival firewall that humans, as a species, evolved through millions of years of brutal competition.
The rejection reaction in our genes
The academic community has no single consensus on the cause of the uncanny valley, but several explanatory paths help us understand why the reaction is so strong and so resistant to persuasion.
The first is the instinctive human avoidance of death and pathogens. In ancient times, a companion with a pale face, stiff movements, dull eyes, and cold skin might already be a corpse, or infected with a deadly disease such as rabies or leprosy.
For early humans, approaching such a "human-like individual" could mean death. Natural selection therefore engraved a strict command in our genes: on seeing something that resembles a human but shows abnormal physiological signs, feel intense disgust and fear at once, and stay away.
Today's humanoid robots, however precise their motor control, still trip this wire: their stiff micro-expressions and mechanical gaits unconsciously activate the mind's alarm system for "corpses" and "the seriously ill." It is a hygiene and epidemic-prevention mechanism written into our DNA.
The second is the bloody memory of other human species. Yuval Noah Harari notes in "Sapiens: A Brief History of Humankind" that Homo sapiens was not the only human species on Earth. We once coexisted with Neanderthals, Denisovans, and others, but in the end only Homo sapiens survived.
Some strands of evolutionary psychology hold that Homo sapiens' rejection of outsiders runs deep. Over evolutionary history, those who looked like us but were not us were often our most direct competitors for resources.
Humans are extremely sensitive to minor differences in those who are "almost our kind." Like acrophobia, which guards against falls, or ophidiophobia, which guards against venom, this sensitivity is a defense mechanism against competitors. When a silicon-based body looks too human yet gives off a non-human aura, it awakens the hostility and defensiveness that Homo sapiens once reserved for Neanderthals.
Finally, the fear also stems from the survival anxiety of cognitive dissonance. The human brain is a prediction organ: when we see a vacuum cleaner, we predict it will vacuum; when we see a person, we predict they have emotions, feel pain, and show empathy.
Humanoid robots break this prediction mechanism. They have human form but may act against human logic, such as rotating the head 180 degrees or standing up after a fall without any sign of pain. This violation of expectations causes serious cognitive dissonance, which curdles into fear. It is not merely a matter of "ugliness" but an organism's instinctive stress response to an unknown threat. The real root of the uncanny valley is humans' instinctive aversion to uncertain threats.
Why the family scenario is harder: the margin for error here is close to zero
With this biological logic in mind, the commercial push for humanoid robots reveals a huge mismatch: the "family nanny robots" now being discussed in the capital markets are, in essence, a challenge to human instinct.
The most insurmountable barrier is the near-infinite cost of trust. In industrial settings, workers do not need to "like" a welding robot; it only needs to be efficient. In the home, however, especially in the care of the elderly and children, trust is the core. Imagine an elderly person with Alzheimer's disease waking in the middle of the night to find a "person" with a stiff face and blue-glowing eyes standing beside the bed, reaching out to help them to the toilet. The likely reaction is not gratitude but panic and a racing heart, a physiological rejection that no sophisticated algorithm can easily dispel.
Beyond that, the fine-grained control of social interaction hides deep challenges that are hard to avoid. Family service is not just functional tasks like serving tea and water; it is also emotional companionship. Human communication carries vast amounts of non-verbal information: micro-expressions, eye contact, subtle pauses in body language. The more current robotics pursues anthropomorphism, the more exposed it is in these details. A robot that is 90% human-like can ruin a warm atmosphere in an instant, turning it into a scene from a horror movie, if its eye contact lags by just 0.1 seconds or the muscle movement of its smile is slightly off.
For a long time to come, therefore, full-sized, highly realistic humanoid robots will struggle to truly enter family life.
Two more realistic paths: Either "stop looking human" or "go to places humans don't go"
Since the "uncanny valley" is a genetic wall that is difficult to cross, where exactly is the future of humanoid robots? This may point to two completely different paths.
The first path is to step back and give the valley a wide berth: go fully "de-anthropomorphized" and "super-cute." Since "human-like but not quite" is the scariest zone, it is better not to look human at all.
The right design reference for family service robots is not the hosts of "Westworld" but Baymax in "Big Hero 6" or R2-D2 in "Star Wars." What we need is for the robot's hands to be as dexterous as a human's for washing dishes and making beds, not for its face to look human. In design terms, it can be shorter and rounder, with fabric covers instead of realistic silicone. It can have eyes, but preferably big cartoon eyes rather than lifelike eyeballs.
The underlying logic is to dodge the downward slope of the uncanny valley curve by emphasizing the robot's "machine attribute" or "cartoon attribute," and to build a sense of safety by leveraging humans' fondness for neoteny. If Tesla's Optimus is to enter the home, it would do better to swap the black faceplate for a friendly display and drop its height below 1.5 meters to reduce the sense of physical intimidation.
The second path is to become a tool for hardcore scenarios, heading into industry and into space. If full-sized, high-performance humanoid robots must be built, their destination should not be the living room but extreme environments where emotional interaction is unnecessary.
On automobile assembly lines, in hazardous-chemical handling, and in nuclear power plant maintenance, the environment is already designed for humans: specific stair heights, valve positions, passage widths. With two legs and two hands, humanoid robots can reuse human workspaces directly, with no need to retrofit the factory. There, no one cares whether the robot looks eerie; only its efficiency and stability matter.
A grander narrative lies in "the stars and the sea." Elon Musk's ultimate dream for robots may not be washing your dishes but colonizing Mars. On the Martian surface and during extravehicular work at space stations, the vacuum, radiation, and extreme cold that are fatal to carbon-based life are a paradise for silicon-based humanoid robots. As human surrogates, they can accomplish feats that carbon-based bodies cannot.
Mass production is just the beginning; crossing the "trust valley" is the real entry into the market
Humanoid robots are certainly worth the excitement: they combine AI, mechanics, materials, control, and supply-chain capability. From the standpoint of industrialization, however, there is a boundary just as hard as the technical one: humans' instinctive wariness of things that "resemble humans but seem off."
Future households may well contain very smart robots, but they probably will not look like the neighbor next door, nor will they try to win trust with a "perfect human face." More likely, they will be capable assistants with a controlled presence and an explicitly non-human appearance. The more human-like steel bodies may be better suited to factories, mines, nuclear power plants, or the surfaces of distant planets, where no one requires them to "look human," only to be reliable.