
LeCun exposes the biggest scam in the robotics industry and clarifies that he has nothing to do with Llama.

新智元 · 2025-10-26 17:21
In a public speech, LeCun bluntly exposed the truth: the so-called robotics industry is still far from true intelligence. The remarks landed like a depth charge, and executives from Tesla and Figure have fired back with online rebuttals one after another.

Have humanoid robots become the biggest scam in the AI circle?

Recently, in a lecture at MIT, Meta's Chief AI Scientist Yann LeCun revealed what he called the robotics industry's biggest secret:

These companies simply have no idea how to make robots "smart" enough to reach the level of general intelligence.

The realization of household robots still requires a series of breakthroughs in the field of AI.

With task-specific training, robots can screw parts together and move goods in factories. At home, however, they still struggle to fold clothes, pour water, or understand human intentions.

He compares this gap to the chasm between "narrow intelligence" and "general intelligence."

The core of the breakthrough lies in creating a truly plannable "world model" architecture, which is a system capable of learning, understanding, and predicting the physical world.

Unexpectedly, LeCun's remarks stirred up a hornet's nest: by pouring cold water on the frenzy, he provoked angry responses from robotics industry leaders.

Julian Ibarz, the head of Tesla's Optimus AI, straightforwardly stated that he disagreed with LeCun's view.

Internally, Tesla has a very clear idea of how to quickly achieve general humanoid robots.

Brett Adcock, the founder of Figure, called him out directly: "Someone should tell LeCun to stop being so high-and-mighty and get down to some real work!"

Yann LeCun:

LLMs only have good memory and are less intelligent than cats

Yann LeCun has always been ahead of the mainstream thinking of his era, and he seems to have been right every time.

In 1987, he received his doctorate from what is now Sorbonne University; the English title of his thesis was "Connectionist Learning Models".

The core of the thesis was to establish the theoretical basis for the backpropagation algorithm in neural networks.

At that time, most people were still researching expert systems.

How did he come up with this research direction? And how did it affect his future career development?

During his speech at MIT, Yann LeCun recalled how he embarked on the path of artificial intelligence research.

When he was in college, he stumbled on the fact that as early as the 1950s and 1960s, researchers, including the 1981 Nobel laureates David H. Hubel and Torsten N. Wiesel, had already been thinking about "self-organization": how a system can organize itself and learn.

This direction later gave rise to the early idea that "machines can learn".

He found this idea extremely fascinating, and at that time, he was "a young upstart with no fear":

I've always believed that biology provides a lot of inspiration for engineering. In nature, all living things have the ability to adapt, and as long as they have a nervous system, they can learn.

So, I thought at that time that maybe we humans aren't that smart. The most reliable way to build an intelligent system might be to let it learn to become smart on its own.

Perhaps it was this "reckless" attitude that led him to the path of machine learning.

He admitted that "machine learning" was not the mainstream in AI research at that time.

Since almost no one was engaged in related research at that time, he had a hard time finding a doctoral supervisor.

Later, he collaborated with Geoffrey Hinton and then worked at Bell Labs and New York University (NYU).

The field of artificial intelligence went through a "winter" from the 1990s into the 2000s. In 2013, however, LeCun joined Facebook and founded FAIR (Facebook AI Research), promoting the term "deep learning" in place of "neural networks" and marking the industry's systematic acceptance of the paradigm.

In 2018, for conceptual and engineering breakthroughs that made deep neural networks a critical component of computing, he shared the Turing Award with Yoshua Bengio and Geoffrey Hinton.

By the way, when Yann LeCun visited Tsinghua University, he decided on his Chinese name, "Yang Likun".

But this time, Yann LeCun said bluntly, "LLMs are a dead end. The world model is the right path."

He pointed out that text is a "low-bandwidth" data source and that "training only on text can never achieve human-level intelligence." True intelligence comes from high-bandwidth perceptual input, multi-modal experience such as vision, hearing, and touch, rather than low-dimensional discrete symbols.

He compared the trillions of tokenized words required for LLM training with the massive sensory data processed by children:

The amount of data a four-year-old child receives through vision alone is equivalent to the data volume of the largest LLMs trained on all public text.
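LeCun's comparison is a back-of-the-envelope calculation he has sketched in several talks; the exact figures below are illustrative assumptions, not measurements:

```python
# Rough data-bandwidth comparison between a child's vision and LLM
# training text. All figures are order-of-magnitude assumptions.

waking_hours = 16_000           # roughly four years of waking time
optic_fibers = 2_000_000        # fibers across both optic nerves
bytes_per_fiber_per_s = 1       # assumed ~1 byte per fiber per second
visual_bytes = waking_hours * 3600 * optic_fibers * bytes_per_fiber_per_s

training_tokens = 2e13          # on the order of "all public text"
bytes_per_token = 4             # a token is a few bytes of text
text_bytes = training_tokens * bytes_per_token

print(f"child vision: {visual_bytes:.1e} bytes")
print(f"LLM text:     {text_bytes:.1e} bytes")
```

Under these assumptions the two quantities land in the same order of magnitude, around 10^14 bytes, which is the point of the comparison.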

He further noted that although LLMs can sometimes produce useful results, even leading people to credit them with an "IQ comparable to that of a doctor", these systems are merely "recalling" information from their training data.

LeCun pointed out that large language models (LLMs) have an inherent bottleneck: although they formally replace explicit coding with "learning", they still rely on the indirect transfer of human knowledge.

LLMs do not possess any real-world intelligence; they are even less intelligent than a cat.

He emphasized that even though a cat's brain contains only about 280 million neurons, its ability to understand the physical world and plan actions far exceeds that of current AI systems.

A cat can perceive three-dimensional space, judge the stability of objects, and plan complex actions, which current generative models are unable to achieve.

Therefore, the question he is really concerned about is: How can machines learn the model of the physical world?

People with a bit of sense don't use LLMs anymore

The world model has become almost synonymous with LeCun himself.

During a conversation, he gave another definition of the "world model":

Given the state of the world at time t and a candidate action by an agent, predict the state of the environment after the action is executed.

For example, if you ask a robot to make a cup of coffee, it needs to imagine a series of actions - picking up the cup, pouring water, stirring - and predict the result of each step.

Once a system is equipped with such a world model, it can make plans:

Imagine a series of consecutive actions and use the model to predict the results of these actions.

At the same time, the system can use a "cost function" to evaluate how well a specific task has been accomplished.

On this basis, optimization methods can search for the sequence of actions that best achieves the task goal. This process is called "planning and optimal control".
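The loop described above (imagine actions with the world model, score outcomes with a cost function, search for the best sequence) can be sketched with a toy example. Everything here is hypothetical: the state is a single number, and the hand-written dynamics function stands in for a learned world model:

```python
import random

def world_model(state, action):
    """Stand-in for a learned dynamics model: predicts the next state
    from the current state and a candidate action."""
    return state + action

def cost(state, goal):
    """Task-specific cost: distance between the final state and the goal."""
    return abs(state - goal)

def plan(state, goal, horizon=5, samples=500):
    """Random-shooting planner: sample action sequences, roll each one
    out through the world model, and keep the lowest-cost sequence."""
    best_seq, best_cost = None, float("inf")
    for _ in range(samples):
        seq = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        s = state
        for a in seq:
            s = world_model(s, a)   # imagine the result of each action
        c = cost(s, goal)           # evaluate the imagined outcome
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq, best_cost

random.seed(0)
actions, predicted_cost = plan(state=0.0, goal=3.0)
print(len(actions), round(predicted_cost, 3))
```

More capable planners replace the random search with gradient-based trajectory optimization, but the division of labor is the same: the model predicts, the cost function evaluates, the optimizer searches.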

LeCun said that the "environment dynamics model" used by the team is trained entirely with self-supervised learning, which is the core of the current approach.

Experiments show that this can be achieved with learned representations of the world state, whether those come from an existing model such as DINO, are learned from scratch, or are built on frameworks such as V-JEPA 2.

Robots need not be retrained for each specific task. By learning the "action-result" relationship from simulated data or real operation, they can complete new tasks without prior examples.

This training is completely self - supervised.

When a system has a good enough world model, it can "imagine" how to complete a task it has never been trained for.
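As an illustration of what "completely self-supervised" means here: the training target for each (state, action) pair is simply the next state the environment actually produced, so no human labels are needed. The toy linear dynamics below are an assumption chosen for demonstration, not the team's actual model:

```python
import random

random.seed(0)

def true_env(s, u):
    """The unknown environment the model must learn: s' = 0.9*s + 0.5*u."""
    return 0.9 * s + 0.5 * u

# Collect (state, action, next_state) triples from interaction; the
# observed next state itself serves as the supervision signal.
data = []
for _ in range(1000):
    s = random.uniform(-1, 1)
    u = random.uniform(-1, 1)
    data.append((s, u, true_env(s, u)))

# Fit a linear dynamics model s' ~ w_s*s + w_u*u by stochastic
# gradient descent on the squared prediction error.
w_s, w_u, lr = 0.0, 0.0, 0.1
for _ in range(50):
    for s, u, s_next in data:
        err = (w_s * s + w_u * u) - s_next
        w_s -= lr * err * s
        w_u -= lr * err * u

print(round(w_s, 2), round(w_u, 2))  # recovers roughly 0.9 and 0.5
```

Once fitted, the model can be queried exactly like the world model in the planning loop: given a state and a candidate action, it predicts the outcome without touching the real environment.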

LeCun introduced this concept to the world in his keynote speech at the 2016 NeurIPS conference:

The world model will become a key component of future AI systems.

LeCun predicted that "in the next 3-5 years, this will become the mainstream model of AI architecture."

This statement has offended quite a few people in Silicon Valley, including some giant companies.

By then, anyone with a clear head won't use the current generative LLM approach anymore.

The host then asked, so can this promote robotics and make the next decade truly the era of robots?

LeCun said bluntly that in the past few years, startups aiming to build "humanoid robots" have emerged like mushrooms after rain.

But a big secret in the industry is that they still don't know how to make robots truly "smart" enough for practical use.

So the future of many companies valued at billions basically depends on whether they can make significant progress in the "world model + planning" architecture.

LeCun got more and more excited, and his views were obviously quite "bold".

The host quickly changed the topic to smooth things over, "It doesn't matter. We're not worried about those companies. And seriously, we really believe in the entrepreneurial spirit."

The industry is taking action: The world model of robots

Yann LeCun's "calmness" stands in sharp contrast to the aggressive timelines advocated by many industry leaders.

Figure AI has been particularly aggressive. Its CEO, Brett Adcock, recently claimed:

Next year, humanoid robots will be able to perform various general tasks in unfamiliar environments (such as homes they've never entered) through voice commands.

The founder explained that his confidence comes from the company's efforts to solve software and intelligence problems.

A humanoid robot has 40 degrees of freedom (joints), and the number of possible pose combinations even exceeds the total number of atoms in the universe.
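The claim is easy to sanity-check with a quick, admittedly crude, calculation: discretizing each of the 40 joints into 100 distinguishable positions (the granularity is an assumption) already yields 100^40 = 10^80 poses, matching the commonly cited estimate for the number of atoms in the observable universe:

```python
joints = 40            # degrees of freedom quoted for the robot
positions = 100        # assumed discretization per joint
poses = positions ** joints

atoms_in_universe = 10 ** 80   # commonly cited order-of-magnitude estimate
print(poses == atoms_in_universe)  # True: 100**40 equals 10**80
```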

Brett Adcock emphasized that "this problem cannot be solved by programming. The only way is through neural networks."

He compared Figure's technical approach with