
Demis Hassabis: To Achieve AGI, First Accomplish Two Things

AI Deep Researcher · 2025-12-17 09:00
DeepMind: AGI Requires a Closed Loop of World Models and Automated Experiments

On December 16, 2025, the Google DeepMind podcast released the last episode of this season. It's a conversation with Demis Hassabis, lasting over 50 minutes.

It's neither a product launch event nor a product review.

The opening line sets the tone: look beyond the product launches. The conversation focuses on the two most fundamental things in the next decade.

Hassabis said that to achieve AGI, two things must be accomplished first:

One is the world model, which enables AI to truly understand physics and space;

The other is automated experimentation, which allows AI to solve fundamental problems such as materials and fusion through hands-on work.

More importantly, these two things must be connected to form a complete scientific research closed loop: AI can ask questions, verify, and iterate on its own.

Hassabis believes that AGI is not the end of generative models but the starting point of a scientific research closed loop.

Section 1 | World Model: AI Should Not Only Understand Sentences but Also Comprehend the World

Hassabis said that the world model has always been his core concern. It is not a new idea, but by 2025 it has become something that must be built.

In the past few years, language models seem to be omnipotent, capable of writing, answering, and summarizing. Hassabis admits that language contains more world information than expected, even more than what linguists imagined. However, he points out a contradiction: these models can win gold medals in the International Mathematical Olympiad but may make mistakes in primary school geometry problems; they can generate amazing images but don't understand why a cup doesn't float in the air.

Where lies the problem? They lack a world model.

A world model is an AI's intuitive understanding of physical reality: what can be poured, what can move, how things change, how space is structured, and how time passes.

More crucially, many things cannot be described in language at all: sensor data, motor angles, smells, and tactile sensations. Humans learn these through their bodies from a young age, but language models have only read books and have no contact with the physical world.

DeepMind's answer takes the form of several products:

  • Veo: Understand motion, liquid flow, and light changes in videos
  • Genie: Generate an interactive game world out of thin air, with spatial structure and physical feedback
  • Sima: Enable AI to act as an avatar to perform tasks in a virtual environment and develop a chain of perception, action, and reaction abilities

Genie and Sima can interact with each other: Genie generates the world, and Sima explores it. The two AIs form a training closed loop, which may allow AI to set and solve tasks of increasing difficulty automatically, without human intervention. This is DeepMind's second attempt, after AlphaGo, to enable AI to evolve on its own.
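That generator-agent loop can be sketched in miniature. This is a toy illustration, not DeepMind's actual training code: `generate_task`, `attempt_task`, and the numeric update rules are invented stand-ins for Genie, Sima, and learning.

```python
def generate_task(difficulty):
    # Stand-in for Genie: produce a task at the requested difficulty.
    return {"difficulty": difficulty}

def attempt_task(task, skill):
    # Stand-in for Sima: succeed when skill meets the task's difficulty.
    return skill >= task["difficulty"]

def training_loop(rounds=50):
    # The closed loop: a success raises the bar for the next task,
    # a failure "trains" the agent a little. Both climb together.
    difficulty, skill = 1.0, 1.0
    for _ in range(rounds):
        task = generate_task(difficulty)
        if attempt_task(task, skill):
            difficulty += 0.5  # generator escalates after a success
        else:
            skill += 0.3       # agent improves from the failure
    return difficulty, skill

final_difficulty, final_skill = training_loop()
```

The point of the sketch is the coupling: neither side needs a human to supply the curriculum, because each side's progress is the other side's next problem.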

However, Hassabis also admits that these models currently only "seem real".

Test them against Newton's three laws of motion and you find they are only approximations. That level of accuracy is not enough for robots. DeepMind is using game engines to build physics benchmarks, much like high-school physics experiments, to test whether AI really understands how the world operates.

If you can simulate the world, it means you really understand it.

This also explains why the world model is a prerequisite for AGI. The goal of AGI is not a better chatbot but an intelligent agent that can act in the physical world.

From robots to AR assistants to ultimate games, all these require AI to first understand how the physical world works.

In short, the world model is the only way for AI to step out of the pure digital space.

Section 2 | Automated Experimentation: AI Should Not Only Talk the Talk but Also Walk the Walk

Language models can tell stories, and world models can build environments, but the step that really allows AI to participate in reality is experimentation.

Hassabis said that when they developed AlphaFold, they wanted to prove one thing: AI is not just a tool; it can become a real participant in scientific research.

Now, DeepMind is expanding on this.

(CNBC: DeepMind Establishes Its First Fully Automated Laboratory in the UK)

On December 10, 2025, DeepMind reached a cooperation agreement with the UK government to establish its first fully automated scientific laboratory in 2026. This is a scientific research engine designed from scratch and fully integrated with Gemini. It can synthesize and test hundreds of materials every day, supervised by a multidisciplinary research team. However, the execution of experiments, data analysis, and direction adjustment are mainly completed by AI and robots.

The research directions focus on several tough challenges:

  1. More efficient battery materials
  2. Room-temperature superconductors
  3. New-generation low-loss semiconductors

These are not problems that can be solved by simply generating an answer from a model. One really has to enter the laboratory, interact with substances, and iterate through trial and error.

What's the difference compared with AlphaFold?

AlphaFold proved that AI can make predictions. It used computation to search the space of possible protein folds and output a digital answer.

The automated laboratory aims to prove that AI can verify. It has to actually synthesize substances, measure performance, discover problems, and improve formulas. The former is a breakthrough in the digital world, and the latter is a breakthrough in the physical world.

Hassabis said that the significance of this step is not only to improve efficiency but also to allow AI to truly enter the internal processes of science. In the past, AI assisted in peripheral scientific research work: literature summarization, image recognition, and data annotation. Now, it starts to participate in hypothesis formulation, experimental design, data verification, and can even correct the initial research ideas.

Materials science is the most suitable field for this.

Because it requires massive trial and error (a new material formula may need to be tested thousands of times) and offers clear verification standards (measure resistance, strength, and melting point, and you can tell whether it works). That combination is what makes autonomous AI experimentation possible.
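The propose-measure-keep-what-works pattern described here can be sketched as a simple screening loop. Everything in it is hypothetical: `measure_conductivity` stands in for a robotic synthesis-and-measurement step, and the hidden optimum at 0.7 is an invented toy target, not a real material property.

```python
import random

random.seed(42)  # deterministic for the sketch

def measure_conductivity(formula):
    # Stand-in for robotic synthesis plus measurement: a hidden
    # optimum at formula == 0.7, plus a little measurement noise.
    return -(formula - 0.7) ** 2 + random.gauss(0, 0.001)

def screen_materials(trials=200):
    # Trial and error with a clear verification standard:
    # propose a candidate, measure it, keep the best so far, repeat.
    best_formula, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = random.random()
        score = measure_conductivity(candidate)
        if score > best_score:
            best_formula, best_score = candidate, score
    return best_formula

best = screen_materials()
```

In a real lab each call to the measurement step costs hours of physical work, which is exactly why a hundredfold speedup in this loop matters.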

Speed is the key. The room-temperature superconductors and fusion materials Hassabis mentions are problems that have plagued humanity for decades, not because the theory is insufficient but because trial and error is too slow. If AI can increase material-screening speed a hundredfold, the energy revolution may take only 10 years.

In addition to the automated laboratory, DeepMind is also collaborating with the US fusion technology developer Commonwealth Fusion Systems to use AI to help control the plasma in a tokamak reactor. This is the last hurdle for the commercialization of nuclear fusion.

In Hassabis's words: The prerequisite for AGI is not to be smarter but to be more capable of taking action.

Section 3 | The Closed Loop Is the Key: AI Should Be Able to Ask Questions, Take Action, and Reason on Its Own

The previous two sections talked about two things: the world model enables AI to understand the world, and automated experimentation allows AI to verify through hands - on work. However, what really makes AGI possible is not how powerful they are individually but whether they can be connected to form a complete cognitive closed - loop.

Hassabis's exact words are: In the past, we trained answerers; now, we need to train researchers.

What does it mean?

The key lies in how to form a cycle between perception and action. DeepMind's approach is to connect Genie and Sima mentioned in Section 1.

Genie generates scenarios on the fly according to need (say, an environment with changing gravity and friction);

Sima completes challenges within it (moving boxes, avoiding obstacles, finding targets).

Whether a task fails or succeeds, the outcome becomes material for the AI's self-learning. The two AIs interact without knowing who the other is: Genie doesn't know Sima is another AI; it simply treats Sima as a player. Sima doesn't know the world is created by an AI; it simply focuses on completing the task.

This creates a potentially infinitely expandable training cycle: whatever Sima needs to learn, Genie can create on the spot. Millions of tasks of increasing difficulty can be set and solved automatically, without any human intervention.

If you abstract this cycle, you will see a complete scientific research process:

  • Ask questions (What needs to be solved?)
  • Generate scenarios (Under what conditions to test?)
  • Execute tasks (Simulate, act, experiment)
  • Organize feedback (Data, conclusions, optimization)
  • Ask better questions (Iterate into the next round)

In the past, only scientists carried out this process. Now, AI is starting to have similar capabilities.

This cycle is not just for training better models. Hassabis mentioned that the same technology can be used to create more intelligent game NPCs and train robots. Because the capabilities required by robots highly overlap with those of game intelligent agents: perceiving the environment, planning paths, executing actions, and learning from failures.

The virtual closed loop formed by Genie + Sima and the automated laboratory described in Section 2 constitute two parallel autonomous research systems: one runs the logic in the digital world; the other verifies hypotheses in the physical world.

So AGI is not just a larger model but an intelligent agent that can generate tasks, verify through hands - on work, and reason and update on its own.

In short, it must be able to work like a researcher.

Conclusion | The Door to AGI Is Not in the Parameters

The path to AGI proposed by Hassabis does not rely on larger models or stronger computing power but on the AI's ability to truly "understand the world" and "change the world".

The world model is the foundation, enabling AI to see cause-and-effect relationships;

Automated experimentation is the means, allowing AI to verify its cognition.

This is not model optimization but a reconstruction of intelligence.

In the future, AI will ask questions, conduct experiments, and correct itself. By then, we may have to rewrite our definitions of knowledge, science, and even thinking.

📮 Original Article Links:

https://www.youtube.com/watch?v=PqVbypvxDto&t=3s

https://x.com/GoogleDeepMind/status/2000985655715807599

https://deepmind.google/blog/strengthening-our-partnership-with-the-uk-government-to-support-prosperity-and-security-in-the-ai-era/

https://www.cnbc.com/2025/12/11/googles-ai-unit-deepmind-announces-uk-automated-research-lab.html

This article is from the WeChat official account "AI Deep Researcher", written by AI Deep Researcher and published by 36Kr with authorization.