
DeepMind CEO calculated four accounts: In this round of AI competition, where exactly is the money being spent?

AI深度研究员 · 2026-01-18 10:15
In this round of AI competition, where exactly should the money be spent?

The hottest topic in AI circles lately isn't who is more powerful, but who is more profitable.

On January 16, 2026, CNBC launched a new podcast called "The Tech Download" with a clear positioning: it focuses solely on money, not concepts. The guest on the first episode was Demis Hassabis, the CEO of Google DeepMind.

Hassabis didn't talk about technical concepts but instead calculated four investment accounts:

What to invest in for the capabilities AGI still lacks

Where the costs of commercializing models lie

Where to allocate resources to break the energy bottleneck

Where to build an edge in the AI competition

These four accounts all point to the same core: In this round of the AI race, where exactly should the money be spent?

First Account | What Capabilities Does AGI Still Lack?

At the beginning of the exclusive interview, the host asked a question that everyone is concerned about: Our large models are already so powerful. Can they get even better? Is AGI around the corner?

Hassabis's answer was that large models actually have obvious shortcomings in their capabilities.

He said that these AI tools can perform amazingly on certain questions, but change the phrasing or make a question slightly more complex and they fail immediately.

He called this "jagged intelligence."

Put simply, this kind of intelligence isn't reliable enough. It can answer questions but can't generalize from one example to similar ones; it can write papers but can't come up with a genuinely new idea on its own.

1. General intelligence should be able to pose questions on its own

Hassabis believes that true general AI must possess the ability to pose questions on its own, hypothesize how the world might work, and then find ways to verify it.

In other words, it shouldn't just answer your questions; it should also be able to think up questions on its own.

He said that current large models can't even manage continual learning. Teach one something new and it soon forgets; it doesn't accumulate experience the way humans do.
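This forgetting problem can be illustrated with a toy sketch (a minimal illustration of the failure mode, not how LLMs are actually trained): fit a tiny linear model on task A, then keep training it on task B, and watch its task-A error climb back up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "tasks": learn y = w·x for two different true weight vectors.
w_a = np.array([1.0, -2.0])   # task A's true mapping
w_b = np.array([-3.0, 0.5])   # task B's true mapping

def make_task(w_true, n=200):
    x = rng.normal(size=(n, 2))
    return x, x @ w_true

def sgd(w, x, y, lr=0.05, epochs=20):
    # Plain per-sample stochastic gradient descent on squared error.
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            w = w - lr * (w @ xi - yi) * xi
    return w

def mse(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

xa, ya = make_task(w_a)
xb, yb = make_task(w_b)

w = np.zeros(2)
w = sgd(w, xa, ya)             # learn task A
err_a_before = mse(w, xa, ya)  # near zero: task A learned
w = sgd(w, xb, yb)             # now learn task B...
err_a_after = mse(w, xa, ya)   # ...and task A is forgotten

print(f"task-A error before learning B: {err_a_before:.6f}")
print(f"task-A error after  learning B: {err_a_after:.2f}")
```

Nothing in the update rule preserves old knowledge: every gradient step serves only the current data, which is the essence of catastrophic forgetting.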

This is why, in the past two years, DeepMind has started to shift its focus from LLMs to another direction: creating an AI that understands how the world works.

2. A world model isn't about understanding language but about being able to imagine

He explained the concept of a world model in a very plain way:

"Just like human scientists who can simulate in their minds what would happen if things were this way, AI also needs to have this ability."

It's not about understanding what you say but about predicting what will happen next, and which causes produce which effects, based on its own understanding of the world.

This may sound a bit abstract, but it has been implemented in several core directions at DeepMind:

Genie: A model that can interact with virtual environments, which is equivalent to understanding the rules while playing a game

AlphaFold: Using AI to predict protein folding structures back then was actually about making the model understand why the shapes are the way they are

Veo: Generating videos from text. It isn't just piecing shots together; the AI decides what the next second of frames should be based on cause-and-effect relationships

These seemingly different projects are actually doing the same thing: making AI understand the world like humans do, rather than just memorizing answers.
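The core idea can be reduced to a toy sketch (this is a generic world-model illustration, not any of DeepMind's actual systems): learn the dynamics of an environment from observed transitions, then roll the learned model forward to "imagine" future states without touching the real world.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden "true" physics: next state = A_true @ state (unknown to the model).
A_true = np.array([[0.9, 0.1],
                   [-0.1, 0.9]])

# Observe (state, next_state) transitions from random starting points.
states = rng.normal(size=(500, 2))
next_states = states @ A_true.T

# Fit a linear world model A_hat by least squares on the transitions.
A_hat, *_ = np.linalg.lstsq(states, next_states, rcond=None)
A_hat = A_hat.T

def imagine(model, s0, horizon):
    """Roll a dynamics model forward in the model's 'imagination'."""
    s = s0
    trajectory = [s]
    for _ in range(horizon):
        s = model @ s
        trajectory.append(s)
    return np.array(trajectory)

s0 = np.array([1.0, 0.0])
dreamed = imagine(A_hat, s0, horizon=5)
real = imagine(A_true, s0, horizon=5)
err = float(np.abs(dreamed[-1] - real[-1]).max())
print("prediction error at step 5:", err)
```

Real world models replace the linear map with deep networks and raw video instead of clean state vectors, but the loop is the same: fit dynamics from experience, then plan and predict inside the learned simulation.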

3. AGI doesn't rely on emergence but on combination

Hassabis believes that simply expanding the model scale won't automatically generate general intelligence.

What's truly likely to create AGI is to have multiple models perform their respective duties and work in collaboration:

LLMs are responsible for language and basic understanding

Video models are responsible for time series and physical intuition

World models provide the ability for simulation, reasoning, and prediction

Only when these pieces of the capability puzzle are gradually assembled will general intelligence become reliable, rather than seemingly smart but full of flaws.

For most people, AGI is an AI smarter than humans;

but for Hassabis, AGI is an AI that can come up with new ideas on its own.

This is why DeepMind regards the world model as the main focus for the next step. It's not just a new model but a core capability: understanding how the world works, rather than passively answering questions.

Second Account | How to Make Money from Models? It's Not about Being Stronger but More Cost-Effective

The technical route is one thing, but commercial implementation is another.

For AI to become general-purpose, it not only needs to be smarter but also affordable.

Demis Hassabis talked about DeepMind's product strategy: Instead of just promoting the Pro version, they also develop the Flash version. This isn't about the difference between large and small models but about making it usable in more scenarios.

Models that can be deployed at scale and cover diverse scenarios must be lightweight, fast, and cheap to run.

1. Flash: Train the main model with a strong model

Hassabis described it as training a more efficient version with the strongest model, just like using the brain to train a more dexterous clone.

This process is technically called distillation, but what he cares about isn't the technique itself so much as whether it can be deployed: the distilled model can be rolled out widely and become the version people actually use.

For example, the Gemini model line:

The Pro version is reserved for complex scenarios and cutting-edge applications

The Flash version serves end users and high-frequency tasks
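The distillation idea mentioned above can be sketched in a few lines. This is a toy version, not Gemini's actual training recipe: a "student" classifier is fit to the temperature-softened output distribution of a fixed "teacher", learning from the teacher's soft predictions rather than from labels.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z, temperature=1.0):
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Pretend "teacher": a fixed linear classifier standing in for a big model.
n_features, n_classes = 10, 4
W_teacher = rng.normal(size=(n_features, n_classes))

# Unlabeled inputs: the student learns from teacher outputs, not labels.
X = rng.normal(size=(1000, n_features))
T = 2.0                                    # softening temperature
teacher_probs = softmax(X @ W_teacher, T)  # soft targets

# Student (same shape here for simplicity; in practice much smaller),
# trained by gradient descent on cross-entropy to the soft targets.
W_student = np.zeros((n_features, n_classes))
lr = 1.0
for _ in range(500):
    student_probs = softmax(X @ W_student, T)
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# Average KL divergence from teacher to student after distillation.
kl = np.mean(np.sum(teacher_probs *
                    np.log(teacher_probs / softmax(X @ W_student, T)),
                    axis=-1))
print(f"mean KL(teacher || student): {kl:.5f}")
```

The temperature is the key trick: softened probabilities expose how the teacher ranks the wrong answers too, giving the student a much richer training signal than hard labels would.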

2. Commercialization isn't about selling models but integrating models into products

"AI shouldn't stay in web dialog boxes forever."

Hassabis said that one of the directions he's most optimistic about in the future is to make AI truly enter devices like mobile phones and glasses. That is, in the future, you won't have to look for AI; instead, AI will be right at your fingertips, on your screen, and in your daily actions.

DeepMind has already collaborated with brands like Samsung and Warby Parker to explore the feasibility of on-device AI.

This shows that DeepMind's commercial route isn't just API sales; it emphasizes deep integration of models into products.

3. AI should not only save manpower but also save resources

Hassabis said that efficiency is the top priority in the design of the entire Gemini line, especially the Flash series.

Faster inference

More balanced capabilities

Lower energy consumption

DeepMind's view of AI commercialization isn't about competing on features but about the total cost: what a model can do matters less than whether its cost is controllable and it can be deployed stably and reliably.

From the design of Flash and the distillation strategy to on-device partnerships and energy-efficiency priorities, what Hassabis presents isn't a model roadmap but a usage roadmap.

He doesn't dwell on how powerful the model is but on how to get AI actually used.

This is the foundation for the start of commercialization.

Third Account | Can AI Solve the Energy Problem on Its Own?

The Flash version addresses the energy consumption of the model itself, but that's not enough.

Demis Hassabis clearly stated that as we move towards AGI, energy will be equivalent to intelligence. The stronger the intelligence, the greater the power consumption. This is an inescapable physical law.

1. AI doesn't lack models; what it lacks most is electricity

There are never enough chips, Hassabis said bluntly. Even with Google's own TPU line alongside GPUs, computing chips are still in short supply worldwide.

Tracing back to the root, the real bottleneck is energy:

No matter how many GPUs there are, they still need electricity to run

No matter how large the data center is, it's limited by the power supply

No matter how powerful the model is, if the cost can't be reduced, it can only stay in the laboratory

This isn't just Google's problem but the ceiling for the entire industry. When every company is competing to expand computing power and train more powerful models, whether the energy supply can keep up determines who can truly bring AGI from the laboratory to the real world.

And this is becoming a key obstacle to the large - scale application of AGI.

2. Another ambition of DeepMind: Use AI to find energy

If AGI requires a huge amount of energy, then let AI solve the problem itself. DeepMind's strategy has two directions.

Expand supply: Produce new energy

  1. Collaborate with the US fusion company Commonwealth Fusion Systems to use AI to control the plasma in a fusion reactor. If fusion is achieved, it will provide almost unlimited clean energy.
  2. Hassabis's personal project: Can AI find room-temperature superconducting materials? Success would completely change how power is transmitted and stored.
  3. Redesign solar materials to significantly improve energy conversion efficiency.

Cut consumption: Improve energy efficiency

  1. Optimize the operating efficiency of power grids, data centers, and energy systems to reduce waste
  2. Look for new crystal structures to reduce energy consumption
  3. Help industries optimize production paths to reduce unnecessary energy consumption

AI not only consumes resources but can also improve resource efficiency.

This isn't the first time. From AlphaFold predicting protein structures to now looking for energy breakthroughs, Hassabis has always believed that AI is the ultimate tool for scientific discovery.

When every company and enterprise wants to deploy their own large models, the key to competition has changed:

  • Whoever makes AI more energy-efficient can deploy at larger scale
  • Whoever makes better use of every kilowatt-hour of electricity can survive longer

Ultimately, it's not about who is smarter but about who is more cost - effective.

Whether the energy supply can keep up determines how far this technological upgrade can go. And DeepMind's answer is to let AI solve the energy problem on its own.

Fourth Account | The Key to Competition: Integration, Deployment, and Survival

Beyond the technical route, there's also the competitive landscape.

In the past few years, OpenAI has led the consumer market. With ChatGPT, it quickly tied up with Microsoft and launched APIs, plugins, and the GPT Store.

Google seemed to be half a step behind.

But at the end of 2025, the situation changed. When Gemini 3 was launched, it simultaneously entered Google Search, Android systems, Gmail, Workspace... and was rolled out across the board.

Hassabis revealed that in the past two or three years, the biggest change he made wasn't in the R&D direction but in internal integration.

1. DeepMind: From a research institute to an engine room

In the past three years, Hassabis only focused on one thing: integrating the three teams of Google Research, Google Brain, and DeepMind into Google DeepMind.

This isn't just about team integration but also about rebuilding Google's entire AI infrastructure.

The result of the integration:

All AI technologies are developed uniformly by DeepMind

Once the technology is completed, it is directly spread to all Google products

Hassabis and Sundar Pichai (Google's CEO) have almost daily conversations to decide on the technical direction and product configuration

In the past, the three teams worked on AI separately, with overlapping routes and scattered resources. Now, it's like an engine room with unified scheduling.

More importantly, it's about speed. Hassabis said that they adjust the roadmap and plans every day. This isn't the steady approach of a large company but the sprint rhythm of a startup team.

The only goal is to achieve AGI quickly and safely.

The release efficiency of Google's AI products has been qualitatively improved.

2. A strong model needs to be deployed faster

To achieve rapid deployment, DeepMind established a "backbone network" to allow AI technologies to spread quickly to all Google products.

Hassabis described the release rhythm of Gemini 3 as simultaneous launch:

Once the model training is completed, it can be launched on Search, Gmail, and Workspace the next day

There's no need for rework or cross-team coordination; it's a one-step process

This wasn't possible before.

Hassabis said that they only truly entered this state with Gemini 2.5. Before that, there was still a lot of connection work between the model and the products.

This efficiency comes from two advantages:

First, DeepMind has a complete technology stack from chips to models. It has technological autonomy and doesn't need to wait for external cooperation.

Second, Google's product matrix is already a ready-made platform. Search, Android, Chrome, YouTube... AI capabilities can be plugged in immediately and pushed to billions of users at once.

While OpenAI is still negotiating partnerships one by one, Google has completed the deployment.

Hassabis said that in the next 12 months, AI capabilities will spread to more Google products.

3. What does being a few months behind mean for Chinese AI?

When talking about China's AI development, Hassabis believes that China's leading laboratories may only be a few months behind.

This means that the gap is rapidly narrowing in terms of training efficiency, model capabilities, and deployment speed.

DeepSeek's low-cost training approach and Alibaba's open-source models have demonstrated the engineering strength and catch-up speed of Chinese teams.

At the same time, Hassabis also pointed out the key for the next stage: from replicating technology to making original breakthroughs.

He believes that inventing a new technology may be 100 times harder than replicating it. Chinese laboratories have proven their ability to replicate. The next question is: can they create a new architecture or method, the way the Transformer was invented?

This isn't just a question for China but also a challenge for all AI laboratories.

For all AI players who want to win, Hassabis pointed out:

It's not about who releases more but about who can make the products actually work

It's not about who raises more funds but about who can survive after the bubble

OpenAI is under great pressure, Anthropic's products are also developing rapidly, and Chinese models are truly catching up. But DeepMind's approach isn't to deal with them separately but to integrate its advantages: a unified product line, its own platform, and one - step deployment.

In this long - distance race of AI, survival is more important than speed.

Conclusion | Where to Spend the Money in This Round of the AI Race

Demis Hassabis gave four directions:

Technologically, invest in the ability to understand the world and come up with new ideas, not just pile up data

Commercially, invest in the deployment efficiency of models, not just pursue performance

Resource-wise, invest in energy technology and energy-efficiency optimization. The scale of intelligence depends on the value of each watt of electricity

Competitively, invest in integration capabilities and product closed loops, not just release speed

DeepMind has given its own answers to these four accounts.

How other players choose will determine how far they can go.

📮 Reference materials:

https://www.youtube.com/watch?v=q6fq4_uP7aM&t=2s

https://podwise.ai/dashboard/episodes/6844347

https://www.linkedin.com/posts/arjunkharpal_the-man-behind-googles-ai-machine-watch-activity-7417829969470545920-pMXG

Source: Official media/Online news

Typesetting: Atlas

Editor: Shensi

Chief editor: Turing

--END--

This article is from the WeChat official account