
Demis Hassabis: ChatGPT has led AI astray.

字母AI · 2026-04-10 20:08
We haven't even fully figured out "what time is".

We might have traded the chance to cure cancer for a chatbot.

This is not a conspiracy theory but the exact logic of Demis Hassabis.

When asked about the moment ChatGPT was released, this Nobel laureate, CEO of Google DeepMind, and creator of AlphaFold gave an answer that almost runs against the industry consensus:

"If it were up to me, I'd keep AI in the lab for a longer time and have it do more things like AlphaFold - maybe cure cancer and such."

But the reality is that the explosion of products like ChatGPT has plunged the entire AI industry into high-speed competition.

The above content is from an interview published by Huge Conversations on April 7, 2026. In this conversation, Hassabis clarified four things:

Where AI truly changes the world

How AI has deviated from its original path

The real risks that need to be worried about

How humans should respond

Below are the most notable parts of this conversation.

01 The places where AI truly changes the world are hard for us to see

For most people outside the industry, the impression of AI still stops at chatbots, writing assistants, or image-generating tools.

In this interview, Hassabis pointed out a fact that's easily overlooked: the more important applications of AI actually happen outside these products.

The truly important changes occur at a level far from daily life, in laboratories, in databases, and in scientific problems that most people have never encountered.

The most typical example is AlphaFold. It's a system developed by Hassabis and his team at DeepMind, aiming to predict the final three-dimensional structure of a protein based solely on its amino-acid sequence.

You can think of it this way: The structure of a protein determines its function in the human body, and the function determines how diseases occur and how drugs work.

Of course, the actual situation is much more complicated, and we won't go into details here.

In the past, for scientists to figure out the structure of a protein, it would take years of repeated attempts in the laboratory, with costs often reaching hundreds of thousands of dollars or even more.

Many proteins are so complex in structure that deciphering them was all but impossible.

But AlphaFold has turned this into a computational problem. By inputting a sequence, you can get a highly reliable three-dimensional structure prediction in just a few seconds.
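To make "inputting a sequence" concrete, here is a minimal Python sketch for looking up a precomputed structure in the public AlphaFold Protein Structure Database. The endpoint path follows the database's public API at alphafold.ebi.ac.uk; the accession P69905 (human hemoglobin subunit alpha) is just an example, and the exact response fields are an assumption to check against the live documentation.

```python
# Sketch: looking up a precomputed AlphaFold prediction in the public
# AlphaFold Protein Structure Database. The endpoint path reflects the
# database's public API; treat the response's exact JSON fields as an
# assumption and confirm them against the live docs before relying on them.
import json
import urllib.request

API_BASE = "https://alphafold.ebi.ac.uk/api/prediction/"

def prediction_url(uniprot_accession: str) -> str:
    """Build the lookup URL for one UniProt accession
    (e.g. P69905, human hemoglobin subunit alpha)."""
    return API_BASE + uniprot_accession

def fetch_prediction(uniprot_accession: str) -> list:
    """Fetch the prediction metadata as JSON (requires network access)."""
    with urllib.request.urlopen(prediction_url(uniprot_accession)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(prediction_url("P69905"))
```

The point of the sketch is the shift in kind: what was once years of wet-lab work is now, for already-computed proteins, a single lookup.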

DeepMind could have followed the common practice in the industry and provided an online service, where scientists submit a protein sequence, the system runs a calculation, and then returns the result.

But in an internal meeting, Hassabis realized that instead of computing structures on demand, it would be better to compute them for every known protein in nature.

So, under his leadership, DeepMind calculated about 200 million protein structures in batches and made them freely available to the world.

In a sense, this was a public-good project: the field of structural biology suddenly had an infrastructure it could call on at any time.

Hassabis explained that more than 3 million scientists are using AlphaFold today. For many researchers, it's no longer just a "tool" but more like a default prerequisite.

In drug research and development, AlphaFold has changed the starting point of the entire process: in the past, the path was repeated trial-and-error in the laboratory; now, much of that trial-and-error has moved onto the computer in advance.

In the past, researchers needed to first identify a possible target and then design a molecule to "stick" to this protein. This process relied on a large number of wet experiments: make a molecule, test it; if it doesn't work, make a small change and test it again.

But after the intervention of AI, this logic began to change.

In Isomorphic Labs, a drug company spun off from DeepMind, this process has been reorganized into a "computation-first" model: AI first generates a large number of candidate molecules in the computer, predicts their binding effects with the target protein, and at the same time quickly checks whether these molecules will harm other proteins in the human body and what side effects they might bring...

Then, based on this feedback, the molecular structure is continuously adjusted, and the next round of search begins.

The entire process has become a high-frequency iterative search. The trial-and-error that originally took a lot of time and resources in the laboratory has been compressed into multiple rounds of computer calculations.

Wet experiments haven't disappeared; they've just been pushed to the last step of the process: Only a few of the most promising candidate molecules will actually enter experimental verification.
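The loop described above, generate, score, filter, mutate, repeat, can be sketched as a toy search. Everything below is a stand-in: the "molecule" is a plain number vector and the scoring function is hand-written, nothing like Isomorphic Labs' actual models, but the shape of the iteration is the same.

```python
# Toy sketch of a "computation-first" screening loop: generate candidates,
# score predicted target binding minus off-target risk, keep the best,
# mutate them, and repeat. The scoring function is a hypothetical stand-in
# for real structure-based prediction models.
import random

random.seed(0)  # deterministic for illustration

def predicted_fitness(molecule):
    # Stand-in model: reward the first coordinate being near 1.0 ("binds
    # the target") and penalize the rest ("off-target interactions").
    binding = -abs(molecule[0] - 1.0)
    off_target = -sum(abs(x) for x in molecule[1:])
    return binding + 0.5 * off_target

def mutate(molecule):
    # Small random perturbation: the next round searches near survivors.
    return [x + random.gauss(0, 0.1) for x in molecule]

def search(rounds=20, population=50, keep=5):
    pool = [[random.uniform(-1, 1) for _ in range(4)]
            for _ in range(population)]
    for _ in range(rounds):
        pool.sort(key=predicted_fitness, reverse=True)
        survivors = pool[:keep]  # best candidates survive each round
        pool = survivors + [mutate(random.choice(survivors))
                            for _ in range(population - keep)]
    return max(pool, key=predicted_fitness)

best = search()
```

Only after many such cheap in-silico rounds would the handful of top candidates move to the expensive wet-lab step, which is the reordering the article describes.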

On the traditional path, the R&D cycle of a drug is about 10 years, with a success rate of only about 10%. This computation-centered approach, at least in theory, has a chance to change both figures.

Hassabis' own judgment is that from now on, almost all new drug R&D processes will use AI to some extent.

In his view, this is how AI is most likely to change the world. It won't appear in the form of any popular product, nor will it constantly remind you of its existence on your phone screen.

It's more like an underlying system that, once built, will quietly change the way the entire field operates.

That is to say, if we only look at chatbots, we may only be seeing the least important part of AI.

02 AI is being "pushed forward"

If we follow Hassabis' own vision, the development path of AI could have been different: slower and more "scientific".

He made a rather unusual statement in the interview: if it were up to him, he would keep AI in the laboratory for 10 or even 20 more years and advance it like a large-scale scientific project.

He used CERN (the European Organization for Nuclear Research, the world's largest particle-physics research institution) as a reference: organize the world's best scientists, break problems down step by step, and establish a clear understanding of each key link, rather than pushing forward quickly without full understanding.

On this path, the goal of AI is not to produce products as soon as possible but to prioritize solving the most fundamental and crucial scientific problems - AlphaFold is a typical example of this thinking.

In his vision, these "slow and in-depth" breakthroughs can continuously bring returns to humanity on the way to AGI (Artificial General Intelligence).

But the reality is not like this.

Hassabis' explanation is straightforward: The development of technology often doesn't follow the expected path.

One of the key turning points is the breakthrough in language models and the explosive spread brought by ChatGPT.

Language turned out to be much easier to crack than many expected. The Transformer architecture, combined with reinforcement-learning methods, is enough for models to show astonishing abilities with language, concepts, and abstract expression.

ChatGPT was originally just a research experiment, but once it was released, it quickly became a global product.

It has changed the rhythm of the entire industry and turned AI into a fierce ongoing competition.

A large number of users have started to directly experience the most cutting-edge AI capabilities, and the market's focus has also shifted from long-term problems in the laboratory to product forms that can be quickly implemented.

Business competition has accelerated. Companies have to release new models more frequently, and the evolution of model capabilities has also become deeply tied to user growth and market feedback.

Hassabis doesn't completely deny this acceleration. He admits that this development method also brings several practical benefits: Capabilities that might have taken longer to be implemented can now enter the real world earlier.

Most of the AI that people use today is only a few months behind the internal laboratory version - which was almost unimaginable before. A large amount of real - world use also brings more diverse data. After all, no matter how perfect the internal testing is, it's difficult to cover the complex scenarios brought by millions of users.

But having benefits doesn't mean the path is ideal. It's more like a result driven by reality.

Hassabis' attitude is actually clear. He is both a scientist and an engineer, and his stance reads as an idealist's compromise with reality: he knows what the more ideal path would be, but he accepts that the world doesn't operate according to ideals.

The progress of technology is largely unpredictable. Once a certain direction achieves a breakthrough first, it will quickly attract resources, capital, and attention.

As a result, the capabilities that are easier to productize are continuously amplified, while the scientific problems that might otherwise have been prioritized are set aside for now.

From this perspective, today's AI is not moving in the "most valuable" direction but is on a faster and more uncertain path driven by various forces.

03 The real risks are not deepfakes but two bigger things

Most discussions about AI focus on one type of problem: deepfakes, misinformation, and content distortion.

Note that "most" here means ordinary people who use AI, not industry insiders.

Deepfakes are indeed a problem, but in Hassabis' view, they're not the most worrying kind.

He gave a very clear ranking in the interview:

The first type is the "human" problem. From individuals to countries, will they use the technologies originally intended for scientific research, medical care, and infrastructure in harmful ways?

This risk is not new, but AI has changed its scale and efficiency. An ability that originally had a small - scale impact, once magnified, can bring about completely different levels of consequences.

The second type is the problem of AI itself. To be precise, it's the uncertainty brought about by the change as AI evolves from a "tool" to a system capable of independently completing tasks.

Hassabis mentioned that today's systems don't have this ability yet, but in the next few years, as AI enters the so-called "agentic" stage (the stage where it can autonomously execute complete tasks), the problem will become more serious.

The key is not whether it's intelligent enough, but whether we can ensure that it always acts according to the established goals, doesn't bypass the rules, and doesn't deviate from the original intention during the execution process.

This is very difficult technically because the more intelligent the system is, the more shortcuts it can find, and these shortcuts may not necessarily meet the designers' initial expectations.

Hassabis believes that these two types of risks are the key issues to be faced in the next few years.

In contrast, the currently most-discussed problems, deepfakes and misinformation, are more like "problems that have already occurred". They still need to be solved, but there are relatively clear technical paths, such as watermarking AI-generated content: DeepMind has developed such a technology internally (SynthID) to identify and track the source of generated content.
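For intuition only, here is a toy provenance check built on a keyed tag. To be clear, SynthID works very differently: it embeds the watermark inside the generated media itself and is designed to survive edits. This sketch merely attaches a verifiable tag alongside the text, a much weaker and trivially removable scheme, but it shows the basic idea of binding content to a provider-held secret.

```python
# Toy content-provenance sketch: attach a keyed HMAC tag to generated text
# and verify it later. NOT how SynthID works (SynthID embeds the watermark
# in the media itself); this only illustrates the verification idea.
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # hypothetical key held by the content provider

def tag(text: str) -> str:
    """Return a short hex tag binding the text to the secret key."""
    return hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]

def verify(text: str, claimed_tag: str) -> bool:
    """True if the tag matches, i.e. the text is unmodified since tagging."""
    return hmac.compare_digest(tag(text), claimed_tag)

original = "This paragraph was generated by a model."
t = tag(original)
assert verify(original, t)                   # untouched text checks out
assert not verify(original + " (edited)", t)  # any edit breaks the tag
```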

If we view the entire risk structure as a timeline: In the short term, we're facing chaos at the information level; in the medium term, the more serious problem is the loss of control at the capability level.

As for the long term, it's still too early to say.

In this sense, Hassabis pointed out: What really needs to be focused on is not what AI can say but what it can do.

When AI starts to move from "answering questions" to "executing tasks", the nature of the risk will also change.

In other words, many of the risks we talk about today occur only at the information level; what we really need to be vigilant about is AI's approaching ability to act.

This sounds a bit like science fiction. Many people may have imagined that one day, a super-intelligent being awakens "self-awareness" and then replaces or even rules over humans.

Hassabis himself said that he has read a lot of science fiction. His favorite is Iain M. Banks' Culture series, which is set in a post-AGI world a thousand years from now. But he thinks some of its scenarios may come true within 50 years.

However, his own vision is relatively optimistic: The risks are resolved, humanity safely passes through the AGI moment, AGI is in everyone's pocket, it's beneficial to society, and it's used to tackle what he calls "the root-node problems in science", such as energy, medicine, and materials.

04 Immerse yourself in every available AI tool

If we take the previous discussion a step further, it's easy to arrive at a question: When AI starts to participate in scientific discovery, decision-making, and even task execution, what's left for humans?

In other words, "Why are humans special?"

This question was raised very candidly in the second half of the interview: the host said she found herself doing something humans have done repeatedly throughout history, trying to find a reason to prove that "we're special".

We once thought the Earth was at the center of the universe, but then we found out it wasn't; we thought only humans could mourn, but then we found that elephants can too; we thought only humans could create art, but now, AI can also draw, write, and compose music.

Every time a boundary is broken, humans will ask this question again: Why are we special?

Hassabis didn't give a direct answer to this question, and frankly, it's a hard one to answer simply.

He mentioned a classic computational theory framework: the Turing machine.

Theoretically, a general-purpose computer can calculate any "computable" problem; in the understanding of many neuroscientists, the human brain itself can also be regarded as an approximate computational system.

If this premise holds, then the human brain and the AI systems we're building are, in a sense, of the same kind. That's why AI can continuously approach and even exceed human performance in certain abilities.
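To make the classic framework above concrete, here is a minimal Turing-machine simulator: a finite rule table driving a read/write head over an unbounded tape, which is the whole model of "computable". The sample rule table is a deliberately trivial program, unary increment, appending one "1" to a run of "1"s.

```python
# Minimal Turing-machine simulator. A machine is just a rule table mapping
# (state, symbol) -> (symbol to write, head move, next state), run over an
# unbounded tape (modeled as a dict from position to symbol).
def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)            # unwritten cells are blank
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Sample program: scan right over '1's, write one more '1' at the first
# blank, then halt (unary increment).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

result = run_tm(rules, "111")  # "111" -> "1111"
```

The interest of the model is not this toy program but the premise the article states: anything computable can, in principle, be expressed as such a rule table, whether it runs on silicon or, approximately, on neurons.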

And if intelligence itself can be replicated, then what's really worth asking may no longer be "what's the difference between us and AI" but "what are we really trying to understand?"

Hassabis mentioned in the interview that his favorite subject when he was a child was actually physics.

What attracted him were not the applications but the most fundamental questions: What is time? What is consciousness? How does the universe work?

But these questions still don't have answers to this day.

One of his core motivations for doing AI is to use it as a tool to help humans understand these questions.

From this perspective, AI is not just a system that replaces human abilities but more like a tool used to expand the boundaries of cognition.

This also explains why, when talking about the future, his tone is quite optimistic: If the risks can be controlled