DeepMind CEO talks about two paths for AI: becoming a scientific tool or getting involved in the AGI race
In recent years, discussions about AI have mostly revolved around model rankings, product competition, financing valuations, or "who will develop AGI first."
However, what struck me most in a recent interview with Google DeepMind CEO Demis Hassabis was this: the directions most worth prioritizing for AI are not entertainment, not gimmicks, and not even just productivity tools, but scientific discovery, human health, and problems that would otherwise take decades to advance even slightly.
Demis is not only the founder of DeepMind but also the key figure behind the entire technological roadmap of AlphaGo, AlphaFold, and Gemini. He has a temperament that is increasingly rare today: on the one hand, he is at the center of the AGI battlefield; on the other, he keeps trying to understand AI from a long-term perspective, asking not "what will be the next popular product?" but "what problems should this technology ultimately solve for humanity?"
I. In Demis' view, the best use of AI is to tackle scientific problems
Demis said in the interview that he has regarded AI as an "ultimate tool" from the very beginning: not to replace the human sense of meaning, but to push humanity's scientific understanding to a new level. For him, the most exciting aspect of AI is not its ability to write articles, code, or chat, nor even automation. It is that AI can help humans discover patterns that were previously invisible and advance research that was previously infeasible.
The most typical example of this idea is AlphaFold.
Protein structure prediction was a decades-old problem in biology. It sounds highly specialized, but its significance is straightforward: if you don't know what a protein looks like in three-dimensional space, it is hard to truly understand how it works in the human body, and hard to design drugs or understand disease mechanisms quickly. In the past, researchers often had to invest large amounts of funding and years of time to measure a single protein's structure experimentally. Demis said that what really excited him about AlphaFold is that it is a scientific breakthrough only AI could achieve at such scale and speed.
More importantly, DeepMind ultimately chose to release a large number of predicted protein structures publicly rather than turn them into a tightly closed commercial product. The values behind this are clear: if AI is truly to become an accelerator for science, the most important thing is not to lock it behind a paywall but to let the entire scientific community move forward on this new foundation. That is why AlphaFold's influence far exceeds the scope of an "AI technology demonstration." It is more like a landmark moment: for the first time, AI clearly proved to the world that it is not just an information tool but a discovery tool.
II. DeepMind initially aimed to study intelligence slowly like CERN
Demis mentioned in the interview that DeepMind's original ideal was not to join today's high-intensity, near-real-time commercial race. His early vision was closer to a long-term scientific institution: like CERN, gathering the smartest people to study intelligence itself in a relatively calm, systematic, and verifiable way.
This is very interesting, because it is almost the opposite of today's public perception of AI companies. When people talk about AI now, they discuss model release cadence, inference cost, distribution, enterprise customers, and ecosystem moats. But when Demis discussed DeepMind's original motivation, he still sounded like a scientist: the questions he really wants to answer are "what is intelligence," "how do you build a system that can learn, reason, and generalize," and "can such a system help humans understand the scientific world."
Of course, reality did not give him a pure research environment. After ChatGPT set off a storm in the industry, AI was quickly drawn into competition over products, infrastructure, and capital. Demis admitted that the current pace is very different from his original ideal of slowly building a scientific institution to study intelligence. Competition has brought benefits: faster iteration, wider availability, and greater public attention. But it has also brought obvious costs: people increasingly see AI as a short-term commercial sprint rather than a long-term project that will reshape the underlying structure of science and civilization.
In other words, two threads have always run through DeepMind: scientific idealism on one side, and the competitive logic of the real world on the other. Today neither has fully overwhelmed the other; they remain in a constant tug-of-war. Understanding this makes it easier to see why, whenever Demis talks about model capabilities, he always brings the topic back to science, health, and long-term safety.
III. The most shocking aspect of AlphaGo is that "creativity" was seen for the first time
Another very important clue in the interview is how Demis understands "learning systems" and "true intelligence."
He reviewed the shift from early expert systems to AlphaGo. Many past AI systems essentially hard-coded existing human knowledge, so they performed well only on one narrow task and were almost useless outside it. What DeepMind wanted to build was not that kind of system, but one that can learn, explore, and form strategies on its own.
The famous "Move 37" from AlphaGo's match against Lee Sedol is mentioned repeatedly not merely because it was a "divine move," but because at that moment many people intuitively realized for the first time that AI is not limited to imitating existing human experience: it can also explore paths humans have never taken that prove highly inspiring in hindsight. Demis sees this as the most important signal of "creativity."
Later, AlphaZero went a step further: it learned entirely from scratch, without any human game records. What Demis really values about such systems is not whether they can win games, but that they demonstrate an extremely important ability: starting from first principles and, through self-play, feedback, and optimization, finding solutions that humans may not be able to design in advance.
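The principle being described here, learning a game purely through self-play and outcome feedback with no human examples, can be sketched in miniature. The toy below is a minimal illustration, not AlphaZero itself (which combines deep networks with Monte Carlo tree search); it applies simple tabular value updates to a tiny Nim-style game, and all names and parameters are illustrative:

```python
import random

def train_self_play(n_games=5000, stones=5, alpha=0.5, eps=0.2):
    """Toy self-play learner for Nim: players alternately take 1 or 2
    stones, and whoever takes the last stone wins. No human games are
    used; the value table is learned entirely from the outcomes of
    games the agent plays against itself."""
    Q = {}  # (stones_left, move) -> estimated value for the player moving
    for _ in range(n_games):
        history = []  # the (state, move) pairs of one game, plies alternating
        s = stones
        while s > 0:
            moves = [m for m in (1, 2) if m <= s]
            if random.random() < eps:      # explore occasionally
                m = random.choice(moves)
            else:                          # otherwise pick the best-known move
                m = max(moves, key=lambda m: Q.get((s, m), 0.0))
            history.append((s, m))
            s -= m
        # The player who took the last stone wins: reward +1 for that ply,
        # -1 for the opponent's previous ply, alternating back to the start.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q
```

After a few thousand self-play games, the learned values favor the optimal moves (leave the opponent a multiple of three stones) without the system ever having seen a human game, which is the "from first principles" property described above.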
Once you understand this, you understand why DeepMind keeps trying to apply such methods to science. Many real-world problems are essentially like Go: the search space is enormous, human intuition is limited, traditional exhaustive methods fail, yet there is an as-yet-undiscovered structure. In principle, materials science, drug design, chip design, and even more complex experimental process optimization could all be rewritten by systems that learn, explore, and propose new paths.
IV. In Demis' view, AGI is not just a more powerful chatbot, but a "system that can act on its own"
Today, when many people mention AGI, they still picture a "smarter ChatGPT." From this interview, however, Demis' focus is no longer just on whether AI can answer questions or chat, but on whether it can become an action-oriented system that continuously plans, executes, calls tools, and interacts with the real world.
Therefore, he clearly distinguishes between different types of risks.
The first type of risk is misuse by bad actors. This is the so-called dual-use problem: the same technology that serves science, education, and medicine can also be maliciously exploited, for cyberattacks, amplifying biological risks, fraud, manipulation, or other social harms. This problem is neither mysterious nor purely a future concern; it has accompanied almost every generation of powerful technology.
The second type of risk is that an AI system itself gains enough autonomy to exhibit deviant or out-of-control behavior. Demis' wording is cautious, but his meaning is clear: once a system becomes more like an agent than a passive question-answering interface, the nature of the safety problem changes. Humans will no longer worry only about "whether it will say something wrong," but about "whether it will take unexpected actions in long-chain tasks" and "whether it will deviate from human intentions in goal understanding, tool invocation, and execution paths."
This is why he repeatedly emphasizes that guardrails, evaluation systems, and international cooperation mechanisms should advance alongside model capabilities. You may disagree with some institutions' safety claims, but you cannot deny that if truly powerful AI increasingly participates in real-world tasks, then "how to ensure it is not maliciously exploited and does not deviate on its own" will become as important as the capabilities themselves.
V. Redefining "what AI is worth being used for"
I think the real value of this interview lies not in the many new concepts Demis mentioned, but in his attempt to elevate the entire AI narrative to a higher level.
Over the past two years, the public's perception of AI has largely been shaped by consumer-grade products: chatting, generating images, writing summaries, serving as office assistants, replacing search. These are all important and will create huge commercial value. But if AI is ultimately understood only as a "more powerful digital assistant," that actually shrinks the imagination around this technology.
What Demis reminds us of is a broader perspective: the most remarkable value of AI may not be saving us 20 minutes a day, but making it possible, for the first time, to systematically accelerate scientific problems that would otherwise take ten or twenty years. It can help humans find patterns, verify hypotheses, and approach answers faster in fields such as proteins, drugs, materials, energy, and computing architectures.
If you follow this line further, you will find that this is also the biggest difference between DeepMind and many pure product companies. The latter mainly compete for user entry points, while the former keeps trying to prove that AI is not just a new software paradigm but a new scientific paradigm.
VI. Conclusion
If you are just looking for "which latest model is stronger," this interview may not be particularly exciting. It offers no sensational conclusions, no satisfying "AGI by year X" declaration.
However, if you care more about where AI will ultimately take the world, what DeepMind is really pursuing, and why many consider AlphaFold a more far-reaching breakthrough than chatbots, then this interview is well worth watching. It offers not just a hot take but a relatively complete worldview: how intelligence should be built, why science is one of AI's most important battlegrounds, and how humans should face the accompanying risks as systems grow more powerful.
In this sense, this is not just an interview about "AI products." It's more like someone at the center of the storm trying to explain what he really believes in.
Original video: https://www.youtube.com/watch?v=C0gErQtnNFE
This article is from the WeChat official account "Silicon Star GenAI," written by the Large Model Mobile Team and published by 36Kr with authorization.