
Latest interview with "Godfather of AI" Hinton: There's nothing that AI can't replicate, and humans are losing their last bit of uniqueness.

Friends of 36Kr, 2025-07-21 16:18
Is it a countdown to the AI takeover of the world? The arrival of superintelligence is only a matter of time.

On July 21, it was reported that Geoffrey Hinton, the Turing Award winner known as the "Godfather of AI," recently held a fireside chat with Nick Frost, co-founder of the AI startup Cohere. Frost, the first employee Hinton hired at the Google Brain lab in Toronto, has since become a leading figure in the AI startup field.

During this conversation, the two top experts delved deeply into cutting-edge topics in the AI field, including: Do large language models truly understand human language? Can digital intelligence truly surpass biological intelligence? Which fields will become the most promising application scenarios for AI? What is the real attitude of tech giants towards regulation? Additionally, they specifically discussed the dual dangers brought by AI technology and exchanged ideas on how to establish an effective security protection system.

Here are the core views of Hinton and Frost:

1. Hinton pointed out that when large language models make mistakes in complex tasks, it doesn't mean they lack the ability to understand, just as a person with a learning disability can still answer simple questions correctly while struggling with harder ones. Reasoning ability is progressive, not black-and-white.

2. Frost compared the working mechanisms of AI and the human brain to the flight of airplanes versus birds: different paths but similar effects. He emphasized that AI is very useful, but it should not be mistaken for being human-like.

3. Frost proposed the "spectrum theory of consciousness": From stones, trees, large models to humans, consciousness is progressive. Hinton, on the other hand, is more inclined to believe that the subjective experience of AI may be close to that of humans.

4. Current models cannot continuously learn from experience like humans. They can only statically acquire knowledge through two stages (pretraining + reinforcement learning). To update knowledge, the underlying model still needs to be retrained.

5. Both Frost and Hinton believe that the era of "language as the operating system" is coming soon: users will be able to drive office systems through natural language alone to perform complex tasks.

6. Hinton emphasized the dual risks brought by AI: In the short term, it may be used to manipulate elections and manufacture weapons; in the long term, it may "take over the world" due to surpassing human intelligence.

7. Hinton believes that large models show real "creativity": by compressing knowledge into a limited number of connections, they find deep relationships between pieces of knowledge, even surpassing most humans.

8. Hinton thinks that most knowledge work will be replaced within 5 years; Frost believes that AI's capabilities have a ceiling and that many human tasks still cannot be completed by it.

9. Hinton said bluntly: "People like Altman don't mean what they say." AI companies seem to welcome regulation on the surface but actually avoid rules with real binding force. Currently, we can only rely on public opinion to promote policy progress.

Here is the condensed version of the interview content:

Caption: From left to right are Hinton, the host, and Frost

01 Large models making mistakes ≠ inability to think! Humans also have cognitive flaws

Question: What kind of transformative moment is the AI field currently in? What is the biggest challenge at this stage?

Hinton: Getting large language models to reason better and avoid simple mistakes is indeed a challenge. However, compared with what was expected a few years ago, today's reasoning ability has improved significantly. In the past, many people believed in the classic linguistics view and thought this technology would never go much further. Every year they said it could not progress any more, and every year it kept evolving.

Of course, when the model makes mistakes in reasoning, many people will say, "This proves it doesn't understand at all." But I like to use an example to illustrate: Suppose you give some simple reasoning questions to a person with a learning disability, and he can answer them correctly. But when the questions become more complex, he may not perform well. You wouldn't say, "This means he doesn't understand reasoning at all." Instead, you'd say, "He doesn't do well when facing more complex questions."

Recent studies show that the model can quickly give answers when dealing with simple questions. When the questions are a bit more difficult, it takes more time but can still answer correctly. When facing even more complex questions, it doesn't spend much time, but its answers are completely wrong. Then some people will say, "It can't reason at all."

In fact, its reasoning ability is fine. It handles simple and moderately difficult questions correctly; it just struggles with more complex ones, where humans make mistakes too.

Question: So do you think these problems are essentially bottlenecks rather than inherent limitations?

Hinton: Exactly!

Frost: I think there are many bottlenecks limiting the influence of this technology at present, but not all of them are directly related to AI itself: privacy issues, deployment issues, and the data it can access, for example. Even if this technology doesn't progress any further, it can still have a greater impact than it does now; even without technological breakthroughs in the next few years, it will continue to affect our lives. A great deal of work in other areas of computer science still needs to be done for this technology to have its full impact.

In the long run, this technology still differs greatly from humans in many respects. It learns much more slowly than we do, needing a huge number of examples to make progress. Once the large language models we create are trained, they become static entities: they don't learn continuously from new experiences like humans, and we have to retrain them.

Hinton: It can do it, but you don't want it to learn all the time because you can't predict what it will learn.

Frost: No, no, actually it's a technical limitation. We train large language models in two stages: one is the training of the base model, and the other is reinforcement learning from human feedback.

The training of the base model requires reading a large amount of text data, such as trillions of bytes scraped from the open web. Then, through reinforcement learning, we make the model better at cooperating with humans; the amount of data used in this stage is much smaller.

But if you want to add new information to the model or have it master new abilities, you need to retrain the entire base model. So although you can provide new information through prompts and slightly adjust its behavior, if you want it to truly "learn" new information, you still need to retrain it from scratch, and that is not something we can currently do on demand.
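To make the contrast concrete, here is a deliberately toy Python sketch, not any real LLM system: the "weights" dictionary, the facts, and the retrain step are hypothetical stand-ins, meant only to illustrate the difference Frost describes between feeding new information through the prompt (weights untouched) and retraining (weights updated).

```python
# Toy illustration only: MODEL_WEIGHTS is a hypothetical stand-in for a
# pretrained model's parameters, not how a real LLM stores knowledge.
MODEL_WEIGHTS = {"capital of France": "Paris"}  # frozen after (pre)training

def generate(prompt: str, context: str = "") -> str:
    """Toy inference: answer from frozen weights, or from text supplied in the prompt."""
    for fact, answer in MODEL_WEIGHTS.items():
        if fact in prompt:
            return answer
    if context and any(word in context for word in prompt.split()):
        return f"(answered from prompt context) {context}"
    return "I don't know."

def retrain(new_facts: dict) -> None:
    """Stand-in for the expensive step Frost describes: rebuilding the base model."""
    MODEL_WEIGHTS.update(new_facts)  # in reality: re-run pretraining / fine-tuning

# 1) In-context: the model can *use* new information without permanently learning it.
print(generate("Who won the 2031 election?",
               context="The 2031 election was won by Candidate X."))

# 2) Without that context, the frozen model still doesn't know.
print(generate("Who won the 2031 election?"))

# 3) Only retraining (updating the weights) makes the knowledge stick.
retrain({"2031 election": "Candidate X"})
print(generate("Who won the 2031 election?"))
```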

02 The spectrum theory of AI consciousness: Stone → Tree → AI → Human, which level are you at?

Question: You have the same view on the working principle of large language models, but there are differences in how you understand human thinking. Professor Hinton, your view is that what neural networks do is similar to human thinking, while Frost has a different view. What exactly is the difference between you?

Frost: I think large language models are very useful and will become even better. In fact, I realized this a few years ago and founded a large language model company because I believed this technology would bring profound changes. The ability to predict the next word based on the human knowledge base is undoubtedly very useful. But I think, fundamentally, what large language models do is completely different from the human thinking mechanism.

I like to use the analogy of "flight" to explain this: we have figured out how to make humans fly. Airplanes can fly and are very useful, but airplanes generate lift through propulsion and wing design, while birds generate lift by flapping their wings. So although both can fly, the principles behind their flight are completely different.

Similarly, I think the operating mechanism of artificial intelligence is also very different from the human mode of thinking. Although artificial intelligence is very useful, there is still a big gap between its working principles and the human thinking mechanism.

Hinton: Since their mechanisms are different, do you think it means that AI doesn't understand what it's saying at all?

Frost: That's another question. Although the working principle of large language models is different from human thinking, they have similarities at some levels. Consciousness should not be regarded as a simple binary concept but as a spectrum. The "consciousness" of stones is at the far end of this spectrum. The consciousness of trees is a bit closer to that of humans, and large language models are in the middle of the spectrum.

Hinton: Don't you agree with my view that they have the same kind of consciousness? For example, if you put a prism in front of its camera lens so that it points to the wrong place, don't you think it would have some kind of mistaken subjective experience?

Frost: I think it's wrong to regard subjective experience as a binary concept. We should recognize that consciousness is a spectrum. Stones have almost no consciousness, trees have a certain degree of consciousness, and large language models are in the middle of this spectrum. What's your view on this spectrum?

Hinton: I agree that the "consciousness" of large language models may be closer to that of humans.

Question: Why do you think humans and AI systems are essentially doing the same thing? What's your basis for this judgment?

Frost: I think this will directly affect where the technology gets applied. We are rapidly approaching a new future: the office scenario will change completely, and you will be able to get the computer to do the work just by using voice commands. In fact, my own workflow increasingly relies on voice interaction, and this tipping point is not far away.

Hinton: Indeed, it's really crazy! About 20 years ago, people often said, "How great it would be if I could talk directly to the computer!" "Why can't I just tell the printer to print a document?" And now, we can talk to computers, and they seem to understand us. This in itself is amazing.

Frost: I still think this is the most wonderful technology I've ever encountered. Every time I see its performance, I'm amazed. But I think the future direction won't be that you turn on the computer and can communicate with it like talking to a person. I don't think you'll expect it to behave like a human.

If you look at how people use language models now, you'll find that they've learned that AI can be instructed to complete specific tasks, but if it makes a mistake, you have to start over. If you give it the correct file and it can access the relevant information, it can give an answer. But if it doesn't have this information, unlike humans, it can't search for the answer on its own and doesn't know where to look.

Hinton: But don't you think this boundary is gradually blurring?

Frost: Essentially, it hasn't changed. Although the performance of the model is constantly improving and the usage threshold is decreasing, the fundamental difference between large language models and humans is still very obvious.

Hinton: I'll use data to refute you! An experiment conducted by the UK's SafetyX company showed that after ordinary users had in-depth conversations with large language models, many of them would, at the end of the experiment, voluntarily say "goodbye" to the AI as if saying goodbye to an old friend. This may imply that a subtle connection has been established between humans and machines unconsciously.

Frost: I think this doesn't mean much, for two reasons. First, as early as the 1960s, before the emergence of language models, experiments had shown that simple heuristic rules could trigger human emotional projection, and similar phenomena already existed at that time.

Second, when I was a child, I also named my dolls and established a deep emotional connection with them. Obviously, they didn't understand anything.

Personally, I also use polite language when using language models because it makes me feel better. But when users calm down, they'll realize that the "experience" of the model is fundamentally different from human perception.

Hinton: But I think more and more people are starting to treat these large language models as if they were living beings.

Question: What impact do you think our interaction with these systems will have? Because sometimes, we may trust these systems too much. For example, the latest paper from the MIT Media Lab points out that this kind of interaction may weaken human critical thinking.

Hinton: I'm not sure. But I've noticed that I'm increasingly inclined to trust GPT-4, often accepting its output without question and rarely verifying it. I may need to change this habit.

Frost: Whenever a new technology emerges, educators have always cried, "This will make children dumber." The ancient Greeks worried that the spread of writing would erode memory; this kind of worry has a long history. I do think that as technology develops it becomes part of our daily lives, which inevitably affects how we think and how we use technology. But in the long run, the invention of writing, printing, and computers ultimately advanced human wisdom.

04 Is the countdown to an AI takeover of the world underway? Superintelligence will eventually arrive; it's only a matter of time

Question: Maybe we'll become better at learning how to interact with AI systems, or we can learn to complement each other's advantages?

Hinton: This will definitely happen. The benefits of AI assistants are huge. Each of us will have very smart AI assistants that know a lot about us, and we won't need traditional assistants anymore. But this also brings a huge negative impact: when these AIs become smarter than us, do they still need us?

Question: This involves the issues of risks and protective measures. Risks can be divided into two categories: short-term foreseeable risks and broader existential risks. What do you think are the main existential risks?

Hinton: I think "existential risk" refers to AI taking over the whole world. The short-term risks are also deadly, including its use to interfere in elections, promote fascism, surveil humans, assist in developing lethal autonomous weapons or biological weapons (such as designing new viruses), and even launch new forms of cyber-attacks.

Question: When you mention that these systems become smarter than us, what exactly do you mean by "smarter"? What can they do? Can they have subjective intentions?

Hinton: For example, if you debate any topic with these AIs, they'll always win. My definition of "smarter" is very simple: they can crush you in a debate, come up with solutions you'd never think of, and quickly understand new things. Most AI researchers think this will eventually happen. Frost may be one of the few who hold a different view. Do you think this is just a theoretical risk?

Frost: I think a few questions need to be clarified. First, will the existing technology (or its foreseeable development path) bring such risks? Second, will humans at some point in history create machines comparable to human intelligence and thereby face similar risks?

Professor Hinton and I agree on the second point: there will definitely be "artificial minds" that surpass the human brain in the future, but they won't develop that fast, and they will advance in a predictable way. Current large language models, even if their scale keeps expanding, can't reach the level of human intelligence, just as no matter how fast you build an airplane, it can never become a hummingbird.

Hinton: What about multimodal models? If these models can act in the real world and have perceptual abilities, do you think we can get there that way? I agree with you that language models alone, despite their amazing progress, may not reach the level of human intelligence. But once we let them act in the world and create their own sub-goals, I think we'll eventually get there.

Frost: Existing multimodal models are essentially still sequence prediction models. It's just that their input has expanded from text to images or audio, and then they continue to predict more sequences. So they're still the same technology, just with different operating methods.
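To illustrate Frost's framing, here is a deliberately simplified sketch, not any real architecture: a toy bigram predictor trained on a single sequence in which text, image-patch, and audio-frame tokens are interleaved, showing that the prediction step itself is indifferent to modality. The tokenizer functions and token names are invented for illustration only.

```python
from collections import defaultdict

# Toy "tokenizers": in real systems these would be learned encoders/codecs.
def text_tokens(s):   return [("text", w) for w in s.split()]
def image_tokens(im): return [("image", p) for p in im]   # e.g. patch ids
def audio_tokens(au): return [("audio", f) for f in au]   # e.g. codec frames

class ToyBigramPredictor:
    """Predicts the next token from the previous one, regardless of modality."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict_next(self, prev):
        options = self.counts.get(prev)
        return max(options, key=options.get) if options else None

# One flat sequence mixing modalities; the predictor never treats them differently.
sequence = (image_tokens(["patch_1", "patch_2"])
            + text_tokens("a photo of a cat")
            + audio_tokens(["frame_1"]))

model = ToyBigramPredictor()
model.train(sequence)
print(model.predict_next(("text", "a")))  # -> ('text', 'photo')
```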

I think even if model scale keeps increasing, they still can't break through these fundamental limitations. But that doesn't mean the technology isn't threatening. I agree with Professor Hinton's concern about misinformation, and I'm even more worried that this technology will exacerbate income inequality, amplifying existing economic trends. We must ensure that it empowers the public rather than being controlled by a small monopoly class. As for the threat of biological weapons, I don't think large language models pose a substantial risk.

Hinton: Why do you think so? Researchers in the relevant field don't think so.

Frost: Our team is also doing relevant research and has published a paper.

Hinton: Have you tried to let these