
Six major AI figures on one stage: AGI is no longer a thing of the "future".

AI深度研究员 · 2025-11-08 09:41
On November 7, 2025, a roundtable discussion for the 2025 Queen Elizabeth Prize for Engineering laureates was held, featuring Jensen Huang, Yoshua Bengio, Geoffrey Hinton, Fei-Fei Li, Yann LeCun, and Bill Dally.

On November 7, 2025, after the awards ceremony of the Queen Elizabeth Prize for Engineering in London, a roundtable dialogue began rewriting people's perception of the future of AI.

The six participants are not ordinary industry representatives, but key figures in this wave of AI revolution:

Geoffrey Hinton, Yoshua Bengio, Yann LeCun: the three founders of deep learning;

Fei-Fei Li: the initiator of ImageNet and a pioneer of spatial intelligence;

Bill Dally: the chief designer of GPU computing architecture;

Jensen Huang: the biggest promoter of AI industrialization.

This is an extremely rare collective dialogue.

There is only one core topic: Has Artificial General Intelligence (AGI) really arrived?

No one gave a standard definition, and no one declared the technology complete. But over the next 30 minutes, the six pioneers conveyed the same signal from their respective vantage points: AGI is no longer a distant goal; it has already begun to play a role in reality.

Hinton said: "In 20 years, machines will defeat everyone in a debate."

Jensen Huang said: "We are already putting AGI-level intelligence to practical work today."

Fei-Fei Li said: "In some fields, machines have already surpassed humans, but the direction of their evolution may not be anthropomorphic."

LeCun was blunt: "Today's large models do not equal real intelligence. We don't even have a machine as smart as a cat."

Behind the differences, there is a consensus: the paradigm is shifting.

Section 1 | Forty Years of Accumulation: How Did AGI Appear?

When the host asked, "What was the epiphany moment in your life, the moment that led you onto the path of AI?", the six took turns recounting their starting points.

These stories span several decades but piece together a clear timeline: today's AGI did not emerge suddenly; it is the result of a step-by-step evolution over forty years.

Yoshua Bengio said that he first developed a strong interest in AI when he read Geoffrey Hinton's early papers during his postgraduate studies. At that time, he suddenly realized that perhaps there was a simple set of principles behind human intelligence, just like physical laws.

It was this discovery that made him decide to devote himself to neural network research.

Decades later, when ChatGPT launched, he was shocked again: "My God, what are we doing? We've created machines that can understand language and have goals. But what if their goals don't align with ours?"

So he completely shifted his focus to AI safety and ethics research, changing from understanding intelligence to constraining it.

Geoffrey Hinton's memories go even further back.

"In 1984, I tried to get a small model to predict the next word in a sentence," he said. "It could learn the relationships between words on its own. That was a miniature language model." At the time he had only 100 training samples, but he saw the prototype of the future: as long as a model could predict the next word, it could start to understand the world.

That was the prototype of all subsequent large language models, but at the time there was neither sufficient computing power nor enough data.

He paused for a moment and added: "It took us 40 years to achieve today's results."
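The intuition Hinton describes, that learning to predict the next word forces a model to learn relationships between words, can be sketched with a toy bigram counter. This is purely illustrative: his 1984 system was a small neural network, and the corpus, names, and outputs below are invented for the sketch.

```python
from collections import defaultdict

# Toy next-word predictor built from bigram counts. Purely illustrative:
# Hinton's 1984 model was a small neural network; this corpus is made up.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat ate the fish",
]

# Count how often each word follows another.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

print(predict_next("the"))  # -> cat ("cat" follows "the" most often)
print(predict_next("sat"))  # -> on
```

Even at this tiny scale, the counts encode relationships between words ("sat" goes with "on"); the step from here to today's models is replacing counts with a neural network and scaling up the data and compute.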

Bill Dally experienced two key epiphanies.

The first came at Stanford in the late 1990s, while he was thinking about the "memory wall": the dilemma that accessing memory costs far more energy and time than the computation itself. His idea was to organize computation into kernels connected by data streams, completing more operations per memory access.

This idea later evolved into stream processing and eventually into GPU computing.
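The "kernel" idea Dally describes, doing more arithmetic per memory access, can be sketched as follows. This is a plain-Python analogy, not his stream-processing hardware; the function names and data are invented for the sketch.

```python
# Illustration of the kernel idea: do more arithmetic per memory access.
# Plain-Python analogy of kernel fusion; names and data are invented.

data = list(range(100_000))

def scale_then_offset_unfused(xs, a, b):
    """Two passes over memory, with an intermediate array in between."""
    scaled = [a * x for x in xs]      # pass 1: read xs, write `scaled`
    return [s + b for s in scaled]    # pass 2: read `scaled`, write result

def scale_then_offset_fused(xs, a, b):
    """One fused pass: same math, no intermediate, half the memory traffic."""
    return [a * x + b for x in xs]

# Same result either way; the fused version simply touches memory less.
assert scale_then_offset_unfused(data, 2, 1) == scale_then_offset_fused(data, 2, 1)
```

On a GPU the same principle operates at far larger scale: fusing operations into one kernel avoids round trips to memory, which is exactly the cost the memory wall makes dominant.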

The second came at a breakfast in 2010. "I had breakfast with Andrew Ng at Stanford," he said. Ng's team was training neural networks on 16,000 CPUs at Google, teaching machines to find cats on the Internet. Dally realized this was no longer a laboratory curiosity but a scalable computing model. He returned to NVIDIA and, together with Brian Catanzaro, replicated the experiment with 48 GPUs. The results convinced him completely: GPUs are the real engines of deep learning.

"We must design GPUs specifically for deep learning," he said. "That breakfast changed the direction of NVIDIA, and the rest of my life."

Fei-Fei Li's epiphany came from another dimension: data.

Human intelligence develops while inundated with vast amounts of sensory data in its early years; machines get nothing comparable.

Between 2006 and 2007, as she transitioned from graduate student to young professor, she tried algorithm after algorithm, Bayesian methods, support vector machines, neural networks, but the machines still couldn't generalize to recognize new samples.

She and her students finally realized: What was lacking was not the algorithm but the data.

So they decided to do something that seemed crazy at the time: manually label 15 million images in three years to create ImageNet, covering 22,000 categories. That dataset later became the cornerstone of the AI vision revolution and gave machines the ability to understand the world for the first time.

She said: "Big data drives machine learning." This is the foundation of all of today's AI scaling laws.

Yann LeCun was one of the earliest pioneers.

"As an undergraduate, I was captivated by one idea: letting machines learn on their own instead of telling them what to do." In 1983, as a graduate student, he read Hinton's paper for the first time; two years later they met over lunch and found they could communicate well.

"I think I'm too lazy, or too stupid, to write the rules of intelligence by hand. Letting machines self-organize and learn is the way to go."

Interestingly, he and Hinton argued in the late 1980s over whether supervised or unsupervised learning was the way forward. The success of ImageNet later pushed the entire field toward supervised learning. But around 2016 and 2017, they realized they had to return to self-supervised learning, which is how today's large language models are trained.

Forty years later, he still insists: "The core of intelligence is self-organization, not instruction."

Finally, it was Jensen Huang's turn.

"For me, the most important moment was realizing that designing chips and building deep-learning systems share the same underlying logic."

He explained that he was among the first engineers to design chips using high-level abstractions and structured tools. Around 2010, when he saw deep learning also using frameworks and structured methods to develop software, it struck him: "This is so similar to the way of thinking in chip design."

"Maybe we can scale software capabilities the same way we scale chip design."

Later, when research teams from Toronto, New York, and Stanford contacted NVIDIA for computing support almost simultaneously, he understood: AI was moving from theory to engineering. Once an algorithm can run in parallel on one GPU, it can run on many GPUs, then on many systems, then across data centers. The rest is engineering extrapolation.

The six stories together form a map of AI's evolution over forty years.

Hinton sowed the seeds of the algorithm, Bengio turned it into a scientific problem, LeCun taught it to self - organize, Fei - Fei Li enabled it to see the world, Bill Dally made it run faster, and Jensen Huang turned it into an industrial engine.

Of course, their work is far more complex than this, intertwined and inspiring each other. But these six people have indeed jointly shaped the foundation of today's AI.

Today's AGI is not a prodigy born overnight but a historical process these six people jointly shaped over forty years.

Section 2 | The Timeline Splits: Some Say It's Here, Some Say It Never Will Be

Forty years ago, they each embarked on the path of AI. Forty years later, standing in front of the same finish line, they see different futures.

Then the host posed the question: how long until we reach human-level intelligence?

This is a question that everyone can't avoid and on which there has never been a consensus.

Over the next few minutes, the six gave six completely different answers. They were talking about genuinely intelligent machines, systems that can understand, think, and act, not merely model progress or release cadence.

LeCun was the first to speak, directly denying the premise of the question.

"This won't be an event, because capabilities will expand gradually across many domains."

"Maybe in the next five to ten years we will make significant progress toward new paradigms. Progress will come, but it will take longer than we think."

His meaning is clear: don't wait for a singularity moment. AGI is a gradual process, not a sudden leap.

Fei - Fei Li offered another perspective: The question shouldn't be whether AI will surpass humans, but in which aspects it has already done so.

"Some parts of machine intelligence will surpass human intelligence; some already have. How many of us can recognize 22,000 objects, or translate 100 languages?"

She continued: "Just as airplanes fly higher than birds but in a completely different way, machine intelligence will do many powerful things, but human intelligence will always play a key role in human society."

What she means is: Surpassing has already happened, but it's not replication or replacement.

Jensen Huang was completely different. He named no year; instead, he rejected the question on the spot.

"We have enough general intelligence to turn the technology into a large number of socially useful applications over the next few years. We are doing this today."

"I don't think it matters, because at this point it's a bit of an academic question. From now on, we will apply this technology, and the technology will keep improving."

What he gave was not a prediction but a real-time progress bar: it's not that AGI will be useful someday; it's already in use now.

Hinton's answer was more specific: "If you have a debate with a machine, it will always beat you. I think this will happen within 20 years."

He said this calmly, but the statement was full of implications. This is not just a prediction but a confirmation: We are on that path; it's just a matter of speed.

Bill Dally reminded everyone: Maybe the question itself is wrong.

"Our goal is not to build AI to replace humans or outdo humans, but to build AI that augments humans, complementing what humans are good at."

AI does what it's good at, while humans retain creativity, empathy, and collaboration: complements, not replacements. In his view, the very notion of "reaching human-level intelligence" is misleading.

Yoshua Bengio spoke last and put forward the most controversial view.

"I have to disagree on this point. I don't see any reason why we can't, at some point, build machines that can do almost everything we can do."

The evidence he cited: AI's planning ability has grown exponentially over the past six years. If the trend continues, AI will reach the level of a human engineer within five years. More importantly, many companies are already using AI to do AI research and to design next-generation AI systems, which could trigger further breakthroughs.

But he ended with a caveat: "I'm not saying it will definitely happen. We should stay agnostic and avoid grand statements, because there are many possible futures."

Six answers, six senses of time.

LeCun said it is a gradual evolution that will take longer than expected; Fei-Fei Li said some capabilities have already surpassed ours; Jensen Huang said it is already in use; Hinton predicted it within 20 years; Bill Dally questioned the question itself; Bengio said engineer-level within five years, with plenty of uncertainty.

What we see is not a clear path but an increasingly fragmented perception of time.

The judgment of the future essentially reflects the differences in their understanding of intelligence itself.

Section 3 | From Language to Action: The Next Step of Intelligence

While debating the future, they are more concerned about the ongoing transformation.

In the past few years, AI's progress has been concentrated in language ability. Large language models such as ChatGPT and DeepSeek are helping users worldwide answer questions, write summaries, and draft solutions.

But in this dialogue, several top researchers unanimously pointed out: In the next stage, AI needs to move from language to action.

✅ Fei - Fei Li was the first to point out the direction.

"Human intelligence has never relied solely on language. Our brains are built to process space, to perceive, reason, move, and act. These are the areas where AI is still very weak today."

She pointed out that if the most powerful current language models are used for spatial judgment tasks, the results are poor. This has also been the focus of her research in the past few years: spatial intelligence.

"We've been too focused on machines that can talk, and ignored that the world is three-dimensional and demands physical presence, orientation, and hands-on ability."

✅ LeCun's attitude was more sober.

He emphasized one judgment repeatedly throughout the dialogue: the current paradigm of large language models is still far from real intelligence.

"Personally, I don't believe the current large-language-model paradigm can be scaled to human-level intelligence. We don't have a robot as smart as a cat. We're still missing some major things." In his view, AI progress is not just a matter of more infrastructure, more data, and more investment; it is a scientific question: how do we advance toward the next generation of AI?

The direction he has long advocated: let machines learn from the environment on their own rather than having humans feed them answers, the way a baby learns through observation and trial and error rather than from prompts.

"We can't feed a child hundreds of millions of dialogue examples, yet the child still learns language, because the child learns actively in the environment."

This is what he calls self-supervised learning, and he believes it is the key to breaking through the current bottleneck.
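The self-supervised idea described here, extracting a training signal from raw data itself with no human-provided labels, can be sketched as a toy fill-in-the-blank task. Purely illustrative: real systems use large neural networks, and the text and function names below are invented for the sketch.

```python
from collections import Counter, defaultdict

# Toy self-supervised setup: the "labels" are words hidden from the raw
# text itself, so no human annotation is needed. Illustrative only.
text = "the cat chased the mouse and the cat caught the mouse".split()

# For each (left, right) context, count which word appeared in between.
context = defaultdict(Counter)
for left, mid, right in zip(text, text[1:], text[2:]):
    context[(left, right)][mid] += 1

def fill_blank(left, right):
    """Guess the hidden word between `left` and `right`, or None if unseen."""
    mids = context.get((left, right))
    return mids.most_common(1)[0][0] if mids else None

print(fill_blank("the", "chased"))  # -> cat
```

The point is where the supervision comes from: the text supplies both the question (the context) and the answer (the hidden word), which is the same trick, at vastly larger scale, behind today's language-model pretraining.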

✅ Jensen Huang brought the problem back to the real - world scenario.

Today, AI is not just a dialogue tool but is starting to take over work. It's not a smarter search engine but a partner that can complete tasks.

"We've already seen AI writing code, diagnosing diseases, doing finance. It's not just talking; it's helping you do the work."

To describe this change, he offered a new metaphor: "In the past we called software a tool, but now AI is a factory. It generates intelligence in real time, the way a power plant generates electricity in real time. We need hundreds of billions of dollars' worth of these factories to serve trillion-dollar industries built on intelligence."

This means we can no longer regard AI as a program that answers questions, but as a production system that runs continuously and outputs constantly.

The change we're seeing is: AI is shifting from being good at talking to being able to do things.

It's moving from the chat window into real - world processes; from passive reaction to active execution. This is not just a functional enhancement but a paradigm shift.

This is also why, when they talk about AGI, they are no longer arguing about the size of the parameters but discussing:

How should AI work with humans?

Where should it be placed?

What should be its ability boundaries?

Conclusion | It's Not About When, It's Happening Now

In this dialogue, no one gave a standard definition of AGI, and no one declared its official birth. But almost everyone described its way of existence.

Jensen Huang said: "The AI factory has already started operating."

Hinton said: "In 20 years, it will win every debate."

Fei-Fei Li reminded us: "We've been too focused on what it says and ignored what it does."

AGI is not a product that suddenly goes live one day but a reality that is seeping into every organization, every process, and every position.

At the end of the dialogue, the host said: If we have such a dialogue again in a year,