The six giants of AI made a rare appearance on the same stage. Fei-Fei Li engaged in a heated debate with Yann LeCun. Jensen Huang said, "You're all wrong."
The AI revolution is real, but even those trying to chart its course have no clear map. Recently, six top figures in AI, Yann LeCun, Fei-Fei Li, Jensen Huang, Geoffrey Hinton, Bill Dally, and Yoshua Bengio, gathered after jointly winning the Queen Elizabeth Prize for Engineering and held a high-level dialogue about artificial intelligence.
A moment when humanity's brightest stars shine together!
When these six sit down to talk, you know it's not going to be a simple conversation!
This interview is extremely valuable as it managed to bring these six AI titans together.
This week, NVIDIA CEO Jensen Huang, Meta Chief AI Scientist Yann LeCun, and top computer scientists Yoshua Bengio, Geoffrey Hinton, Fei-Fei Li, and Bill Dally were jointly awarded this year's Queen Elizabeth Prize for Engineering.
In this interview, the big shots shared their moments of epiphany in their careers.
These "aha" moments not only pointed them in the direction of their research but also completely changed the course of technological advancement in human society.
Moreover, all six of them engaged in a fierce debate around a core question:
Are we truly in the midst of a real AI industrial revolution? Or is AI the biggest bubble in history, on the verge of bursting?
Forty years of waiting for a moment of "epiphany"
The host said they are the six most outstanding and influential people on the planet.
This is by no means an exaggeration.
Where did this AI revolution come from?
The answer is not a sudden flash of inspiration from a genius, but the long-term perseverance of a group of people.
The spark of thought was ignited 40 years ago.
AI godfather Geoffrey Hinton recalled that in 1984 he used what was then an extremely rudimentary computer to train a tiny model to predict the next word in a sequence.
"I found that it could actually learn the meanings of words!" he said.
That was the earliest prototype of all of today's large language models.
An idea lit in the dark that has traveled through 40 years to reach us.
Yann LeCun admitted that he was a "lazy" engineer in his youth. He didn't want to program line by line to create intelligence but was fascinated by the idea of "letting machines learn intelligence on their own".
This seemingly lazy idea is the core philosophy of machine learning.
But having an idea is not enough. A revolution needs fuel and an engine.
In 2006, Fei-Fei Li, then a young professor, found that every algorithm was stuck on the same problem: there was too little data.
A child absorbs a vast amount of information while growing up, yet our machines were starving in a data drought.
So she and her team did something that seemed utterly crazy at the time: they spent three years manually labeling 15 million images and built a dataset called ImageNet.
After this "fuel" was poured into the AI field, it instantly ignited the entire industry.
Meanwhile, at NVIDIA, Jensen Huang and his colleagues were also building an increasingly powerful "engine".
The GPUs they had originally designed for gaming turned out, unexpectedly, to be perfect tools for deep-learning computation.
In 2010, at a historic breakfast, Stanford professor Andrew Ng told NVIDIA scientist Bill Dally that he had used 16,000 CPUs to recognize cats in Internet images.
Bill Dally and his colleagues went back and replicated the experiment with only 48 GPUs.
At that moment, he had an epiphany: "We should manufacture specialized GPUs for deep learning."
Strung together, these stories form a "prequel" to the birth of modern AI:
The spark of thought was ignited during the AI winter. Once the fuel of data and the engine of computing power were in place, the revolution became unstoppable.
Moments of Epiphany of the Six (Highlights)
Yoshua Bengio
- After reading Hinton's early papers, he had an intuition that there might be simple principles like physical laws to explain intelligence and build intelligent machines.
- Two and a half years after the emergence of ChatGPT, he became alarmed: machines can understand language and have goals, yet they are difficult to control. What if they become even smarter or are misused? So he turned to research on safety and countermeasures.
Bill Dally
- In the late 1990s, he had an epiphany about the "memory wall": connect cores with "streams" to do more arithmetic and less memory access. This laid the foundation for GPU computing.
- At a breakfast with Andrew Ng in 2010: Google used 16,000 CPUs to find "cats". Inspired by this, in 2011, he and his colleagues replicated the experiment with 48 GPUs.
- The result was astonishing. He resolved to build GPUs dedicated to deep learning and to keep optimizing them.
Geoffrey Hinton
- In 1984, he built a small-scale language model: it used back-propagation to predict the next word, and it automatically learned word-meaning features and their interactions. The idea was the same as today's LLMs, just tiny, trained on only 100 examples (see the illustrative sketch after this list).
- The obstacle was the lack of computing power and data, though he didn't realize it at the time.
Jensen Huang
- Around 2010, he received early signals of deep learning from the University of Toronto, New York University, and Stanford at the same time. He found that developing software using "frameworks and structured representations" was highly analogous to chip design and could be scaled.
- Epiphany: Once the algorithm works in parallel on a single card, it can be extended to multiple cards, multiple machines, and data centers. The rest is just an engineering deduction: how large the data is, how large the network is, and what problems can be solved.
Fei-Fei Li
- From 2006 to 2009, she had an epiphany: the difficulty lies not only in algorithms but also in data. Thus, she built ImageNet: 15 million images, 22,000 categories, and crowdsourced annotation. Big data drives machine learning.
- In 2018, when she was the Chief Scientist of Google Cloud AI, she believed that AI is a "civilization-level technology" that affects all industries and individuals. She returned to Stanford to co-found HAI and proposed "human-centered AI".
Yann LeCun
- As an undergraduate, he was fascinated by the concept of intelligence through "training rather than programming". In 1985, he met Hinton and started from the trainability of multi-layer networks.
- He and Hinton had a debate: supervised vs. unsupervised/self-supervised.
- The success of ImageNet once made the whole field turn to supervised learning.
- From 2016 to 2017, he emphasized self-supervised learning again; LLMs are a good example. The next step is non-language data such as video, and self-supervised learning remains a key challenge.
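To make Hinton's early idea concrete, below is a minimal, illustrative sketch in Python/NumPy of the general technique he describes: a tiny network trained with back-propagation to predict the next word, with word-meaning features learned as a by-product. The toy corpus, vocabulary, feature dimension, and learning rate are invented purely for illustration and are not his original 1984 setup.

```python
# A minimal, illustrative sketch (NOT Hinton's actual 1984 model): a tiny network
# that learns word features by predicting the next word with back-propagation.
# Corpus, sizes, and hyperparameters are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                  # vocabulary size, feature (embedding) dimension

E = rng.normal(0, 0.1, (V, D))        # word-feature matrix, learned during training
W = rng.normal(0, 0.1, (D, V))        # output weights mapping features to next-word scores
lr = 0.1

# (current word, next word) training pairs.
pairs = [(idx[corpus[t]], idx[corpus[t + 1]]) for t in range(len(corpus) - 1)]

for epoch in range(500):
    for x, y in pairs:
        h = E[x]                      # look up the current word's feature vector
        logits = h @ W
        p = np.exp(logits - logits.max())
        p /= p.sum()                  # softmax distribution over the next word
        # Cross-entropy gradient, back-propagated into both weight matrices.
        dlogits = p.copy()
        dlogits[y] -= 1.0
        dh = W @ dlogits              # gradient flowing back into the word features
        W -= lr * np.outer(h, dlogits)
        E[x] -= lr * dh

# Each row of E now holds the learned features ("meaning") of one word.
print({w: np.round(E[idx[w]], 2).tolist() for w in ("cat", "dog")})
```

In this toy setup, "cat" and "dog" always appear in the same context (both precede "sat"), so their learned feature vectors tend to be pushed in similar directions, a tiny version of the "learning the meanings of words" effect Hinton describes.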
In this era of frenzy, are we in a bubble?
With the history told, let's return to the most pressing question of the moment:
NVIDIA's market value has soared, and the whole world is talking about AI. Is all of this real value or just another Internet bubble?
Jensen Huang gave an excellent answer to this question.
During the Internet bubble of the early 2000s, the industry laid an enormous amount of fiber-optic cable, but the vast majority of it was unlit "dark fiber", and demand lagged far behind construction.
Today, almost every GPU you can find is lit up and put to use.
Why? Because AI has fundamentally changed the way "value" is produced.
Jensen Huang said that we are creating a brand-new industry: the intelligence factory.
In the past, software was a "tool" that you bought and used.
For the first time, AI has become "productivity" itself. It is not content but intelligence generated in real time.
You can't produce intelligence in advance and store it.
Every time you ask ChatGPT a question, it is "producing" an answer for you.
This production process requires huge computing power, just like a factory needs machines and electricity.
Therefore, we need "AI factories" (data centers) worth hundreds of billions of dollars to serve a brand-new, trillion-dollar industry built on intelligence.
We are in the early stage of building this industry. How could it be a bubble?
In other words, this is the infrastructure-building period of a brand-new "intelligence revolution", following the agricultural revolution and the industrial revolution.
We are in the stage of laying water, electricity, and gas pipelines for the new world, and the demand has just begun.
However, Fei-Fei Li and LeCun had a "debate" on the spot.
Fei-Fei Li emphasized that AI is still a very young field. Besides language, vast frontiers such as "spatial intelligence" are still waiting to be explored.
Yann LeCun pointed out that the bubble lies in the idea that "the current large-language-model paradigm can ultimately develop into human-level intelligence". He personally doesn't believe it and thinks that fundamental breakthroughs are needed.
In the ultimate future, how far away is "human-level AI"?
This is the most exciting climax of the entire dialogue.
When asked "how far are we from intelligence comparable to humans", the six brilliant minds at the table painted six different pictures of the future.
The "Pragmatic" Jensen Huang
This question isn't important; in a sense, it has already happened.
There is already enough "general intelligence", and it has been turned into a large number of useful applications.
Whether it is "human-level" doesn't matter; the key is to keep applying it to solve major problems.
He believes that we already have powerful enough AI to solve a large number of real - world problems.