A stunning claim from a Nobel laureate: only an AI that can independently derive general relativity in four years, as Einstein did, counts as AGI, and such an AI could complete 58 billion years' worth of human tasks.
Just now, Demis Hassabis, Nobel laureate and head of Google DeepMind, redefined AGI.
In February 2026, at the AI Summit in India, Hassabis gave an extremely tough definition of AGI —
“The Einstein Test”: restrict the AI's knowledge base to before 1911 and see whether it can independently derive the general theory of relativity, as Einstein did by 1915.
Can it do it? Congratulations, you've got an AGI.
Can't do it? Then you're still just looking at an advanced search engine.
Hassabis said this because in 1911, Einstein began to seriously think about the problems of gravity and acceleration (proposing a deeper version of the equivalence principle). In November 1915, he officially published the field equations of the general theory of relativity.
It took about four years from the systematic conception to the formation of the complete theory.
Clearly, Hassabis's test isn't measuring the AI's volume of knowledge but its capacity for original scientific discovery: whether it can make that “out of nothing” leap beyond the boundaries of known information.
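To make the protocol concrete, here is a minimal sketch of what such a knowledge-cutoff benchmark could look like in Python. Everything in it is an illustrative assumption rather than anything DeepMind has published: the document schema, the toy corpus, and especially the grading stub (real grading would need expert physicists, not string matching).

```python
from datetime import date

# Illustrative sketch of an "Einstein Test" harness; all names and the
# toy corpus below are hypothetical, not a real evaluation.

CUTOFF = date(1911, 1, 1)  # the knowledge boundary Hassabis proposes

def filter_corpus(corpus: list[dict]) -> list[dict]:
    # Step 1: restrict training data to pre-1911 publications, so the
    # model cannot simply retrieve the 1915 result from memory.
    return [doc for doc in corpus if doc["published"] < CUTOFF]

def passes_einstein_test(derivation: str) -> bool:
    # Step 2: decide whether the output amounts to general relativity.
    # Substring matching is only a stub for what would really require
    # expert human graders.
    return "G_mu_nu = (8 * pi * G / c**4) * T_mu_nu" in derivation

corpus = [
    {"title": "On the Electrodynamics of Moving Bodies", "published": date(1905, 9, 26)},
    {"title": "The Field Equations of Gravitation", "published": date(1915, 11, 25)},
]

print([doc["title"] for doc in filter_corpus(corpus)])
# -> ['On the Electrodynamics of Moving Bodies']  (the 1915 paper is excluded)
print(passes_einstein_test("G_mu_nu = (8 * pi * G / c**4) * T_mu_nu"))  # True
```

The date filter is the trivial half; nobody yet knows how to automate the other half, judging whether a model's output genuinely re-derives general relativity.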
Netizens are calling this the first truly meaningful definition of AGI.
Hassabis then delivered another blow:
All current AI systems, including his own company's Gemini, are “jagged intelligence” — super strong in some aspects but a mess in others.
It's still at least one or two key breakthroughs away from real AGI.
Some even speculate that an AI powerful enough to pass the test might discover not just general relativity but theories beyond it.
Elon Musk quickly fired back: what you've defined isn't AGI, it's superintelligence
As soon as the news came out, Elon Musk responded almost instantly.
His view is: “What Hassabis defined isn't AGI, but superintelligence.”
Musk's logic is very clear —
Einstein is one of the greatest physicists in human history; hardly anyone else among the entire human race could have independently reproduced the theory of relativity (after all, Einstein did it alone). If an AI can do it, and that AI can also be replicated without limit and run in parallel at a million-unit scale...
Then this thing is no longer at the “human level”. It's a being that crushes the entire human race.
Musk made it very clear: you're using the threshold for superintelligence as the passing bar for AGI, and that's a misalignment of standards.
Whoever has the better of the Hassabis-Musk dispute, one thing is undeniable: the industry's giants unanimously predict that AGI is getting very close.
Note that Demis Hassabis has recently shortened his timeline for AGI. Previously, he was more conservative (5 to 10 years).
But now, his exact words are: “Now, in 2026, we're at another critical point where AGI is about to arrive — perhaps within the next five years.”
OpenAI CEO Sam Altman predicts that AGI will be achieved in 2028. “If you're a sophomore now, you'll graduate into a world with AGI.”
At a recent event, Altman also said: “Just looking at the internal acceleration of our existing technology, I think we're already very close. Given the faster take-off I'm expecting now, I think superintelligence isn't far away.”
He emphasized again: We can expect OpenAI to achieve AGI/ASI by the end of 2028.
Even Francois Chollet, an “AGI skeptic”, believes that AGI can be achieved in 2030, which is only four years away.
Mustafa Suleyman, CEO of Microsoft AI, directly predicts that it's only 12 to 18 months until “most or even all” white-collar jobs are replaced by AI.
So, how should AGI be defined?
Back to the origin: Who exactly invented the term AGI?
Actually, the concept of “Artificial General Intelligence” (AGI) has a much shorter history than many people think.
In 1950, Turing proposed the famous “Turing Test”.
If a machine can converse in a way that leaves humans unable to tell whether it's a human or a machine, it counts as intelligent.
This was the earliest benchmark in the field of AI.
But the Turing Test has been widely criticized by later researchers — it only tests “imitation ability” and doesn't test real understanding and creation.
In 1956, the Dartmouth Conference officially launched the discipline of AI.
The pioneers at that time — McCarthy, Minsky, Simon — arrogantly predicted that machines would be able to do anything that human intelligence could do within 20 years.
What was the result? Two “AI winters”.
In 1997, an American scholar named Mark Gubrud first used the term “Artificial General Intelligence” in an academic discussion. He was discussing the future of fully automated military systems at that time.
In 2007, Ben Goertzel, at Shane Legg's suggestion, published a book called “Artificial General Intelligence”, which pushed the concept firmly into the mainstream.
Since then, AGI has become the most important conceptual anchor in the field of AI — it draws a line: on one side is “narrow AI that can do specific tasks”, and on the other side is “general AI that thinks comprehensively like a human”.
The concept of “superintelligence” was systematically defined by Oxford philosopher Nick Bostrom in his 2014 book of the same name:
An intelligent entity that far exceeds the most powerful human brains in almost all cognitive fields that humans care about.
Bostrom also subdivided three types of superintelligence —
- Speed-type: As smart as a human, but 100,000 times faster.
- Quality-type: Not only faster, but also crushing humans in the depth of its thinking.
- Collective-type: One million AIs working together, with collective intelligence far exceeding the sum of human civilization.
This is exactly where Musk's rebuttal to Hassabis lands: if an AI can independently derive the theory of relativity and can also be replicated at a million-unit scale, isn't that precisely the “collective superintelligence” Bostrom defined?
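Plugging in the numbers already on the table (Einstein's four years of work, Bostrom's 100,000x speed-up, Musk's million-unit fleet), the back-of-the-envelope arithmetic runs as follows; these are the sources' rhetorical figures, not measurements:

```latex
% Speed-type: Einstein's ~4-year derivation at Bostrom's 100,000x speed-up.
% 4 years ~ 4 x 365 x 24 x 60 ~ 2.1 x 10^6 minutes.
\[
  \frac{4\ \text{years}}{100{,}000}
  \;\approx\; \frac{2.1 \times 10^{6}\ \text{min}}{10^{5}}
  \;\approx\; 21\ \text{minutes}
\]
% Collective-type: Musk's million-unit fleet, each doing Einstein-grade work.
\[
  10^{6}\ \text{copies} \times 4\ \text{person-years}
  \;=\; 4 \times 10^{6}\ \text{person-years of output every 4 calendar years}
\]
```

On those assumptions, one such system re-derives general relativity in about twenty minutes, and the fleet produces millions of person-years of top-physicist output per real-world year, which is why Musk files the whole scenario under superintelligence rather than AGI.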
So, how far are we really from AGI?
Within 15 years, AI will be able to complete tasks that would take humans 58 billion years!
Clearly, these heavyweights' definitions of AGI aren't describing the same thing at all.
Musk's standard is the lowest: If an AI can take exams, drive, and program, it's considered AGI. In essence, it's an “all - around AI assistant”.
Sam Altman came up with a five-level framework: from Level 1 (a chatbot like ChatGPT) to Level 5 (an AI that can independently run a company). He currently places us between Level 2 and Level 3: AI can already do basic reasoning and is moving toward autonomous action.
Yann LeCun is the most pessimistic (or perhaps the most rigorous): he believes the current LLM architecture has fundamental flaws. It doesn't understand causality, has no physical intuition, and can't learn continuously from real-world experience. Reaching AGI, in his view, requires a brand-new “world model” paradigm; the current route won't get there.
And Hassabis's “Einstein Test” strikes exactly at the core problem: every current large model is, in essence, doing pattern matching and information recombination.