Nature's Blockbuster: The AGI Predicted by Turing Has Already Been Achieved, but Humans Dare Not Admit It
Nature's Groundbreaking Review! UCSD Research Team Claims: AGI Has Already Arrived, and Large Language Models like GPT Have Demonstrated Broad Human-Level Intelligence.
Have humans achieved Artificial General Intelligence (AGI) without even realizing it?
Yes, that's right.
A new review article published in the journal Nature makes this claim.
This is a groundbreaking revelation that shakes the foundations of the scientific community and society. Artificial General Intelligence (AGI) is not an unattainable dream; it has already arrived and is staring at us through the screens of the AI tools we use daily.
Even if you don't agree with their views, it's worth a read: Listening to different opinions helps us see clearly. Only by keeping an open mind, neither fearful nor blindly enthusiastic, can we better embrace the future.
This Is AGI
Some people say that creating human-like intelligence is like "climbing a tree to reach the moon."
But now it seems that the tree is tall enough, and the moon isn't that far after all.
The four authors are from the University of California, San Diego (UCSD): associate professor of philosophy Eddy Keming Chen, professor of AI, data science, and computer science Mikhail Belkin, associate professor of linguistics and computer science Leon Bergen, and professor of data science, philosophy, and policy David Danks.
This article is neither science fiction nor a prediction from tech executives. It lays out a systematic argument that AI is not only smart but has genuinely become "general."
After Copernicus and Darwin, the authors argue, this is the third cognitive revolution to challenge the human-centered view of the world.
Forget about the hype and horror stories.
The research team points out that the arrival of AGI is well-documented and irrefutable.
Large language models like Grok are not just imitating humans; they are surpassing humans in ways that would amaze even Turing himself.
Recall that in 1950, Turing conceived the famous "imitation game," now known as the Turing Test, to determine whether a machine can deceive humans into thinking it is human.
Fast forward to March 2025: GPT-4.5 not only passed the test but outperformed humans, being judged human 73% of the time, more often than the actual human participants were.
But this is just the appetizer.
These "AI giants" are not only holding endless conversations with millions of people worldwide; at the same time, they are:
• winning gold and silver medals at the International Mathematical Olympiad,
• collaborating with leading mathematicians to prove theorems,
• formulating scientific hypotheses that can be verified in the laboratory,
• passing doctoral-level exams with ease,
• writing error-free code for professional programmers,
• and even composing poems comparable to those of great poets.
These abilities cover multiple domains such as mathematics, language, science, and creativity. They demonstrate "breadth + sufficient depth" of general intelligence, which aligns with the definition of "general intelligence" at the average human level, rather than requiring perfection or omnipotence.
However, in a survey in March 2025, 76% of top AI researchers said that current methods are "unlikely" or "highly unlikely" to achieve Artificial General Intelligence (AGI).
This is astonishing: How can machines that can pass the Turing Test and solve Olympiad math problems not have general intelligence?
The Evidence Is Overwhelming, AGI Doesn't Need to Be Perfect
So, why is there collective denial?
The reason may be attributed to a "toxic cocktail" of vague definitions, primal fears, and huge commercial interests.
Four researchers spanning philosophy, machine learning, linguistics, and cognitive science attribute this disconnect to several sources:
• Some are conceptual issues (vague definitions)
• Some stem from emotions (fear of being replaced)
• Some are due to commercial factors (commercial interests distort the assessment)
Their controversial conclusion is that by any reasonable standard, AGI already exists.
They say that the concept of AGI is entangled in vague definitions: Does it refer to a flawless super-brain or just someone with a wide range of abilities like an ordinary person?
Spoiler: The answer is the latter.
No one is omniscient. Einstein couldn't chat in Chinese, and Marie Curie couldn't solve number theory problems.
General intelligence means having breadth in multiple domains such as mathematics, language, science, and creativity, and having sufficient depth to complete tasks, rather than pursuing perfection.
The research team dissected the myths that hinder our understanding one by one:
AGI doesn't need to be perfect, and neither do humans;
It doesn't need to be omnipotent or cover all imaginable skills;
It doesn't need to be human-like: intelligence doesn't require a human biological substrate. Alien intelligence would still be intelligence, and so is silicon-based intelligence.
AGI is not a superintelligence that dominates all fields.
No one can meet this standard. You can't, Einstein couldn't, Leonardo da Vinci couldn't, and no one ever will.
However, we've always required AI to meet this standard before we're willing to call it "general intelligence."
Turing's Vision Has Already Been Realized
The paper proposes three levels of intelligence:
Turing Test level: Basic education, basic conversation, simple reasoning
Expert level: Performance in international competitions, doctoral-level problems, proficiency across domains
Transhuman level: Revolutionary discoveries, continuous surpassing of all experts
Current LLMs are firmly at level 2.
The evidence is piling up like an avalanche.
There's also a striking benchmark: the breadth of capabilities demonstrated by current LLMs already exceeds that of HAL 9000 in 2001: A Space Odyssey.
HAL 9000 is a shipboard computer with a human personality. Besides maintaining every system on the Discovery spacecraft, HAL can speak, recognize speech and faces, read lips, interpret and express emotions, and play chess.
HAL was once a typical representative of the terrifying super-AI in science fiction.
The real AI in 2025 has a wider range of capabilities than the AI imagined in 1968 for the year 2001.
We're even quietly moving towards feats at the "transhuman level," such as making revolutionary discoveries that no single person could achieve.
Think about it.
The doubts about AGI are like goalposts that keep moving back:
"They're just lookup tables" → Solved novel problems
"They're just pattern matchers" → Proved new theorems
"They can't do math" → Won IMO gold medals
"They don't understand" → Assisted in cutting-edge research
Notice their "tricks"? The reasons for opposition keep changing but never disappear.
This echoes the objection raised in 1843 by Ada Lovelace, the British mathematician often called the world's first programmer: machines "can never do anything new" and can only follow instructions.
In 1950, Turing already responded to this objection.
Nearly two centuries later, we are still making the same arguments, just in different words.
Are Humans Just Smarter Parrots? Refuting the Top Ten Objections to AGI
The paper systematically responds to ten common objections: that LLMs are mere stochastic parrots, lack a world model, are limited to text, have no body, lack agency, have no self-awareness, learn inefficiently, hallucinate, deliver no economic value, and embody an alien form of intelligence.
Critics shout, "Large models are just stochastic parrots regurgitating their training data!"
But when AI can solve brand-new math problems, infer statistical laws from fresh data, or design real-world experiments, that excuse falls apart.
Do they lack a cognitive model of the world? Ask the AI that can predict that a dropped cup will shatter.
Are they limited to text? Multimodal training and laboratory collaborations prove otherwise.
"AI has no body, so it can't have intelligence."
Physicist Stephen Hawking interacted with the world almost entirely through text and synthetic speech. Would you deny his intelligence because of that?
This objection is beside the point. Intelligence is a matter of cognition, not locomotion.