
Altman unexpectedly changes his tune: Is AGI a useless term? MIT report predicts a 50% probability it arrives by 2028.

新智元2025-08-14 15:07
AGI predictions have been pulled forward to within five years, AI's shortcomings remain obvious, and demand for computing power is surging.

[Introduction] We are getting closer to AGI, or at least it seems that way. The timeline has been compressed from 50 years to 5, and some industry leaders even predict arrival in 2026 or 2028. Yet at the same time, AI scores 0% on the ARC test and still acts like a novice at basic human abilities. Are we too hasty in thinking it is ready?

Computing power keeps expanding, models keep stacking up, and prompts are being fed in like fuel.

AI's progress has not slowed down; instead, it's accelerating.

Some once predicted that AGI was still a long way off, at least half a century away.

But now, some key milestones have been brought forward.

This path, once considered too long, is advancing much faster than anyone expected.

Altman's latest view: The term AGI doesn't make much sense anymore

From a Decade to Five Years: AGI Predictions Greatly Advanced

As a recent report from MIT Technology Review Insights, "The road to artificial general intelligence," points out, our predictions about AGI are undergoing a visible acceleration.

From "it will take 50 years to achieve" when GPT-3 was released to the current view that "a prototype can be seen within 5 years," the timeline has been pulled forward by decades.

[Figure: Predictions about the future of AGI]

Dario Amodei, co-founder of Anthropic, has proposed a more practical, up-to-date term: "powerful AI."

This would be a model with Nobel-level intelligence, able to move flexibly across text, voice, and physical environments, and to set its own goals, reason, and execute tasks independently.

His prediction is that it might appear as early as 2026.

Altman believes that systems with AGI characteristics "are already showing signs", and their potential could bring about social changes comparable to those of electricity and the internet.

Looking at the broader survey data, the predicted timeline is also shifting forward significantly.

Multiple predictions show that by 2028, the probability of AI achieving multiple AGI milestones is at least 50%.

By 2027, the probability of machines outperforming humans in all tasks without assistance is about 10%, and it may rise to 50% by 2047.

This path, once thought to take "half a century", is now being rewritten.

Superpowers and Shortcomings Coexist: Eight Defects of AI

Today's AI is like a gifted student with excellent grades.

It can memorize, take exams, and even excel in highly difficult professional tasks.

But once out of the test environment, it seems lost.

In image recognition, it may mistake a banana for a piece of toast; when giving navigation, it might direct you straight into a wall; and when asked to pick up a glass of water or cut a wire, it will most likely make a complete mess of it.

These are not jokes but reality.

What AGI truly needs is not just the ability of logic and language generation, but also "default human skills".

McKinsey has summarized eight core capabilities in which AI still falls short of human intelligence, covering almost every dimension of our interaction with intelligent agents:

1. Visual perception: Slow to react to color and image changes, prone to confusion, and lacking true visual consistency;

2. Audio perception: Difficult to handle the spatial location and detailed features of sounds, unable to recognize intonation and emotions;

3. Fine motor skills: Unable to perform complex fine-motor tasks, such as threading a needle or performing surgery;

4. Natural language processing: Understands syntax but not meaning, and often goes off track when faced with context and implication;

5. Problem-solving: Can only handle well-defined problems and is almost helpless when faced with new tasks;

6. Navigation ability: Difficult to independently plan routes in a dynamic real - world environment and unable to adapt to environmental changes;

7. Creativity: Unable to pose truly new questions or optimize and rewrite its own logical structure;

8. Social and emotional understanding: Unable to read facial expressions or detect tone changes, and lacks true empathy.

Powerful but unbalanced; intelligent but slow.

This is today's AI: it stands right in front of us, yet an invisible barrier remains.

Running Fast and Collaborating Well: The Computing Power War Behind AI

AGI cannot be achieved simply by stacking bigger chips; it requires an entire evolving computing system, from hardware down to the underlying software.

From the energy structure of data centers to resource scheduling on mobile devices, every layer needs to work in concert and reinforce the others.

This war has quietly begun.

Since the start of the deep-learning era, the doubling time of AI's compute demand has shrunk sharply, from roughly every 21 months to every 5.7 months. Model sizes have grown a hundredfold, and training costs have risen exponentially.

[Figure: Growth curve of AI computing demand]
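As a rough back-of-the-envelope illustration (a sketch, not a figure from the report), the short Python snippet below converts those two doubling times into the annual growth factor each one implies; the only inputs are the 21-month and 5.7-month figures cited above.

def annual_growth_factor(doubling_time_months: float) -> float:
    # Compounding: demand multiplies by 2 once per doubling period,
    # so over 12 months it grows by 2 ** (12 / doubling_time_months).
    return 2 ** (12.0 / doubling_time_months)

# Doubling times cited above.
for era, months in [("pre-deep-learning era", 21.0), ("deep-learning era", 5.7)]:
    print(f"{era}: compute demand grows ~{annual_growth_factor(months):.1f}x per year")

Run as written, this prints roughly 1.5x per year for the older trend and roughly 4.3x per year for the deep-learning era, which is what makes the growth curve above look so steep.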

Some forecasts even suggest that the computing power consumed by certain future AGI training runs could exceed an entire country's GDP.

This is not only a tug-of-war over hardware but also a rewriting of the architecture.

To meet the demands of large-scale inference and real-time response, AI systems are shifting wholesale to heterogeneous computing: CPUs, GPUs, NPUs, and TPUs each play their part, routing the most suitable compute to the most appropriate task.

What makes this multi-chip collaboration possible is the software tools and frameworks hidden in the lower layers of the system.

[Figure: Computing stack structure of general AI]

They manage, coordinate, and schedule tasks, letting developers target different hardware and deploy across platforms without rewriting code, squeezing out performance while reducing energy consumption.
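To make this concrete, here is a minimal sketch of what such cross-platform dispatch looks like from a developer's point of view, using PyTorch as one example of these frameworks (the helper name pick_device is illustrative, not an API from the report): the same model code runs unchanged whether the backend is an NVIDIA GPU, an Apple-silicon accelerator, or a plain CPU.

import torch

def pick_device() -> torch.device:
    # Prefer the most capable accelerator present, falling back to CPU.
    if torch.cuda.is_available():            # NVIDIA GPU backend
        return torch.device("cuda")
    if torch.backends.mps.is_available():    # Apple-silicon GPU/NPU backend
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(512, 512).to(device)   # identical model code on any backend
batch = torch.randn(8, 512, device=device)
with torch.no_grad():
    output = model(batch)                      # the framework dispatches the right kernels
print(device, tuple(output.shape))

The point is the one made above: the developer expresses the task once, and the runtime decides which silicon executes it.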

However, it is still unrealistic to directly achieve AGI with the current computing power stack.

The MIT report points out that the real problem is not just that computing is slow, but that the structure itself is wrong.

Just as Transformer triggered the explosion of generative AI, AGI may also need an architectural revolution.

It's not about creating an even larger LLM but inventing a cognitive framework that allows the model to think, adapt, transfer skills, and optimize itself in a new environment, just like humans.

And this may be the biggest paradox at present:

We need a stronger computing system to support the formation of AGI, but we also need to completely reconstruct the foundation of intelligence to break through the ceiling of simply stacking computing power.

The Real Test of Intelligence: AI's Miserable Defeat in the ARC Test

François Chollet, the initiator of the ARC intelligence test, proposed a more demanding standard:

"True intelligence is the ability to recombine your existing knowledge to solve brand - new problems."

To verify this, he designed the ARC-AGI test.

Different from traditional tests, each question in this test represents a completely new task that has never appeared before.

It examines genuinely human-style reasoning: abstraction, transfer, and analogy.
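To give a feel for what such a task looks like, here is a toy ARC-style puzzle sketched in Python (the grids and candidate rules are invented for illustration; real ARC tasks are larger and far more varied): the solver must find a transformation consistent with every training pair and then apply it to an unseen test grid.

# A toy ARC-style task: a few input/output grid pairs plus one test input.
train_pairs = [
    ([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
    ([[0, 2], [3, 0]], [[2, 0], [0, 3]]),
]
test_input = [[0, 0], [5, 0]]

def mirror_rows(grid):
    # Reflect each row left-to-right.
    return [list(reversed(row)) for row in grid]

def transpose(grid):
    # Swap rows and columns.
    return [list(col) for col in zip(*grid)]

candidates = {"mirror_rows": mirror_rows, "transpose": transpose}

# Keep only the rules consistent with every training pair, then apply them to
# the test grid; this is the on-the-fly recombination that, Chollet argues,
# current models cannot do flexibly.
consistent = {name: fn for name, fn in candidates.items()
              if all(fn(inp) == out for inp, out in train_pairs)}
for name, fn in consistent.items():
    print(name, "->", fn(test_input))   # prints: mirror_rows -> [[0, 0], [0, 5]]

Each real ARC question swaps in a brand-new hidden rule, so memorized patterns do not help; only the ability to form and test abstractions does.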

The result was unexpected: pure large language models scored 0%.

Even systems optimized with extra reasoning scored only in the single digits.

Humans, by contrast, can answer almost all of the questions correctly.

Chollet said bluntly:

"This shows that the currently strongest AI models do not have the ability to flexibly recombine knowledge. They just have good memory but can't really think."

What this test exposes is not a lack of parameters or training, but a fundamentally wrong direction.

It is not that the model lacks power; it lacks the very structure of thinking.

Not Stronger, but Broader: The Final Puzzle of AGI

There has never been only one path to AGI.

However, more and more evidence is pointing to the same conclusion:

AGI may not be a breakthrough in a single technological point but the collaborative rise of a whole set of heterogeneous systems.

It needs a more flexible hardware structure - using the right chips for the right tasks, with CPU, GPU, TPU, and NPU each performing their own duties.

It needs a smarter scheduling framework to enable these heterogeneous chips to cooperate dynamically without wasting any computing power.

It needs a new architecture, as the Transformer did for GPT, to trigger a leap in the mode of cognition.

It may even need to reconstruct "intelligence" itself.

Perhaps the path to AGI is not the emergence of a "stronger model" but a collective shift in technology.

As the MIT report says:

"In our pursuit of smarter machines, we may also truly understand what 'intelligence' means for the first time."

Reference:

https://wp.technologyreview.com/wp-content/uploads/2025/08/MITTR_ArmEBrief_V12_final.pdf

This article is from the WeChat official account "New Intelligence Yuan". Edited by Qingqing. Republished by 36Kr with permission.