
Locked Imagination: When AI Becomes the Essence of Scientific Research, Humans Can Only Be Spectators

新智元 · 2025-12-29 17:02
We feel that AI is causing trouble now because it is still trying to imitate humans. However, once it crosses that "adaptability threshold," it will self-replicate at a speed that humans can't comprehend at all.

Our imagination of the future may be locked up.

What people are actually fretting over right now is whether AI-written copy sounds mechanical, and whether AI can take enough work off their plates to let them leave the office early.

Steve Newman pours cold water on all of this: stop staring at those extra fingers! That is just the "noise" of a technology's early evolution.

He cites Amara's Law: the short-term impact of a technology is often overestimated, while its long-term impact is consistently underestimated.

We now think AI is causing trouble because it is still trying to imitate humans; but once it crosses that "adaptability threshold," it will replicate itself at a speed entirely beyond human comprehension.

Once this red line is crossed, we will enter an "Unrecognizable Age."

The Ceres Hypothesis

When Growth No Longer Needs to "Persuade" Humans

In his essay, Steve Newman proposes a "Ceres experiment" that sounds almost like science fiction:

In 2055, humanity deploys a fully automated industrial system in the asteroid belt. Everything from mining and energy to chip manufacturing is driven by AI.

The only task of this system is to self-replicate exponentially.

Why must it be located "off Earth"?

On Earth, advancing any technology is essentially a long, exhausting social negotiation.

It must pass environmental reviews, accommodate employment protections, and balance political maneuvering against vested interests.

Wrapped in these layers of "structural friction," technological progress takes one step forward and three steps back.

But in the asteroid belt, that friction disappears. On Ceres, AI faces only material strength, energy supply, and the laws of physics.

For the first time, growth does not need to "persuade" anyone.

What is even more chilling is the logic of the growth itself.

In Newman's hypothesis, the system replicates exponentially, like a biological virus: from 10,000 units to 20,000, from 20,000 to 40,000... and, after 20 years, to 10 trillion.
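As a quick back-of-the-envelope check (the arithmetic below is ours, not Newman's; the article gives only the two endpoints), those figures imply a doubling time of roughly eight months:

```python
# Sanity check on the Ceres growth numbers, assuming clean exponential
# doubling. The start and target counts come from the article; the
# doubling-time calculation is our own illustration.
import math

start = 10_000        # initial unit count
target = 10**13       # "10 trillion" units after 20 years
years = 20

doublings = math.log2(target / start)          # ~29.9 doublings
months_per_doubling = years * 12 / doublings   # ~8.0 months

print(f"doublings needed: {doublings:.1f}")
print(f"implied doubling time: {months_per_doubling:.1f} months")
```

About 30 doublings in 20 years, or one doubling every eight months: slow by the standards of bacteria, but far beyond any growth rate a human-run industrial base has ever sustained.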

In the biological world, that kind of growth usually means catastrophe; in a mechanical system, it means maximum efficiency.

This is exactly the "civilizational breakpoint" Newman wants to emphasize: there is no unemployment, because no human was ever employed there; there are no protests, because the industry is not on Earth; there is no politics, because algorithms do not need votes.

Of course, this is only an extreme metaphor; real developments will surely be messier and clumsier.

Still, the metaphor reveals a cruel truth: once AI crosses a certain threshold of autonomy, human society will no longer be a necessary part of the path.

When technological evolution no longer needs to keep pace with humans, the world we are familiar with begins to turn unrecognizable.

The End of Labor

"Humans" Are No Longer the Anchor Point of Growth

In all human economic systems, there is a premise: capital can expand, but labor cannot.

You can build more factories and print more money, but "humans" are always a slow variable.

Training a skilled worker, engineer, or researcher takes years of time, expensive education, and the slow, unrepeatable accumulation of experience.

It is this scarcity that gives rise to wages, employment, welfare systems, and the modern civilization built on top of them.

But Steve Newman points out:

Once labor becomes scalable ("Scalable Labor"), the entire structure shatters.

In his hypothesis, AI will replace and surpass humans to become an "infinitely replicable labor unit".

Training a mature human expert takes 20 years; cloning a top-tier digital brain takes seconds.

Historically, technological progress has always eliminated old jobs while creating new bottlenecks at the same time.

The steam engine displaced physical labor but magnified the demand for management, engineering, and organization; the computer automated calculation but made "cognitive labor" more important.

But what worries Newman is this:

When AI can not only do the work but also learn the "next thing" faster than humans can, that new bottleneck may never appear at all.

Signs of this were already visible in 2025: although multiple studies suggest that current AI coding tools can reduce efficiency on complex tasks, companies have not slowed their investment.

The industry's logic has changed dramatically: the entire development process now "defaults to AI participation."

A randomized controlled trial run by METR in early 2025 found that on complex, real-world tasks, experienced open-source developers who were allowed to use AI actually took longer on average to finish their work. Even so, both outside experts and the developers themselves, surveyed before and after the experiment, consistently overestimated the efficiency gains AI would bring.

The end result may not be a layoff notice, but a world in which the position simply never exists at all.

Newman does not reach for the familiar phrase "unemployment crisis"; he makes a calmer point:

When labor becomes a resource that, like electricity, can be scaled at will and dispatched anywhere, traditional economics' intuitions about "wages, costs, and value" lose their anchor completely.

If most production and R&D no longer require humans, then questions like "who will consume, and who will keep the cycle going" become a genuine cognitive crisis.

The language we rely on to understand how society works is failing. And this is only the first breakpoint.

Stagnation of Scientific Research

When the Output of "Truth" Exceeds Human Bandwidth

Steve Newman then dismantles the research community's last stronghold: the cognitive rhythm of human beings.

He puts forward a chilling hypothesis: what happens if AI becomes the scientific research system itself?

In the Ceres metaphor, those trillions of "digital brains" have a single job: pushing out the boundaries of human knowledge.

Once this system takes shape, humanity's overall R&D capacity will be amplified a million-fold.

This doesn't mean AI is a million times smarter than Einstein; it means that, for the first time, scientific research sheds the two distinctly human limitations of "attention" and "lifespan."

In human history, the truly scarce resource has never been inspiration, but the time required to verify it.

Verifying a theory can take decades; proving that a direction is a dead end can consume an entire career.

But in an AI-dominated system, failure costs almost nothing. Models can explore countless paths in parallel, then roll back, recombine, and try again at any moment.

This trend has already taken root in reality: in structural biology, AlphaFold has reshaped the entire "hypothesize, then verify" rhythm of protein structure prediction.

Similar changes also appear in new material discovery, catalyst screening and drug molecule design.

AI is changing from an "assistant" into a "decision-maker," deciding which directions are worth investing in and which should be abandoned outright.

But what Newman is really worried about is not the leap in efficiency, but the "lag in understanding":

When the output of scientific research reaches a certain speed, the world's bottleneck shifts from "how to discover" to "how to understand."

Even if a breakthrough is correct and beneficial, whether human society has enough time to grasp its meaning, evaluate its risks, and decide whether to deploy it becomes a problem in its own right.

This is also the reason why he proposed the term "Unrecognizable Age".

When production, scientific research, and deployment form a self-reinforcing closed loop, humans will find themselves being left behind by AI at ever greater speed.

Historical Coordinates

The Critical Point Is Getting Closer

Steve Newman tries to place AI within a much broader historical frame of reference.

He judges that the stage we are currently in is extremely unusual in itself.

Before agriculture, productivity could barely accumulate at all; even after agrarian societies emerged, growth remained extremely slow, and any minor technological advance was quickly absorbed by a growing population.

This "smooth" state lasted for thousands of years.

Data from the Maddison Project, compiled by Our World in Data, show that from 1 AD to 1800 the global GDP curve hugged the ground. The real take-off came only in the two hundred years since the Industrial Revolution.

This also explains why the investment scale of the AI industry seems so "abnormal".

Newman cites a striking comparison: adjusted for inflation, the world's current annual capital expenditure on AI data centers and infrastructure is rapidly approaching, and within a few years may even exceed, the peak military spending of the United States during World War II.

Historically, every world-changing technological leap has required three hard conditions at once: a technical path that is at least initially feasible, highly concentrated capital, and a society able to bear long-term trial and error.

In Newman's view, AI is meeting these conditions one by one. Even if the current path runs into bottlenecks, the industry is already exploring alternatives in parallel, from model architectures to energy supply, across the board.

He uses "Richter magnitude 10.0" to describe this shock - once it occurs, it often doesn't make any sense to discuss "should it or not" afterwards.

Looking back at history from today, it is easy to fall for an illusion: everything happened logically.

Agriculture emerged, the Industrial Revolution took place, electricity and information technology spread. Every turning point can be explained away as "conditions were ripe" or "technological inevitability."

But for those living through it, it never felt that way.

Change is never announced out loud; it accumulates bit by bit in budgets, infrastructure, and rewritten processes.

What Steve Newman wants to remind us of is this sense of danger in the "present progressive tense".

Maybe AI will experience a bubble and many predictions will fall through.

But as long as the scale of investment, the technological inertia, and the institutional lock-in do not loosen, this system can hardly just return to square one.

References:

https://secondthoughts.ai/p/the-unrecognizable-age

This article is from the WeChat official account 新智元 (Xinzhiyuan), author: 新智元, editor: Qing Qing. Republished by 36Kr with permission.