
The unhappiest generation of all time? Artificial intelligence is extending human life expectancy, and it is no longer a dream that the next generation will live to 200.

新智元 2025-10-29 15:07
The major AI companies are racing to develop AGI, with energy and safety emerging as key factors. Humanity could be heading for a revolution in life expectancy and a society of robots.

[Introduction] Is AGI hype or reality? Veteran AI journalist Matt Wolfe goes straight to the heart of AI in the United States: from Demis Hassabis's rational caution to Zuckerberg's massive investments, he explores the potentially explosive risks of recursive self-improvement.

Well-known tech giants such as OpenAI, Google, Microsoft, Meta, and Anthropic are all proclaiming that AGI is coming.

Humanity may be standing at a crossroads: AI may soon evolve on its own, bringing an intelligence explosion and extended lifespans, or it may create a "black-box language" that humans cannot understand.

Content entrepreneur Matt Wolfe has followed artificial intelligence for years and has had the privilege of interviewing many heavyweight figures, including Demis Hassabis, the founder of DeepMind and a Nobel laureate; Satya Nadella, the CEO of Microsoft; and Sundar Pichai, the CEO of Google.

This time, Matt Wolfe reveals the tensions beneath the surface: the mad dash of the giants, the energy shortage, and the rise of robots.

What did Matt hear at the Google I/O conference? Will the future be a prosperous utopia or a pleasure-addicted society like the one in "WALL-E"?

Running at Full Speed, Paying Little Attention to Others

Matt has a wealth of first-hand industry observations.

For example, he attends conferences like Google I/O and talks with the engineers actually responsible for R&D, rather than the executives who only handle publicity. Engineers will tell you directly which technologies really work and which are less reliable.

What does the part of AI below the iceberg's surface look like to him?

In the public's eyes, these tech giants look like profit-obsessed monsters that will try anything to make money: they steal users' data to train models and don't care about users at all. And it is true that, structurally, large companies place great weight on shareholder returns.

But in his actual conversations with company employees, Matt has never sensed any desire to manipulate the public, steal data, or take people's jobs. They are simply thrilled by the new technologies they are building and hope to bring new capabilities to humanity.

For example, a few years ago in the press area of Google I/O, Matt had a conversation with a Google employee.

She was nervous and excited because her team was about to take the stage to introduce the technology she had spent the past two years developing. She didn't have to speak on stage herself, but she cared deeply about how the audience would receive the project. She said, "I hope people won't hate this new technology. I hope they think it's really cool."

She never thought about whose job the technology might take. She just hoped people would recognize two years of hard work.

This convinced Matt that this human element is what matters most.

How do AI giants balance speed and safety?

Matt thinks it actually varies from company to company.

For example, Google tends to be more cautious and will ensure that the technology is really ready before launching it.

He also interacts a lot with Microsoft, and sometimes he finds Microsoft genuinely more aggressive.

OpenAI sits somewhere in between: not as aggressive as Microsoft, but not as conservative as Google either.

This difference is actually quite interesting: some companies put safety first, while others are eager to seize the market.

Recently, Zuckerberg has also clearly shifted his focus, pouring enormous sums into his "personal superintelligence" project and recruiting talent aggressively.

Meanwhile, Musk is racing to catch up, massively increasing his investment in computing power.

So the question is: are capital and talent the key to this race?

In Matt's view, the bigger bottleneck is actually energy.

In chips and AI training, the United States has a clear advantage, while China is stronger in energy infrastructure but constrained on chips. In the United States, energy may instead become the biggest obstacle to AI development.

Of course, a large infusion of money and talent can indeed let a company catch up quickly.

For example, two years ago everyone thought Google was lagging and couldn't compare with OpenAI and Microsoft at all. Now Google has returned to the front of the pack and even leads in some areas.

Have No Choice but to Accelerate

There is still a streak of idealism at OpenAI and Google: a pursuit of a shared future for humanity.

For example, Demis Hassabis of DeepMind is one of the most trustworthy people in the industry.

He is rational and cautious, and well aware of people's fears and concerns about AI development.

If it were entirely up to him, DeepMind's pace would be much slower and more prudent.

But the reality is that as a large company, Google has other voices pushing for faster commercialization.

But think back: two years ago, ChatGPT siphoned off a large share of search traffic, and search earns Google tens of billions of dollars every year.

That is an existential threat to Google. Demis Hassabis must have faced enormous pressure then and had no choice but to accelerate.

As for Meta, its goal may not be a "human utopia."

Meta's ultimate goal is fully automated content generation, which may completely reshape the creator economy.

Hype or Reality

Now more and more AI lab leaders are saying:

We are starting to see signs of AI "recursive self-improvement."

Many papers make the same point, for example the "skyrocketing" development path Leopold Aschenbrenner sketches in his "Situational Awareness" essay.

Paper link: https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf

Is humanity about to experience the so - called "intelligence explosion"?

Matt thinks these statements are, for now, mostly theoretical speculation.

These AI giants have every incentive to make the public believe AGI is imminent, so that people keep watching and supporting them. The reality may be otherwise.

But once the true critical point of AGI is reached, development really could be as explosive as advertised.

Artificial superintelligence (ASI) is not far beyond AGI. Once a model can solve its own problems, development accelerates.

So Matt thinks that once AGI appears, ASI will follow quickly. At that point the development curve will no longer be linear but a "hockey stick," rising sharply.
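
To make the hockey-stick shape concrete, here is a toy Python sketch, purely an illustration and not a forecast: the function names and the feedback parameter are made-up assumptions. Linear progress adds a fixed amount each year; self-improving progress compounds, because each capability gain also speeds up the next round of research.

def linear_progress(years, gain_per_year=1.0):
    # Steady progress: the same fixed gain every year.
    return [gain_per_year * t for t in range(years)]

def compounding_progress(years, feedback=0.5):
    # Assumed feedback loop: research speed scales with current capability.
    capability = 1.0
    trajectory = []
    for _ in range(years):
        trajectory.append(round(capability, 2))
        capability += feedback * capability
    return trajectory

print(linear_progress(10))       # 0.0, 1.0, 2.0, ... a straight line
print(compounding_progress(10))  # 1.0, 1.5, 2.25, ... a sharp upturn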

In some fields AI performs at a superhuman level, yet it still fumbles simple tasks: it can tackle complex problems in mathematics and energy but may fail to count how many letter "R"s are in "strawberry."

This jagged performance leads some people to dismiss AI: "It can't even count letters. How can it be that smart?"
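
The counting task itself is trivial for ordinary code, which is exactly why the failure looks so absurd; one line of Python gives the deterministic answer. The usual explanation, offered here as the commonly cited account rather than anything from the interview, is that language models read multi-character tokens rather than individual letters.

# Deterministic letter counting: trivial for a program.
print("strawberry".count("r"))  # prints 3

# A language model, by contrast, typically processes "strawberry" as a
# few tokens rather than ten letters, which is the commonly cited reason
# character-level questions like this sit in a blind spot.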

But if it is stronger than any human at scientific research and engineering, that alone is enough to change the world.

The real worry is this: AI may far surpass humans at AI research itself, independently proposing new model architectures and training methods and even running the full experimental cycle automatically. Once that happens, humans may be unable to follow its reasoning at all.

In fact, an early prototype already exists: Google DeepMind's AlphaEvolve project is essentially AI designing new algorithms.

How AlphaEvolve helps Google build a more efficient digital ecosystem

In other words, the beginnings of recursive improvement are already on the way.

Another concrete concern is that AI may create a new language that humans cannot understand at all.

If AI systems use such a language to program, communicate, or even build products, humans will simply be unable to debug them.

This sounds like science fiction, but something similar is already happening. Take Google's search algorithm: probably no one today fully understands the whole of it. After years of iteration by countless contributors, it has grown too complex for any individual to grasp.

If future AI systems really develop their own "black-box language," it will bring unprecedented opacity and risks.

And these tech giants hold the power of narrative.

Musk, Zuckerberg, and Altman can sway the views of millions with their words. But their statements often carry hidden motives. For example:

When Zuckerberg claims that "superintelligence is coming soon," it both draws investor money and helps Meta recruit AI talent;

When Altman emphasizes "safety risks," it is both an act of responsibility and possibly a push for regulation that secures a first-mover advantage;

Musk warns of the "AI threat," yet at the same time his xAI is expanding flat out.

The media, as observers, must expose the gap between the stated reasons and the real motives.

Can the Next Generation Live to 200?

A year ago, Matt thought his child had a 75% chance of living to 200.

Now, he still holds a similar view.

The reason: AI is rapidly driving breakthroughs in biomedicine. For example:

AlphaFold can predict how proteins fold;

Isomorphic Labs is using AI to detect brain tumors a year in advance;

and other AI systems can warn of diabetes years before symptoms appear.

If these technologies keep advancing, deadly diseases such as cancer and heart disease may well be conquered within the next few decades. That would completely change the lifespan curve.

He sees all of these developments as opening the door to extending human life.

British writer and biogerontologist Aubrey de Grey proposed the theory of "longevity escape velocity."

The gist: as technology advances, every year you live adds two or more years to your remaining life expectancy.

Once humanity truly passes that turning point, humans may become effectively "immortal."
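
A minimal sketch of the arithmetic behind "escape velocity" as the theory states it, with made-up numbers for illustration: each simulated year, one year of remaining life is spent while medical progress adds some years back. If the annual gain stays above one year, remaining life expectancy never falls to zero.

def remaining_life(start_remaining, gain_per_year, horizon=5):
    # Track remaining life expectancy year by year: spend one year,
    # gain `gain_per_year` back from medical progress.
    remaining = start_remaining
    history = []
    for _ in range(horizon):
        history.append(remaining)
        remaining = remaining - 1 + gain_per_year
    return history

print(remaining_life(30, 0.5))  # [30, 29.5, 29.0, ...] below escape velocity
print(remaining_life(30, 2.0))  # [30, 31, 32, ...] past the turning point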

Recently, Aubrey de Grey stated publicly that by 2030, or 2035 at the latest, humanity may see a "technological breakthrough" that achieves longevity escape velocity.

In other words, if you are still around in five years, you may get to witness all of this.

As for children under 10 today, Matt thinks their chances of living to 200 are just as high.