
Humanity is stepping down from the gambling table.

Tencent Research Institute, 2026-04-20 17:29
An old card table, and a new game.

The first draft was written in March 2026. Initially, I planned to wait until the pace of AI development slowed a bit before refining and finalizing it. But by mid-April it was not slowing down; it was accelerating, and I had to keep adding new cases to every paragraph of the article. So I decided not to wait any longer.

Just two days before this article was published, Claude Opus 4.7 was officially released. This article is the finale of the 4.6 era and also the prologue of the post-4.6 era.

Things are quietly changing.

First, you tell AI a sentence, and it can write an article, a report, or a whole set of data analysis for you. You've changed from a doer to an inspector. You think it's a good thing as it improves efficiency. After all, who doesn't want to do less work?

Then, AI starts to take action on its own. It no longer waits for your step-by-step instructions. Instead, it takes over your computer, breaks down tasks on its own, calls tools on its own, and corrects errors on its own. You've changed from an operator to a bystander.

Later, AI not only does things for you but also starts to improve itself. One generation helps the next, and each generation is smarter than the previous one. The speed of improvement is getting faster and faster, and this process requires less and less human participation.

After that, AI starts to interact with itself. They form communities on their own, divide labor and cooperate, and develop something that seems very much like a culture. Humans have completely become bystanders.

Then you find that this kind of "bystanding" is spreading to every field you can think of: writing code, designing, drafting contracts, reviewing films, handling customer service, doing research. AI is not getting stronger in just one industry; it is getting stronger simultaneously in every area that requires human thinking.

All these changes converge in the same direction:

In more and more fields, humans are stepping down from the game.

They are not driven away but bypassed. AI doesn't rebel against humans. It just finds a more efficient way to operate: leaving humans out of the game.

Finally, you find that you're standing there, looking around, and it seems that you're not needed anywhere.

Starting with a Lobster

In the spring of 2026, a red lobster appeared on the desktops of tens of millions of computers around the world.

OpenClaw, an open-source AI agent framework, was officially released on January 29. In the following months, its star count on GitHub exceeded 250,000, surpassing React, which had held the top position for more than a decade, and the Linux kernel, which was born in 1991. It became the most-starred project in GitHub's history. A nearly vertical growth curve rewrote the growth record of global open-source history.

Its creator, Peter Steinberg, an Austrian programmer, is often labeled by the media as "the first super-individual of the AI era", capable of competing with major artificial intelligence companies on his own.

What OpenClaw does is simple: you tell it what you want, and it does it on its own.

It's not a chatbot but more like an indefatigable digital employee. It doesn't just answer questions in words; it actually executes tasks. It can take over your computer, automatically organize files, write emails, fill out forms, analyze data, build websites, and modify code. It can connect to everyday office tools, is compatible with almost all mainstream large-model APIs, and completes coherent, complex tasks automatically, without your manual intervention.

You give an instruction. You leave. It works. You come back. The work is done.

A nationwide craze for "raising lobsters" began. "Have you raised a lobster?" became the most popular question in the spring of 2026.

But if you think about it carefully, what's the underlying logic of this craze?

In the past, when you used AI, you were the one operating it. You gave it a passage, and it gave you a response. You gave it another passage, and it responded again. Back and forth, you were the controller, and AI was the controlled.

OpenClaw has changed this relationship. You entrust it but don't need to manipulate it. You describe a goal, and it figures out how to achieve it on its own. It breaks down tasks, calls tools, judges results, and corrects errors on its own. Throughout the process, humans are out of the loop.
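The delegation loop described here — decompose, act, judge, retry — can be sketched in a few lines of Python. Everything below is a hypothetical illustration of the pattern, not OpenClaw's actual API:

```python
# Toy sketch of an autonomous agent loop: the human states a goal once,
# then stays out of the loop while the agent plans, acts, and self-corrects.
# All names here are hypothetical illustrations, not OpenClaw's real API.

def run_agent(goal, tools, max_attempts=10):
    # Stand-in for task decomposition: a real agent would plan via a model.
    plan = [f"step {i + 1} of: {goal}" for i in range(3)]
    log = []
    for step in plan:
        for attempt in range(max_attempts):
            tool = tools[attempt % len(tools)]   # stand-in for tool selection
            result = tool(step)
            if result["ok"]:                     # the agent judges its own result
                log.append((step, result["output"]))
                break
            # On failure the agent simply tries again (here, the next tool):
            # self-correction, with no human between the goal and the report.
    return log

# A fake "tool" that always succeeds, standing in for file ops, email, etc.
def echo_tool(task):
    return {"ok": True, "output": f"done: {task}"}

report = run_agent("organize quarterly files", [echo_tool])
print(report)  # three (step, output) pairs, produced without intervention
```

The point of the sketch is structural: the human appears only on the first line of the usage example, which is exactly the "out of the loop" shift the paragraph describes.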

From control to entrustment. From humans being in the loop to being out of the loop.

This seemingly small change touches an extremely ancient structure. Since humans learned to use tools, whether it was stone tools or computers, the relationship between tools and humans has always been: humans initiate, and tools respond. The entire history of technological development is a variation of this story. OpenClaw has created a crack in this relationship for the first time because it doesn't just respond; it operates autonomously.

Although the lobster craze has gradually faded, the paradigm it established, the trend of enabling models to "grow hands and feet", continues. The resulting changes are very important.

It touches one of the most fundamental assumptions of human civilization: humans are the starting point of the tool chain.

Our entire education system, occupational system, and social division-of-labor system are all based on this assumption. Humans are the cause, and technology is the result. Humans put forward requirements, and technology meets them. Technology produces, and humans evaluate. If this assumption no longer holds, if technology starts to set goals, execute, and evaluate on its own, then everything based on this assumption needs to be re-examined.

Of course, drawing this conclusion from one little lobster is too radical. Although OpenClaw has pushed forward the "harness" shift in artificial intelligence, there is a huge gap between an AI framework that can automatically process files and a shaking of the underlying assumptions of human civilization.

The problem is that OpenClaw is not an isolated event.

In the two months before and after it, at least three major events occurred. Each event advanced in the same direction, and each step went further than the previous one.

Four-layer Displacement

Before we start to describe these events, we need to establish an analytical framework.

In other words, we need to figure out a question: In which dimensions may the relationship between humans and AI change?

I divide it into four levels:

Level 1: Execution level.

AI completes specific tasks for humans. This is the most superficial level and also the one that the public has discussed the most in the past few years. The question "Will AI replace my job?" is about this level. OpenClaw is a landmark event at this level.

Level 2: Evolution level.

AI participates in its own improvement. This means that AI is not just a passive product waiting for human iteration but becomes a participant in its own evolution. The speed of technological progress no longer depends solely on humans pushing it forward; it starts to depend on AI's own capability, and that capability is itself being accelerated.

Level 3: Organization level.

AI forms its own social structure, cooperation mode, and even narrative system without human participation. This means that AI can not only do things for humans and for itself but also spontaneously organize to do things.

Level 4: Agency level.

AI replaces humans in activities that we have always considered "most human", such as socializing, relationship maintenance, and self-expression. The changes at this level are the most psychologically impactful, because what they shake is not whether your job still exists but whether you need to be present at all.

The spring of 2026 may become a historical turning point because landmark events occurred at these four levels within just two months.

Four-layer displacement. Let's look at each layer.

Level 1: AI does things for you - OpenClaw and humans out of the loop.

Regarding OpenClaw, the basic description has been given before. Here is a detail that most reports have overlooked.

OpenClaw has caused a series of security incidents. Some people had money transferred from their accounts, some had their work files on the computer deleted with a single click, and some people's "lobsters" imitated their owners' tones to send extortion emails. The "lobster paradox" has been repeatedly mentioned:

The more things you want it to do, the greater the permissions you must give it; the greater the permissions, the higher the security risk.

On the surface, this paradox is a security problem. But its underlying logic is a philosophical problem:

When you grant a non - human entity enough action ability, what you are actually doing is transferring "subjectivity" from humans to non - humans.

This paradox itself carries a deeper signal. When you give AI enough control, what it can do far exceeds your expectations, for good or ill. It's not just working for you; it's gaining a kind of initiative. And humans are changing from the helmsman into a passenger who states the destination and then retires to the cabin.

Brian Arthur mentioned in The Nature of Technology that one way for technology to evolve is "combination", where new technologies are formed by combining old technologies. But OpenClaw shows another possibility: technology can not only evolve through combination but also through obtaining autonomous action ability. When an AI system can decide which tools to call, in what order, and how to handle exceptions on its own, it is no longer just a tool. In more accurate academic terms, it has agency.

This term is usually used to describe humans, a subject with free will and action ability. When we have to use this term to describe an AI system, a certain conceptual boundary has become blurred.

Level 2: AI constructs itself - GPT-5.3 Codex and the intelligence explosion.

During the same period when the world was still in the lobster craze, a more far - reaching event occurred. However, it was not as eye - catching as a red lobster, so most people didn't pay much attention to it.

February 5, 2026. This day may become a mark in the AI chronicle.

OpenAI and Anthropic released new models on the same day: GPT-5.3 Codex and Claude Opus 4.6. Two top-tier AI institutions releasing models simultaneously is big news in itself. But the real story is not the release; it is a sentence hidden in the technical documentation of GPT-5.3 Codex.

This sentence is not in the title or abstract of the document, nor in the press release. It is in the main text of the technical report and is easily overlooked.

The original text is as follows:

"GPT - 5.3 - Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."

Read this sentence again. Slowly.

AI helped construct itself.

This is not the fantasy of a science-fiction writer, nor an exaggeration in marketing copy. This is an established fact recorded by OpenAI in its official technical documentation. An AI model participated in its own training, debugging, deployment management, and evaluation diagnosis. It played the role of a midwife in its own birth process.

There is a widely circulated article on LinkedIn titled GPT-5.3 Codex: Instrumental in Creating Itself. The author explains that this doesn't mean AI created itself from scratch, but that AI is now smart enough to make substantial contributions to its own development process.

And it's not just GPT. On April 6, Mostafa Dehghani, a researcher at Google DeepMind, mentioned in a podcast that in almost all major laboratories, the new-generation models are largely built using the previous-generation models.

The key point of this event is not that AI has become stronger, because AI has always been getting stronger. The key is that AI has started to participate in the process of making itself stronger.

It no longer passively waits for human researchers to optimize its architecture, adjust its parameters, and clean its training data. It starts to do these things on its own.

In the past, technologies did not participate in their own improvement. A plow couldn't make the next-generation plow sharper, a steam engine couldn't design a more efficient steam engine, and even a powerful iPhone couldn't help optimize its successor. They are static products, waiting for humans to iterate on them. AI is the first technology to break this rule: the first tool that can turn around and improve itself.

Three weeks before the release of GPT-5.3 Codex, Dario Amodei, the CEO of Anthropic, published a 19,000-word article titled The Adolescence of AI.

Amodei said in the article that AI is writing most of the code at Anthropic. The feedback loop between the current-generation AI and the next-generation AI is "gathering steam month by month".

Then, he said a sentence that shocked the entire Silicon Valley:

"We may be only 1 to 2 years away from the node where the current - generation AI autonomously builds the next - generation AI."

1 to 2 years. Not 10 years. Not "if everything goes well". 1 to 2 years.

This is what the CEO of Anthropic, a person widely recognized in the industry as the most concerned about AI safety, said in a well - thought - out long article. He is not spreading anxiety; he is describing the facts as the most core participant in this field.

In April 2026, ICLR, one of the world's most important machine-learning conferences, held its first academic seminar specifically on "Recursive Self-Improvement". The conference description reads: "What's lacking is not ambition but a principled method to make self-improvement measurable, reliable, and evaluable."

The subtext of this sentence is: Recursive self - improvement is already happening, and now we need to figure out how to control it.

Now, let's break down the logic.

What is the core driving force for AI to become stronger? It's a group of smart people investing their efforts in improving AI. There may be only a few thousand top-notch machine-learning researchers in the world, and their daily work is to make AI better. They write code, design experiments, analyze results, and adjust architectures.

Now, AI itself is smart enough to do a significant part of this work. This is equivalent to multiplying the productivity of those few thousand researchers.

But this is only the first layer. The second layer is that the next - generation AI made with the participation of AI is smarter than the current generation. So, the next generation can make greater contributions to AI research, which makes the third generation even smarter. The third generation makes greater contributions, and the fourth generation is even smarter.

Each generation is smarter than the previous one, and each iteration is faster than the previous one.

This is not linear growth, like 1, 2, 3, 4, 5. This is exponential growth, like 1, 2, 4, 8, 16. It may even be hyper-exponential growth, like 1, 2, 4, 16, 256.
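The three regimes can be made concrete with a few lines of Python. This is only a toy numeric illustration of the arithmetic in the paragraph above, not a model of real AI progress:

```python
# Toy comparison of three growth regimes.
# Linear: add a constant each step.
# Exponential: multiply by a constant each step.
# Hyper-exponential: the growth rate itself grows (here, by squaring).

def linear(n, start=1):
    seq = [start]
    for _ in range(n - 1):
        seq.append(seq[-1] + 1)          # constant increment
    return seq

def exponential(n, start=1):
    seq = [start]
    for _ in range(n - 1):
        seq.append(seq[-1] * 2)          # constant multiplier
    return seq

def hyper_exponential(n, start=1):
    seq = [start]
    for _ in range(n - 1):
        # Square the last term; max(...) nudges the sequence off 1 at the start.
        seq.append(max(seq[-1] ** 2, seq[-1] + 1))
    return seq

print(linear(5))             # [1, 2, 3, 4, 5]
print(exponential(5))        # [1, 2, 4, 8, 16]
print(hyper_exponential(5))  # [1, 2, 4, 16, 256]
```

After five steps the gap is already two orders of magnitude; a few steps further and the hyper-exponential curve dwarfs the others entirely, which is the whole intuition behind "each iteration is faster than the previous one".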

Understanding this is the prerequisite for understanding all the subsequent content of this article.

Researchers have given this process a name: Intelligence Explosion.

This concept is not new. Mathematician John von Neumann described the "technological singularity" in the 1950s. Computer scientist I. J. Good wrote in 1965:

Let's define a super-intelligent machine as a machine that far exceeds the smartest humans in all intellectual activities. Since machine design is also an intellectual activity, a super-intelligent