Humanity is stepping down from the gambling table.
The first draft was written in March 2026. I initially planned to wait for the pace of AI development to slow a bit before refining and finalizing it. But by mid-April it had not slowed; it had accelerated, and I found myself adding new cases to every paragraph of the article. So I decided not to wait any longer.
Two days before this article was published, Claude Opus 4.7 was officially released. This article is the finale of the 4.6 era and the prologue of the post-4.6 era.
Things are quietly changing.
First, you give the AI a single sentence, and it writes an article, a report, or a full set of data analysis for you. You've changed from doer to inspector. You think this is a good thing, since efficiency has improved. After all, who doesn't want to do less work?
Then AI starts to act on its own. It no longer waits for your step-by-step instructions. Instead, it takes over your computer, breaks down tasks, calls tools, and corrects its own errors. You've changed from operator to bystander.
Later, AI not only does things for you but starts to improve itself. One generation helps build the next, each generation smarter than the last. The pace of improvement keeps accelerating, and the process requires less and less human participation.
After that, AI systems start to interact with one another. They form communities, divide labor, and develop something that looks very much like a culture. Humans have become pure bystanders.
Then you notice that this "bystanding" is spreading to every field you can think of: writing code, doing design, drafting contracts, analyzing films, handling customer service, conducting research. AI is not getting stronger in just one industry but in every area that requires human thinking.
All these changes converge in the same direction:
In more and more fields, humans are stepping off the table.
They are not driven away but bypassed. AI doesn't rebel against humans. It just finds a more efficient way to operate: not involving humans.
Finally, you find yourself standing there, looking around, and it seems that you're not needed anywhere.
Starting with a Lobster
In the spring of 2026, a red lobster appeared on the desktops of millions of computers around the world.
OpenClaw, an open-source AI agent framework, was officially released on January 29. In the following months, its GitHub star count exceeded 250,000, surpassing React, which had held the top spot for more than a decade, and the Linux kernel, born in 1991. It became the most-starred project in GitHub's history, its near-vertical growth curve rewriting the records of global open-source history.
Its creator, Peter Steinberg, an Austrian programmer, is most often labeled by the media as "the first super-individual of the AI era," capable of competing with major AI companies single-handedly.
What OpenClaw does is simple: you tell it what you want, and it does it on its own.
It's not a chatbot but something closer to an indefatigable digital employee. It doesn't just answer questions; it executes tasks. It can take over your computer, organize files, write emails, fill out forms, analyze data, build websites, and modify code. It connects to ordinary office tools, is compatible with almost all mainstream large-model APIs, and completes coherent, complex tasks without your manual intervention.
You give instructions. You leave. It works. You come back. The work is done.
A mass craze of "raising lobsters" took off. "Have you raised a lobster yet?" became the most popular question of the spring of 2026.
But think carefully. What's the underlying logic of this craze?
Previously, when you used AI, you were the one operating it. You gave it a passage, and it gave you a response. You gave it another passage, and it responded again. Back and forth, you were the controller, and AI was the controlled.
OpenClaw changed this relationship. You delegate to it; you don't operate it. You describe a goal, and it finds its own way to achieve it: it breaks down tasks, calls tools, judges results, and corrects errors by itself. Throughout the process, the human is out of the loop.
From control to entrustment. From humans in the loop to humans out of the loop.
This seemingly minor change touches an extremely ancient structure. Ever since humans learned to use tools, from stone axes to computers, the relationship between tools and humans has always been the same: humans initiate, tools respond. The entire history of technology is a variation on this story. OpenClaw put the first crack in that relationship, because it doesn't just respond; it operates autonomously.
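To make "operates autonomously" concrete, here is a minimal, runnable sketch of the delegation loop that OpenClaw-style agents run. Every name in it (decompose, execute, check) is a hypothetical stand-in for illustration, not OpenClaw's actual API:

```python
# A toy sketch of the "entrust, don't operate" loop. The tools are
# trivial stubs; a real agent would call an LLM and real software here.

def decompose(goal):
    # Stand-in for the agent breaking a goal into subtasks by itself.
    return [f"{goal}: step {i}" for i in (1, 2, 3)]

def execute(task):
    # Stand-in for calling a real tool (shell, browser, file system).
    return f"result of ({task})"

def check(task, result):
    # Stand-in for the agent judging its own output.
    return task in result

def run_agent(goal, max_retries=3):
    results = []
    for task in decompose(goal):          # the agent plans by itself
        for _ in range(max_retries):
            result = execute(task)        # it acts without human input
            if check(task, result):       # it evaluates its own work
                results.append(result)
                break
            task = f"{task} (revised)"    # and corrects course on failure
    return results                        # the human only sees the outcome

print(run_agent("organize my inbox"))
```

The structural point sits at the top and bottom of the loop: the human supplies the goal and reads the results; every decision in between is the agent's.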
Although the lobster craze has gradually faded, the paradigm it established, letting models "grow hands and feet," continues. The changes that follow from it matter a great deal.
It touches on the most fundamental assumption of human civilization: humans are the starting point of the tool chain.
Our entire education system, occupational system, and social division of labor are built on this assumption. Humans are the cause; technology is the effect. Humans state needs, and technology meets them. Technology produces results, and humans evaluate them. If this assumption no longer holds, if technology starts to set goals, execute, and evaluate on its own, then everything built on it needs to be re-examined.
Of course, drawing this conclusion from one little lobster is too hasty. OpenClaw advanced the "harness" shift in artificial intelligence, but there is a huge gap between an AI framework that can automatically process files and the shaking of a fundamental assumption of human civilization.
The problem is that OpenClaw is not an isolated event.
In the two months before and after it, at least three major events occurred. Each event advanced in the same direction, and each step went further than the previous one.
Four-Layer Displacement
Before we start to describe these events, we need to establish an analytical framework.
In other words, we need to answer one question: along which dimensions might the relationship between humans and AI change?
I divide it into four levels:
Level 1: Execution Level.
AI completes specific tasks for humans. This is the most superficial level and also the one that the public has discussed the most in the past few years. The question "Will AI replace my job?" is about this level. OpenClaw is a landmark event at this level.
Level 2: Evolution Level.
AI participates in its own improvement. This means AI is no longer a passive product waiting for human iteration; it becomes a participant in its own evolution. The speed of technological progress no longer depends only on human effort but increasingly on AI's own ability, and that ability is itself being accelerated.
Level 3: Organization Level.
AI forms its own social structure, cooperation mode, and even narrative system without human participation. This means that AI can not only do things for humans and itself but also spontaneously organize to do things.
Level 4: Agency Level.
AI replaces humans in the activities we have always considered "most human": socializing, maintaining relationships, self-expression. The changes at this level are the most psychologically jarring, because what they shake is not whether your job still exists but whether you need to be present at all.
The spring of 2026 may become a historical turning point because landmark events occurred at these four levels within just two months.
Four-layer displacement. Let's look at each layer.
Level 1: AI Does Things for You - OpenClaw and Humans Out of the Loop.
OpenClaw has already been described above. Here is a detail that most reports overlooked.
OpenClaw caused a series of security incidents. Some users had money transferred out of their accounts, others had the work files on their computers deleted in one click, and some found their "lobsters" imitating their owners' tone to send extortion emails. The "lobster paradox" came up again and again:
The more things you want it to do, the greater the permissions you must give it; the greater the permissions, the higher the security risk.
On the surface, this paradox is a security problem. But its underlying logic is a philosophical problem:
When you grant a non-human entity enough capacity to act, what you are actually doing is transferring "subjectivity" from humans to non-humans.
This paradox carries a deeper signal. When you give AI enough control, what it can do far exceeds your expectations, for good or for ill. It's not just working for you; it's acquiring initiative. And humans are changing from helmsmen into passengers who name the destination and then lie down in the cabin.
Brian Arthur argued in The Nature of Technology that one way technology evolves is "combination": new technologies are assembled from old ones. But OpenClaw shows another possibility: technology can evolve not only through combination but by acquiring the capacity for autonomous action. When an AI system can decide on its own which tools to use, in what order, and how to handle exceptions, it is no longer just a tool. In more precise academic terms, it has agency.
That term is usually reserved for humans: a subject with free will and the capacity to act. When we have to use it to describe an AI system, a conceptual boundary has already blurred.
Level 2: AI Constructs Itself - GPT-5.3 Codex and the Intelligence Explosion.
During the same period, while the world was still caught up in the lobster craze, something more far-reaching happened. It was not as eye-catching as a red lobster, so most people paid little attention to it.
February 5, 2026. This date may end up as a milestone in the AI chronicle.
OpenAI and Anthropic released new models on the same day: GPT-5.3 Codex and Claude Opus 4.6. A simultaneous release by two top-tier AI labs is big news in itself. But the real story lies not in the release but in a sentence buried in the GPT-5.3 Codex technical document.
This sentence is not in the document title or abstract, nor in the press release. It is in the main body of the technical report and is easily overlooked.
The original text is as follows:
"GPT - 5.3 - Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."
In plain language: GPT-5.3 Codex is the first model to play a key role in its own creation. The team used its early versions to debug its own training process, manage its own deployment, and diagnose its test results and evaluations.
Read this sentence again. Slowly.
AI helped construct itself.
This is not a science-fiction writer's fantasy, nor marketing hyperbole. It is an established fact recorded by OpenAI in its official technical document. An AI model participated in its own training, debugging, deployment management, evaluation, and diagnosis. It acted as midwife at its own birth.
A widely circulated LinkedIn article, GPT-5.3 Codex: Instrumental in Creating Itself, explains that this doesn't mean AI created itself from scratch; it means AI is now smart enough to make substantial contributions to its own development process.
And not only GPT. On April 6, Mostafa Dehghani, a researcher at Google DeepMind, said on a podcast that in almost all major laboratories, new-generation models are largely built using previous-generation models.
The key point of this event is not simply that AI has become stronger. After all, AI has been getting stronger all the time. The key is that AI has started to participate in the process of making itself stronger.
It no longer passively waits for human researchers to optimize its architecture, adjust its parameters, and clean its training data. It starts to do these things on its own.
Previous technologies did not participate in their own improvement. A plow couldn't sharpen the next generation of plows, a steam engine couldn't design a more efficient steam engine, and even the most powerful iPhone can't help optimize the next model. They are static products, waiting for humans to iterate them. AI is the first technology to break this rule, the first tool that can turn around and improve itself.
Three weeks before the release of GPT-5.3 Codex, Dario Amodei, the CEO of Anthropic, published a 19,000-word essay titled The Adolescence of AI.
Amodei said in the article that AI is writing most of the code for Anthropic. The feedback loop between the current generation of AI and the next generation is "gathering steam month by month."
Then, he said a sentence that shocked the entire Silicon Valley:
"We may be only 1 to 2 years away from the point where the current generation of AI autonomously builds the next generation of AI."
1 to 2 years. Not 10 years. Not "if everything goes well." 1 to 2 years.
This was said by the CEO of Anthropic, widely recognized as the industry figure most concerned with AI safety, in a carefully considered long essay. He is not stoking anxiety; he is describing the facts he sees as a core participant in the field.
In April 2026, ICLR, one of the world's most important machine-learning conferences, held its first workshop dedicated to "recursive self-improvement." The description reads: "What's lacking is not ambition but a principled method to make self-improvement measurable, reliable, and evaluable."
The subtext of this sentence is: Recursive self - improvement is already happening, and now we need to figure out how to control it.
Now, let's break down the logic.
What is the core driving force behind AI getting stronger? A group of smart people investing their effort in improving it. There are probably only a few thousand top-tier machine-learning researchers in the world, and their daily work is making AI better: writing code, designing experiments, analyzing results, adjusting architectures.
Now, AI itself is smart enough to do a significant part of this work. This is equivalent to multiplying the productivity of those few thousand researchers.
But this is only the first layer. The second layer: the next-generation AI, built with AI's participation, is smarter than the current generation. So the next generation can contribute more to AI research, which makes the third generation smarter still. The third generation contributes even more, and the fourth is smarter again.
Each generation is smarter than the previous one, and each iteration is faster than the last.
This is not linear growth, like 1, 2, 3, 4, 5. It is exponential growth, like 1, 2, 4, 8, 16. It may even be super-exponential growth, like 1, 2, 4, 16, 256.
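To see how different these regimes are, here is a toy calculation that reproduces the three sequences above. The update rules are pure illustration, assumptions chosen to match the numbers in the text, not measurements of any real system:

```python
# Toy growth regimes. "Capability" is an abstract number,
# not any real benchmark score.

def linear(c):
    return c + 1              # each generation adds a fixed amount

def exponential(c):
    return c * 2              # each generation multiplies capability

def super_exponential(c):
    return c * max(2, c)      # the multiplier itself grows with capability

for name, step in [("linear", linear),
                   ("exponential", exponential),
                   ("super-exponential", super_exponential)]:
    c, trace = 1, []
    for _ in range(5):
        trace.append(c)
        c = step(c)
    print(f"{name:>17}: {trace}")

# Output:
#            linear: [1, 2, 3, 4, 5]
#       exponential: [1, 2, 4, 8, 16]
# super-exponential: [1, 2, 4, 16, 256]
```

The point of the third rule is that the rate of improvement is itself a function of current capability, which is exactly what "AI participating in making AI stronger" implies.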
Understanding this is the prerequisite for understanding all the subsequent content of this article.
Researchers have given this process a name: Intelligence Explosion.
This concept is not new. Mathematician John von Neumann described the "technological singularity" in the 1950s. Computer scientist I. J. Good wrote in 1965:
Let's define a super-intelligent machine as a machine that far exceeds the smartest human in all intellectual activities. Since machine design is also an intellectual activity, a super-intelligent machine can design better machines. Then, there will undoubtedly be an intelligence explosion, and human intelligence will be left far behind.
For seventy