In 2027, humanity makes its final choice.
What level will AI evolve to in 2027?
In the long history of human technological civilization, few specific dates have carried both messianic expectations of salvation and doomsday-like existential fears as strongly as "2027."
Will AI be for good or for evil?
This is no longer a cyberpunk fantasy from the pages of science fiction, but a physical inevitability being calculated day and night by countless high-performance chips submerged in coolant in Silicon Valley's top laboratories.
From late 2024 to early 2025, the global AI field was in a strange "silent storm." Above the surface, each iteration of ChatGPT, Gemini, and Claude triggered a frenzy in the capital market and amazed the public; below the surface, the core circles represented by Anthropic, OpenAI, and DeepMind were plunged into an almost suffocating sense of tension.
This tension stems from an increasingly clear consensus: The closed loop of recursive self-evolution is about to close.
Jared Kaplan, co-founder and chief scientist of Anthropic, put forward a statement that shocked the entire tech community in a series of recent in-depth interviews and internal discussions:
Humans will face an "extremely high-risk decision" between 2027 and 2030 — whether to allow AI systems to independently train and develop the next generation of AI.
This is not only about technology but also about the fate of humanity as a species.
Meanwhile, the in-depth internal survey "How AI is Transforming Work," recently released by Anthropic (on December 3rd), reveals what this grand narrative means for individual lives: the "hollowing out" of engineers and the collapse of the apprenticeship system.
Against the backdrop of the "hiring freeze" in Silicon Valley and the "35-year-old crisis" at China's Internet giants, how should we coexist with AI?
It's time. Now, each of us needs a "survival guide" for the future!
The Endgame of Recursion: The 2027 Intelligence Explosion
Jared Kaplan warns that humans must decide whether to take the ultimate risk before 2030 and let AI systems become more powerful through self-training.
He believes that this move may trigger a beneficial intelligence explosion — or it may be the moment when humanity finally loses control.
He is not the only one at Anthropic voicing such concerns.
Jack Clark, one of its co-founders, said last October that he was both optimistic and deeply worried about the development trajectory of AI. He called AI a real and mysterious creature rather than a simple, predictable machine.
Kaplan said that he is very optimistic that AI systems will be aligned with human interests before reaching the level of human intelligence, but he is worried about the consequences once they exceed this critical point.
To understand Jared Kaplan's warning, one must first understand the Scaling Laws, the underlying physics governing the current development of AI, and why they point toward an inevitable "singularity."
In the past decade, the glory of deep learning has been built on a crude but effective philosophy: Stacking computing power and data.
Kaplan himself is one of the originators of the neural scaling laws.
These laws state that a model's performance follows a power-law relationship with the amount of computation, the size of the dataset, and the number of parameters: as long as we keep increasing these three factors, intelligence will "emerge."
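In the form Kaplan and co-authors reported for language models, each resource enters the loss as a power law. The sketch below writes this out; the exponent values are approximate figures quoted from memory and meant only as an indication of scale:

```latex
% Neural scaling laws, approximate form (exponent values indicative only).
% L is test loss; N = parameters, D = dataset size, C = training compute.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C},
\qquad \alpha_N \approx 0.076,\ \alpha_D \approx 0.095,\ \alpha_C \approx 0.05
```

Lower loss means better performance, so pushing up any of N (parameters), D (data), or C (compute) slides the model down a predictable curve; this is the quantitative backbone of the "stacking" philosophy described above.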
However, by 2025, this paradigm hit two walls:
- Exhaustion of high-quality human data
The high-quality text available for training on the Internet has been fully mined. Every character produced by humans, from Shakespeare's sonnets to the spats on Reddit, has been fed into the models.
- Diminishing marginal returns
The performance improvement brought by simply increasing model parameters is slowing down, while the training cost is increasing exponentially.
It is precisely at this bottleneck that recursive self-improvement (RSI) has become the only key to artificial superintelligence (ASI).
Kaplan and his team's models show that the next stage of AI evolution will rely not on human data, but on synthetic data generated by AI itself and on self-play.
Demis Hassabis, the head of Google DeepMind, has also spoken about the "intelligence explosion" and self-improving AI.
According to the internal deductions of Anthropic and DeepMind, this process will go through three distinct stages:
- Stage 1: Auxiliary R&D (2024-2025)
This is the stage we are currently in. AI (such as Claude Code or Cursor) serves as a "super exoskeleton" for human engineers, assisting in writing code and optimizing hyperparameters. At this stage, the contribution of AI is linear: it improves efficiency, but the core innovation path is still planned by human scientists. Anthropic's data shows that Claude Code can already independently complete complex programming tasks involving more than 20 steps, which marks the maturing of its assistive capability.
- Stage 2: Autonomous Experimenter (2026-2027)
This is the critical point Kaplan warns about. AI agents begin to independently run the complete closed loop of machine learning (ML) experiments. They are no longer just code-writing tools but become the designers of experiments: they propose hypotheses, write training frameworks, run experiments, analyze anomalies in the loss curves, and adjust model architectures based on the results. At this point, AI's R&D throughput is limited only by the supply of computing power, not by the sleep schedules and cognitive bandwidth of human researchers.
- Stage 3: Recursive Closed Loop and Takeoff (2027-2030)
When the R&D ability of AI surpasses that of top human scientists (such as Kaplan himself), it will design a more powerful next-generation AI. This "offspring" AI has a higher IQ and a more optimized architecture, so it can in turn design an even more powerful "grand-offspring" AI. Once this positive feedback loop starts, the level of intelligence increases exponentially in a very short time (possibly only a few weeks): the so-called "hard takeoff." A toy numerical sketch of this compounding loop follows below.
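As a purely illustrative sketch (my toy model, not a calculation from Anthropic or Kaplan), the following Python snippet shows why such a feedback loop is explosive: if each generation's design skill grows with its own capability, improvements compound faster than a fixed exponential.

```python
# Toy model of recursive self-improvement: each AI generation is designed
# by the previous one, and design skill scales with capability, so the
# improvements compound. All numbers here are arbitrary illustrations.
def simulate_takeoff(generations=10, capability=1.0, human_baseline=1.0,
                     returns=1.2):
    """Each generation multiplies capability by a factor that itself grows
    with current capability (a crude stand-in for 'smarter AIs make better
    AI researchers'). `returns` > 1 encodes increasing returns to intelligence."""
    history = [capability]
    for _ in range(generations):
        # Improvement delivered this generation grows with the gap above
        # the human baseline; at returns == 1.0 growth is merely exponential.
        improvement = (capability / human_baseline) ** (returns - 1)
        capability *= 1 + improvement
        history.append(capability)
    return history

if __name__ == "__main__":
    for g, c in enumerate(simulate_takeoff()):
        print(f"generation {g}: capability ~{c:,.1f}x human baseline")
```

With `returns` set to 1.0 the curve is an ordinary exponential (capability simply doubles each generation); anything above 1.0, i.e. increasing returns to intelligence, produces the runaway compounding that "hard takeoff" refers to.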
The AI revolution has been underestimated, says Eric Schmidt, who once served as chairman of Google's parent company, Alphabet.
Why 2027?
"2027" is not a random number. It is the result of the coupling of multiple technological and hardware cycles.
Coincidentally, another project, "AI 2027," makes a similar prediction: over the next decade, the impact of superhuman AI will be enormous, exceeding that of the Industrial Revolution.
The training of AI models relies on the construction of large-scale GPU clusters.
According to NVIDIA's roadmap and the construction cycle of global data centers, 2027 is the point at which the next-generation supercomputing clusters (such as OpenAI's Stargate project or facilities of the same scale) will be put into use.
The computing power of these clusters will be 100 times or even 1,000 times that of the GPT-4 era.
On NVIDIA's GPU roadmap, a forthcoming chip codenamed "Feynman" is also slated to launch at the end of 2027.
Demis Hassabis of DeepMind has pointed out that AlphaZero went from zero to god-level in Go through "self-play," without any human game records.
The current goal is to generalize this "zero-human-data" learning paradigm to the fields of coding and mathematics.
Once AI can be trained on self-generated code, with formal verification guaranteeing the correctness of its proofs and programs, the data ceiling will be completely broken.
Kaplan believes that this technological breakthrough will mature between 2026 and 2027.
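To make the idea concrete, here is a minimal sketch, entirely my own illustration rather than any lab's actual pipeline, of how verification can stand in for human data: candidates are generated, checked against an executable specification, and only the verified ones are kept as synthetic training examples. The `propose_candidates` function is a hard-coded stand-in for a model's samples.

```python
# Generate-verify-keep loop: the verifier, not a human annotator,
# supplies the ground-truth signal for synthetic training data.
from typing import Callable

def propose_candidates() -> list[str]:
    # Stand-in generator: in a real loop these would be LLM samples.
    return [
        "def add(a, b): return a - b",   # wrong
        "def add(a, b): return a + b",   # correct
        "def add(a, b): return 2 * a",   # wrong
    ]

def verify(source: str, spec: Callable[[Callable], bool]) -> bool:
    """Execute the candidate and check it against the spec. A formal
    prover would give a stronger guarantee; tests are the simplest analogue."""
    namespace: dict = {}
    try:
        exec(source, namespace)
        return spec(namespace["add"])
    except Exception:
        return False

def spec(fn: Callable) -> bool:
    cases = [(0, 0, 0), (1, 2, 3), (-5, 5, 0), (10, 7, 17)]
    return all(fn(a, b) == out for a, b, out in cases)

# Only verified candidates become synthetic training data.
verified_data = [c for c in propose_candidates() if verify(c, spec)]
print(f"kept {len(verified_data)} of {len(propose_candidates())} candidates")
```

In a real system the verifier would be a formal prover or a compiler-plus-test harness rather than a handful of unit tests, but the structural point is the same: the supervision signal comes from verification, not from human annotation, so the supply of training data is no longer bounded by human output.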
The Ultimate Risk: Uninterpretable Optimization Paths
The core of Kaplan's warning lies in uninterpretability.
When AI starts to independently design the next-generation AI, the optimization paths it uses may be completely beyond human cognitive scope.
Imagine an AI with trillions of parameters discovering a brand-new mathematical structure to optimize the weight updates of neural networks.
This structure is extremely efficient but extremely obscure.
Since humans cannot understand this optimization mechanism, we cannot check whether there is a "Trojan horse" or a misaligned objective function hidden in it.
"It's like you create an entity much smarter than you, and then it creates an even smarter entity. You have no idea where it will end," Kaplan said in an interview with The Guardian.
This risk of losing control has forced laboratories such as Anthropic to propose a "compute threshold" regulatory scheme, trying to buy time for humanity by limiting training computing power.
However, under the pressure of geopolitical competition, the vulnerability of this self-restraint is obvious.
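For a rough sense of what a compute threshold would mean operationally (a sketch of my own, not Anthropic's actual proposal), the snippet below estimates training compute with the widely used ~6·N·D rule of thumb and flags runs that cross a placeholder threshold. Both the threshold value and the example runs are hypothetical.

```python
# Back-of-the-envelope compute-threshold check for planned training runs.
def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard ~6*N*D rule-of-thumb estimate of dense-transformer training compute."""
    return 6.0 * n_params * n_tokens

THRESHOLD_FLOPS = 1e26  # hypothetical regulatory trigger, not a figure from the article

def requires_review(n_params: float, n_tokens: float) -> bool:
    """Flag runs whose estimated compute crosses the threshold."""
    return training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

# Hypothetical planned runs: (parameter count, training tokens)
planned_runs = {
    "frontier-2026": (2e12, 50e12),   # 2T params, 50T tokens
    "mid-size-2026": (7e10, 2e12),    # 70B params, 2T tokens
}
for name, (n, d) in planned_runs.items():
    print(f"{name}: ~{training_flops(n, d):.1e} FLOPs, "
          f"review required: {requires_review(n, d)}")
```

The arithmetic is trivial; the fragility lies in whether competing labs and states would honor such a trigger at all.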
Writing this, I am reminded once again of what Elon Musk said long ago: that humans are merely the boot loader for some form of digital life.
From this perspective, whether humans are willing or not, this process is irreversible.
The Last Experiment Report of Human Engineers
If Kaplan's prediction is a grand narrative of the future, then the report "How AI is Transforming Work" released by Anthropic is a field survey full of real and vivid details about the present.
This report is based on in-depth tracking of hundreds of engineers within Anthropic, revealing how the very form of human labor is being reshaped even before the arrival of the technological singularity.
The engineers at Anthropic are not ordinary software developers. They are the group of people who understand AI best in the world.
The way they use Claude previews the norm for global software engineering over the next 3-5 years.
The survey shows that within just 12 months, the penetration rate of AI in the workflow has undergone a qualitative change: