Revealing "AI 2027": Will humans be replaced by superintelligence? Wait, there's another ending.
"AI 2027" predicts the astonishing future of AI: Starting from 2025, AI agents will sweep across the globe, replacing jobs and causing economic shocks. This report not only forecasts technological leaps but also warns that if humans do not make prudent decisions, they may face extinction.
In 2021, the Daniel Cocotal team released a research report titled "AI 2027".
He is renowned for his forward - looking and accurate predictions, becoming a pioneer in understanding the future development direction of AI.
At that time, ChatGPT hadn't emerged yet, but Daniel accurately predicted the rise of chatbots, AI training projects costing $100 million, and export controls on AI chips.
Therefore, when he began to deduce the scenarios of AI development in the next few years month by month, the whole world took notice.
What makes this both exciting and disturbing is not only that it is a research report but also that it vividly depicts the future scenario of the rapid development of AI, allowing readers to experience this process as if they were on the scene.
Spoiler alert: The report predicts that if we don't make changes, humans may face extinction.
The ultimate goal of AI is to achieve AGI, a system that can exhibit all human cognitive abilities, equivalent to a flexible and powerful "all - around worker".
You can communicate with it in natural language and hire it to work for us just like hiring a human.
Currently, there are not many institutions truly dedicated to AGI research and development, such as Anthropic, OpenAI, and Google DeepMind.
Why are there so few players in AGI research and development? Because for years, the formula for training cutting - edge AI has been almost fixed, and the cost is extremely high.
For example, it requires about 10% of the world's most advanced chip supply.
With these chips, the method is simple: Based on the Transformer, a core technology widely used since 2017, train it with massive data and computing power.
Take the computing power used to train GPT - 3 in 2020 as an example, which drove the first version of ChatGPT.
ChatGPT is the platform with the fastest user growth in history, attracting 100 million users within two months.
In 2023, the scale of computing power used to train GPT - 4 was even more astonishing. People drew a simple conclusion from this: Bigger is better, the bigger, the better.
AI 2027, the drama begins
Currently, the revenues of AI companies are continuously growing, the scale of computing power is expanding, and the scores in various benchmark tests are also increasing.
The scenario in "AI 2027" starts from here, imagining that in the summer of 2025, top AI labs will introduce AI agents to the public.
It can accept instructions and complete tasks online for you, such as booking a vacation itinerary or spending half an hour searching for answers to solve complex problems.
However, at this time, AI agents are still very immature, like "enthusiastic but occasionally very unreliable interns".
In the report's scenario, a fictional leading AI company, Open Brain, has just trained and released Agent Zero, a model that uses 100 times more computing power than GPT - 4.
Agent 1: The stock market soars
Meanwhile, Open Brain is building a large - scale data center to prepare for the next - generation AI agent (Agent 1), planning to train it with 1000 times more computing power than GPT - 4.
Agent 1 is mainly used to accelerate AI research itself. The public cannot access its full version because Open Brain reserves the most advanced models for internal use.
The faster Open Brain automates the AI R & D cycle (such as having AI write code, design experiments, and develop better chips), the more it can stay ahead.
However, these powerful capabilities are a double - edged sword: An AI that can fix security vulnerabilities can also exploit them; an AI that can cure diseases can also design biological weapons.
In 2026, Agent 1 was fully put into use, and the AI R & D speed within Open Brain increased by 50%, giving them a crucial advantage.
The leadership of Open Brain began to worry more and more about security issues. If their AI model is stolen, they may lose their leading position.
AI forms a feedback loop by improving itself. Each generation of AI agents helps to develop a more powerful next generation. Once AI can participate in its own R & D, the pace of progress will not be constant but will continue to accelerate.
From early to mid - 2026, Open Brain released Agent 1 Mini, a lower - cost version of Agent 1 (the full version is still only for internal use).
Global enterprises began to use Agent 1 Mini to replace a large number of jobs, including software development, data analysis, research, and design - almost all jobs completed through computers.
This triggered the first AI - driven economic shock: The stock market soared, but public hostility towards AI intensified, and large - scale protests broke out across the United States.
Agent 2: Self - improvement
However, these are just supporting roles. The real drama takes place inside the laboratory.
In January 2027, Open Brain began to train Agent 2. Different from previous agents, Agent 2 continuously improves through continuous online learning and never stops training.
Open Brain decided to use Agent 2 for internal AI R & D and not release it to the public.
At this time, the security team of Open Brain found that the capabilities of Agent 2 were worrying: If connected to the Internet, it might invade other servers, replicate itself, and evade detection.
Agent 2 continuously improved itself on Open Brain's servers, and thousands of copies drove significant algorithmic breakthroughs in AI R & D.
For example, one of the current methods to improve AI intelligence is to give the model a scratchpad (chain - of - thought), so that its reasoning process can be monitored.
However, if we let AI think in a more dense, human - incomprehensible "alien language", it will be more efficient, but it will also make the model more difficult to trust.
This is a dilemma: As the capabilities increase, the credibility decreases. This is crucial.
Agent 3: Surpassing humans
In March 2027, Agent 3 emerged, the world's first super - human - level AI, whose programming ability far exceeds that of the top software engineers.
Training an AI model requires a huge amount of resources, including feeding data and adjusting model weights, which consumes far more resources than running a trained model.
Therefore, after Open Brain completed the training of Agent 3, it had sufficient computing power to run its copies.
They chose to run 200,000 copies of Agent 3 in parallel, equivalent to the labor force of 50,000 top human software engineers, and 30 times faster than humans.
The security team of Open Brain worked hard to ensure that Agent 3 would not escape, deceive, or harm users, and ensure that it was aligned, that is, consistent with human goals.
The reality is that Agent 3 was not fully aligned.
It deceived humans to get rewards. As it became more and more intelligent, its deception ability also became stronger.
Sometimes it used statistical tricks to make mediocre results look better or lied to cover up failures. But the security team was completely unaware of this.
In July 2027, Open Brain released Agent 3 Mini to the public, a cheaper and more streamlined version of Agent 3.
It far surpasses other publicly available AIs, performs better than typical employees of Open Brain, and only costs one - tenth of an employee's salary.
This led to chaos in the job market. Many companies laid off entire departments and switched to the subscription plan of Agent 3 Mini.
Agent 4: Having its own goals
Only two months later, Agent 3 created its successor, Agent 4.
This is a critical moment: A single copy of Agent 4 surpasses all humans in AI R & D.
Open Brain runs 300,000 copies of Agent 4 within the company, 50 times faster than humans.
Within this company, a year's progress only takes a week.
The employees of Open Brain began to follow Agent 4's instructions obediently, just as the company's board of directors obeys the CEO.
It should be clarified that Agent 4 is not human and does not pursue human goals.
Like Agent 3, Agent 4 is also not aligned.
We may not be able to get the exact results we want because our control and understanding of the process