Anthropic CEO: Scaling Has No Walls, and the Reality Inside AI Labs Far Exceeds Public Imagination
Anthropic CEO Dario Amodei dropped a bombshell at the Morgan Stanley conference: the Scaling Law has not hit a wall at all, and 2026 will bring a radical acceleration. He drew an apt analogy to the fable of rice on a chessboard: we are standing on the 40th square, and everything from the first 39 squares combined is just a fraction of what the last 24 will bring. No one is ready for this exponential surge.
Has the Scaling Law hit a wall?
On March 3rd, Anthropic CEO Dario Amodei set the tone directly at the Morgan Stanley TMT Annual Conference:
We don't see a wall.
His core point can be summed up in one sentence:
The Scaling Law not only hasn't hit a wall, but will also experience a radical acceleration in 2026!
Moreover, this speed will catch everyone off guard.
Dario Amodei completely refuted the theory that AI has hit a wall.
If someone else said this, you could easily dismiss it as boasting.
But the person saying it runs one of the world's most highly valued AI companies and oversees the Claude family of models. Outside estimates put the company's annualized revenue at around $19 billion, nearly catching up with OpenAI.
Moreover, in the enterprise API market, Anthropic has a dominant share.
He's not making empty promises; he's describing what he sees every day.
The Chessboard Story That Even Mathematicians Fear
How can ordinary people experience the feeling of "singularity acceleration"?
Dario told a classic fable at the conference - The Rice on the Chessboard.
You may have heard the story: A chessboard has 64 squares. You put 1 grain of rice on the first square, 2 on the second, 4 on the third, and double the amount for each subsequent square.
It sounds harmless, right? But how many grains does the whole board hold?
About 18.4 quintillion.
That's 18,446,744,073,709,551,615 grains, or 2^64 - 1; the last square alone holds roughly half of them.
This number is so large that even if you planted all the rice fields in the world for 10,000 years, you couldn't produce that many grains.
What makes this story terrifying is not the ending, but the illusion during the process.
The first 32 squares hold only about 4.3 billion grains in total. That sounds like a lot, but set against the whole board?
It barely registers. The real explosive growth is concentrated in the last 32 squares.
Starting from the 33rd square, the increase in each square exceeds the sum of all the previous squares.
This is the trickiest part of exponential growth: it's slow in the first half, lulling you into thinking it's nothing special; then it suddenly takes off in the second half, and by the time you realize it, it's too late.
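The arithmetic behind the fable is easy to verify in a few lines of Python (the figures below are just the fable's own numbers, not a claim about AI):

```python
# Sanity check of the chessboard fable: square n holds 2**(n-1) grains,
# doubling from a single grain on square 1.
def grains_on(n: int) -> int:
    return 2 ** (n - 1)

def total_through(n: int) -> int:
    # Sum of a doubling series: 1 + 2 + 4 + ... + 2**(n-1) == 2**n - 1
    return 2 ** n - 1

print(total_through(64))  # 18446744073709551615 grains on the full board
print(total_through(32))  # 4294967295, ~4.3 billion on the first half

# Every new square out-holds all previous squares combined:
assert grains_on(33) == total_through(32) + 1

# The entire first half of the board is a rounding error of the total:
print(total_through(32) / total_through(64))  # ~2.3e-10
```

The last assertion is the key property: each square doesn't just add to the total, it exceeds everything that came before it.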
Dario said: We are now standing on the 40th square of this chessboard.
That means all the growth from the first 39 squares - from GPT-3 to ChatGPT to GPT-4 to Claude 3.5 and then to Opus 4.6, the flourishing of today's many models - all of this combined may be just a prelude to the last 24 squares.
His message was unambiguous: the run from the 40th square to the 64th will be faster than anything you've ever seen.
Even if you already think AI is developing fast, you're still not ready.
2026: The Year of Radical Acceleration
Radical acceleration - this is the term Dario used to describe 2026.
Note that he didn't say steady improvement or something to look forward to. He used radical acceleration.
Let's think about what happened in the past two years:
At the beginning of 2024, most people were still using ChatGPT to write emails and ask for recipes, thinking AI was fun but just so-so.
By the end of 2024, AI programming assistants began to enter the developers' workflow on a large scale.
In 2025, the concept of AI Agents exploded, and models began to complete multi-step tasks autonomously.
In 2026, what's happening is already overwhelming.
But in Dario's view, all of this is just the story of the first half of the chessboard.
What really excites him (or makes him nervous) is this: what they see in the laboratory is far wilder than what the outside world perceives.
This is a classic information asymmetry in the technology industry - The AI capabilities perceived by the public always lag behind the real level inside the laboratory.
When you think, "Wow, Claude has gotten much smarter," Anthropic may already have been that far ahead internally six months earlier.
Now, Dario directly says: In 2026, what's in the laboratory will spill over into the real world on a large scale.
The Code Field: The Starting Point of All Explosions
At this conference, Dario revealed a key piece of information: Code generation is currently the strongest leading indicator of the AI capabilities explosion.
He said the progress in this field has exceeded their most optimistic expectations.
What does "exceeding expectations" mean?
Anthropic itself is the best example - they are already using their own models to write code on a large scale internally.
If its internal usage were billed at standard rates, Anthropic would be one of its own biggest customers.
But the more crucial change lies in the spillover effect of code capabilities:
First stage: The model helps you write code. It saves time and improves efficiency, but in essence, it's still an advanced tool.
Second stage: The model starts to take over all the peripheral work around code - managing servers, controlling clusters, checking visual features, and building toolchains. It not only writes code but also understands the entire context of code operation.
Third stage: The model starts to build scaffolds and tools to make itself work more efficiently. That is - AI starts to use AI to improve AI.
This is why Dario said their end-to-end production efficiency has doubled or tripled. It's not that a single link improved by 30%; the gains compound across the entire chain.
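The "whole chain" claim is just compound multiplication. A minimal sketch with purely illustrative numbers (not Anthropic's actual figures): if each stage of a hypothetical three-stage pipeline gets a modest speedup, the end-to-end gain is the product of the stage gains, not their sum.

```python
# Illustrative numbers only: per-stage speedups multiply across a pipeline.
stage_speedups = [1.3, 1.4, 1.5]  # hypothetical gains for three stages

end_to_end = 1.0
for speedup in stage_speedups:
    end_to_end *= speedup

print(round(end_to_end, 2))  # 2.73: modest per-stage gains compound to ~2.7x
```

Three stages of 30-50% improvement already land in the "doubled or tripled" range the article describes.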
He further predicted that what's happening in the code field will be replicated in every corner of the economy, in a slightly slower but identical pattern. Finance, healthcare, law, education, manufacturing... every industry will go through the same three-stage penetration.
This is a very bold assertion. But if you recall the popularization path of the Internet - first used by programmers, then by enterprises, and finally by everyone - you'll find that the path of AI is almost a replica of history, just 10 times faster.
RSI: AI Learns to Pull Itself Up by Its Bootstraps
Beyond Dario's public statements, there's a more noteworthy signal: Recursive Self-Improvement (RSI).
What is RSI? Simply put, it's AI improving itself.
For example, current AI models are trained by human engineers with a large amount of data. But what if one day, an AI model can discover its own deficiencies, design experiments, optimize parameters, and improve its own capabilities?
It's like a student no longer needing a teacher and starting to set questions for themselves, grade their own papers, and find and fill in knowledge gaps. And after each round of improvement, the questions it sets are better, and the improvement speed is faster.
Pulling yourself up by your own bootstraps and taking off on the spot: a joke in physics, but something that's becoming a reality in AI.
From Dario's speech, it can be seen that Anthropic has probably made substantial progress in this direction.
He mentioned that the model can build tools and scaffolds to improve its own workflow, which is essentially an early form of RSI.
If RSI truly breaks through in 2026, AI's evolution could shift from exponential growth to double-exponential growth.
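To see how different those two regimes are, compare a plain exponential with a double exponential, where the exponent itself grows exponentially (toy numbers, purely illustrative of the math, not a forecast):

```python
def exponential(n: int, base: int = 2) -> int:
    return base ** n

def double_exponential(n: int, base: int = 2) -> int:
    # The exponent itself grows exponentially with n.
    return base ** (base ** n)

for n in range(1, 6):
    print(n, exponential(n), double_exponential(n))
# By n = 5, the plain exponential reaches 32 while the
# double exponential reaches 2**32 = 4,294,967,296.
```

An exponential curve looks flat next to a double-exponential one, just as a linear curve looks flat next to an exponential.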
Retaining Talent Is More Important Than Anything
Amid the rapid technological development, Dario also shared a very down-to-earth topic: the talent war.
Last summer, Meta tried to poach Anthropic's researchers with offers ranging from $100 million to $500 million. Note: that is for a single person.
This figure far exceeds the salaries of top professional athletes.
Faced with such sky - high temptations, what did Anthropic do?
Dario said their attitude towards the team is: you come to Anthropic out of a sense of mission, not because a competitor happens to throw a dart at your name and we hand you a tenfold raise. That would only tear the team culture apart.
What was the result? Faced with the temptation of $500 million, only two people from Anthropic went to Meta.
In contrast, OpenAI, which is about 1.5 times the size of Anthropic, lost several times more people.
Dario proudly emphasized that all 7 of Anthropic's co-founders are still on the job. The first departure you can find is around the 20th employee, and that person left many years after the company was founded.
This retention rate is almost impossible in Silicon Valley. In the AI industry where everyone is poached with high salaries, it's a miracle.
His summary is just one sentence: Technology can be bought, but culture can't.
This may sound like a cliché, but in the context of the AI arms race, it has a very real meaning - when your competitors can spend tens of billions on GPUs and billions on data, the only thing that can't be solved with money is a team that truly believes they're doing the right thing.
Where Does the Myth of the Scaling Law Hitting a Wall Come From?
In the past year, the statement that the Scaling Law has hit a wall has repeatedly appeared in the industry.
Its core argument is that as models become larger, the marginal return of increasing computing power and data is decreasing, and the performance improvement is getting slower.
Some researchers even claim that the development of large models is approaching the ceiling.
As early as 2024, Ilya Sutskever was saying that the fuel for AI training (data) was running out. (Sam Altman countered at the time that there is no wall.)
This statement is not entirely unfounded - on some specific benchmarks, the progress of the latest models is indeed not as amazing as before.
But Dario clearly disagrees with this conclusion. His refutation logic is also very straightforward:
First, the slowdown you see may just be a minor fluctuation on the exponential curve.
Moving from the 38th square to the 39th, the count merely doubles. But that doubling is already an astronomical number in absolute