Anthropic, Rejected by 21 Top VCs: The Costliest "Miss" in AI History
In 2021, Anjney Midha took Anthropic's business plan into 22 meetings with top VCs and was turned away 21 times.
Fast forward to January 2026: Anthropic closed a $25 billion financing round, and its valuation soared to $350 billion.
What does that mean? It is roughly ten times OpenAI's 2023 valuation.
The investment tycoons who cited "risk control" as they slammed their doors back then are probably hiding in the restroom now, crying their eyes out.
This isn't just a slap in the face; it is the most expensive collective "IQ tax" of the century.
21 Rejection Letters: The "Blind" Moments of Top VCs
The firms that rejected Anthropic were, in Midha's eyes, the industry's heavyweights of the day.
Look at Anthropic's lineup at the time: core executives who had defected from OpenAI, the very people who built GPT-3.
With a roster like that today, the money would arrive before the pitch deck was even finished.
Midha thought it was a sure thing, but reality slapped him hard.
In 2021, however, large models looked to VCs like a bottomless money pit.
On top of that, Anthropic's team had an almost obsessive commitment to "AI safety" and a non-profit background. Mainstream VCs simply couldn't make sense of it, and traditional capital labeled the company a "high-risk outlier".
It wasn't until Spark Capital led the Series C financing that these investors woke up. Jason Shuman later had to admit:
Facts have proven that projects that everyone can understand in the early stage usually don't have much potential.
How steep was the price of this cognitive lag?
In May 2021, Anthropic closed a $124 million Series A led by Jaan Tallinn.
Measured against today's $350 billion valuation, the 21 institutions that passed missed out on a return of nearly 3,000-fold.
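The "nearly 3,000-fold" figure appears to come from dividing today's valuation by the size of the Series A round, which treats the $124 million raised as the investors' entry basis. That is an assumption on my part, not something the article states, but the arithmetic lines up:

```python
# Rough sanity check of the "nearly 3,000-fold" claim.
# Assumption: the multiple divides the January 2026 valuation
# by the May 2021 Series A round size, used as a proxy for
# the investors' entry basis.
series_a_round = 124e6    # Series A round size, USD
valuation_2026 = 350e9    # January 2026 valuation, USD

multiple = valuation_2026 / series_a_round
print(f"{multiple:,.0f}x")  # prints 2,823x, i.e. "nearly 3,000-fold"
```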
Risk Control Is the Biggest Risk
In this drama, Sequoia Capital perfectly demonstrated what it means to be caught flat-footed.
According to reports, Roelof Botha, Sequoia's global steward, repeatedly declined to lead an early investment.
The reason sounded high-minded: "concentration risk", a fear of putting all the eggs in one AI basket and unbalancing the firm's asset allocation.
That kind of correct-but-useless traditional-finance talk is a disaster in the face of exponentially growing AI.
Sequoia only changed its attitude after taking a very public hit. By early 2026, AI investment's contribution to US GDP growth had soared to 40%.
At that point, who still talks about asset allocation? This is a life-or-death asset. Sequoia's management went through a major reshuffle, and after Alfred Lin and Pat Grady took over, they quickly overturned Botha's conservative dogma.
Roelof Botha publicly responded to the leadership change at Disrupt in 2025 and defended Sequoia's "freedom of speech" culture.
In January 2026, Sequoia finally mustered the courage to join Anthropic's latest financing round.
Embarrassingly, the valuation had by then skyrocketed from $1 billion at Series A to $350 billion.
To avoid so-called "risk", Sequoia watched from the sidelines for five years, and in the end had to pay, with a heavy heart, a "cognitive premium" of more than 300-fold.
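The "more than 300 times" premium follows directly from the two valuations cited in the article; a minimal check:

```python
# Cognitive premium: ratio of the 2026 entry valuation to the
# Series A valuation Sequoia passed on, using the article's figures.
series_a_valuation = 1e9    # Series A valuation, USD (as cited)
valuation_2026 = 350e9      # January 2026 valuation, USD

premium = valuation_2026 / series_a_valuation
print(f"{premium:.0f}x")    # prints 350x, i.e. "more than 300 times"
```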
This wasn't just a Sequoia problem. The picture at the time was quite shocking:
Before Spark Capital entered the scene, most VCs would rather back unremarkable SaaS products than touch Anthropic, which was burning billions of dollars a year on computing power.
These investors feared being the first to take the plunge more than they feared making a bad bet. As a result, when the tide of the times went out, they were the ones caught swimming naked.
The Dimensionality-Reduction Strike of "Non-Mainstream" Capital
While mainstream VCs were still calculating ROI, who saved Anthropic?
A group of "madmen".
Jaan Tallinn, who led the May 2021 Series A, is a co-founder of Skype and a committed believer in AI safety. He completely upended Wall Street's logic for deploying money:
I'm not investing to profit from large models. I'm afraid that out-of-control AI will wipe out humanity.
His logic is "capital replacement": use money that cares about humanity's survival to crowd out money that only reads financial statements.
The follow-on investors included Eric Schmidt, former CEO of Google, and Dustin Moskovitz, co-founder of Facebook.
What these people have in common is obvious: they are rich, headstrong, technically literate, and answer to no LPs.
It also shows that the obsession with AI safety, treated as "poison" by institutional investors in 2021, was in fact the strongest moat in the eyes of genuine technologists.
Without Tallinn's money, paid "for the survival of humanity", Anthropic would probably have died at Series A.
It was this lifeline that let the team survive a two-year R&D period free of commercial pressure and work out the core logic behind the Claude series of models.
Ironically, the money dismissed as "charity" at the outset delivered, in 2026, some of the most explosive returns in the history of finance.
The Harsh Truth in 2026: Not Investing in AI Means Waiting for Death
In 2026, capital is scrambling for Anthropic not to make money, but to survive.
Macroeconomic data shows that with AI stripped out, US GDP growth falls below 0.7%.
AI is no longer just a trend; it is the only ventilator keeping the US economy breathing. Analyst Siddharth's analogy is blunt:
Pull out AI's oxygen tube, and the economy flatlines.
In the first half of 2025, after excluding information processing equipment and software (IPE&S, essentially AI infrastructure investment), real US GDP growth was close to 0%. Investment in IPE&S itself, meanwhile, soared 28%.
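A back-of-envelope calculation shows how a fast-growing component can carry headline growth on its own. The 5% GDP share for IPE&S and the 1.5% headline growth rate below are assumed illustrative figures, not statistics from the article; only the 28% growth rate comes from the text:

```python
# Back-of-envelope decomposition: contribution of a component
# to headline GDP growth is roughly (GDP share) x (growth rate).
# The share and headline figures here are illustrative assumptions.
ipes_share = 0.05       # assumed IPE&S share of GDP
ipes_growth = 0.28      # 28% growth, per the text
total_growth = 0.015    # assumed 1.5% headline real GDP growth

ipes_contribution = ipes_share * ipes_growth      # ~1.4 points
ex_ai_growth = total_growth - ipes_contribution   # ~0.1 points
print(f"IPE&S contributes {ipes_contribution:.1%}; "
      f"growth excluding it: {ex_ai_growth:.1%}")
```

Under these assumptions, a component that is only a twentieth of the economy but growing at 28% supplies nearly all of a 1.5% headline growth rate, which is consistent with the "close to 0% without AI" claim.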
The logic of venture capital has flipped as well. In 2026, capital began stampeding from general-purpose models toward vertical-domain agents.
As Amit Goel pointed out, VCs finally realized that no-code, enterprise-grade AI focused on vertical fields is the new gold mine.
It is another ironic cycle.
In 2021, VCs rejected Anthropic because they couldn't understand "safety" and "large models".
In 2026, they are being left behind by a new generation of boutique funds because they can't understand "vertical-domain knowledge".
This five-year cognitive war has proven one thing: capital never creates the future. It just pays a high price for a standing-room ticket once the future becomes unavoidable.
From 21 rejection letters to a $350 billion valuation, Anthropic has used hard numbers to strip the venture-capital world of its most respectable disguise.
Now that AI has become the sole pillar of GDP, capital's rush to enter has nothing to do with foresight; it is pure survival instinct.
Stop mythologizing the foresight of VCs. Those 21 rejection letters are iron-clad evidence that much of the $350 billion is a "cognitive tax" paid by the slow to catch on.
That is the reality: either understand and invest in 2021, or kneel and pay the price in 2026.
Reference Materials:
https://x.com/agentic_ai/status/2016013132720963723
https://x.com/jasonfurman/status/1971995367202775284/photo/1
https://www.datastudios.org/post/anthropic-s-history-from-ethical-ai-startup-to-global-tech-powerhouse-the-journey-from-2021-to-2025
This article is from the WeChat official account "New Intelligence Yuan", author: New Intelligence Yuan, editor: Qing Qing. Republished by 36Kr with authorization.