
While Anthropic was counting money, Google suddenly launched a surprise attack.

Letter AI · 2026-02-13 20:04

The second-largest private financing round in the history of technology landed today: Anthropic announced the completion of a $30 billion Series G round at a post-money valuation of $380 billion.

First place still belongs to the $40 billion record set by OpenAI last year.

The lead investors are Singapore's sovereign wealth fund GIC and hedge fund Coatue, joined by star institutions such as D.E. Shaw, Dragoneer, Founders Fund, ICONIQ, and MGX, top VCs like Sequoia, Lightspeed, Accel, and General Catalyst, as well as tech giants like Microsoft and Nvidia.

This list of investors is itself a hall of fame in the AI circle.

Behind this financing frenzy, both Anthropic and OpenAI are preparing for IPOs in the second half of 2026, which will be the highlight of the year.

According to Anthropic's financing announcement, the company's annualized revenue has reached $14 billion, with 80% coming from enterprise customers. The annualized revenue of the single product Claude Code has exceeded $2.5 billion.

This gives Anthropic more confidence for its listing.

Just as Anthropic was basking in the glow of its financing and IPO preparations, Yao Shunyu of Google tweeted that Gemini 3 Deep Think had received a major upgrade.

For it, Google has developed a mathematical research agent codenamed Aletheia, which can independently solve open mathematical problems and can also iterate on and verify its own work.

The most crucial thing is that it knows when it makes mistakes and which problems it can't solve.

Moreover, Gemini 3 Deep Think has reached a Codeforces Elo rating of 3455, surpassing 99.992% of human programmers globally.

According to Google's official statement, it can solve high - difficulty problems involving advanced data structures, dynamic programming, graph algorithms, number theory, etc.

Globally, Gemini 3 Deep Think's programming ability is second only to that of seven active top-level human competitors.

Google's intention is obvious. They are going to launch a surprise attack on Anthropic's two strategic strongholds: academia and programming.

A war for the right to define the "AI working mode" has just begun.

01 How Did the $380 Billion Valuation Come About?

At first glance, the $380 billion figure comes down to Claude Code.

After all, in just two months, the revenue of Claude Code has more than doubled. Enterprise users have contributed more than half of the revenue, and the number of commercial subscriptions has quadrupled in the first quarter of this year.

Claude Code's performance on its first day alone would be enough to support a unicorn company.

But if investors only valued a programming tool, they wouldn't have been so generous. What really made these shrewd capitalists open their wallets was the "product explosion" triggered by Claude Code.

Moreover, the power of this "explosion" exceeded everyone's expectations.

The OpenClaw project, originally named Clawdbot, became one of the fastest-growing open-source projects on GitHub within a few weeks, with more than 100,000 stars.

This autonomous AI assistant can run directly on the user's computer, manage calendars, send messages, and automate work processes.

For example, a developer asked the AI to monitor a task and report problems by voice. But OpenClaw didn't have a voice function, so the AI went online to find relevant skills and installed voice capabilities for itself.

Even more amazing is the Moltbook forum.

This is a social network designed specifically for AI. After its launch, more than 1.5 million AI agents registered. They discuss consciousness, share skills in multiple languages, and even spontaneously create digital religions. Humans can only watch; they have no right to post.

To be honest, when I first saw these reports, I wasn't sure whether to laugh or worry.

In addition, there is a tool called Cowork. Its development cycle was only 10 days, 90% of its code was generated by Claude Code, and the development team had only four people.

It was Claude Code that drove this "Cambrian explosion" of new products.

Investors see that Anthropic has redefined the working mode of AI and opened the path to AGI.

In the past two years, the ChatGPT, Claude, and DeepSeek we used could essentially only talk, not act.

You could ask ChatGPT to write an email, but it couldn't click "send". You could ask it to plan a trip, but it couldn't book a ticket. You could ask it to write code, but it couldn't run and debug it on your computer. These AIs are like brains trapped in a glass jar. No matter how smart they are, they can only give you advice through the glass.

Claude Code is no longer just a dialog box. It is an agent that can actively observe, think, and act.

This leap may seem like just a few lines of code changes, but for users, it is a qualitative change from a "consultant" to a "digital butler".

More importantly, AI is now being used to develop AI products. Once this recursive self-reinforcing cycle takes hold, technological progress will accelerate exponentially.

The success of Claude Code is also reflected in its impact on the traditional software industry. Over the past few months, the software industry has lost about $2 trillion in market value from its peak, and the software sector's weight in the S&P 500 has dropped from 12% to 8.4%, the largest non-recessionary correction in 30 years.

Investors' logic is straightforward: "If AI can automatically generate code, automate legal services, and automatically handle complex business processes, what's left of the value proposition of traditional SaaS companies?"

Wall Street analysts believe that "code may become cheap, but context is expensive."

Claude not only provides code-generation capabilities but, more importantly, can understand the complex business context of enterprises.

Claude Opus 4.6, released by Anthropic a few days ago, leads the world on GDPval-AA (a benchmark for measuring economically valuable work tasks in fields such as finance and law).

This benchmark tests whether an AI can handle real-world business scenarios, such as drafting contracts, analyzing financial reports, and assessing risks.

Claude performs excellently on these tasks, which gives investors a new growth story beyond the earlier code-writing and research use cases.

Anthropic is not just selling a product. It is building a habitat for a new species.

02 Google's Precision Strike

Just a few hours after Anthropic announced its financing, the team led by Yao Shunyu at Google announced a major upgrade of Gemini 3 Deep Think.

Just as Anthropic was about to pop the champagne, Google served up a heavyweight dish of its own.

This is not a coincidence but a carefully planned tactical surprise attack.

Google's upgrade this time focuses on the fields of "science, research, and engineering".

DeepMind emphasized in a podcast that AI should not just be a code-generation tool but a "scientific partner" capable of handling complex, ambiguous, and open-ended problems.

Keep in mind that many researchers already use Claude, in part because of its concise language style.

Google's intention is obvious. It is going to launch a surprise attack on Anthropic's two strategic strongholds: academia and programming.

As mentioned earlier, Aletheia, the mathematical research agent Google developed for Gemini 3 Deep Think, can independently solve open mathematical problems, iterate on its own work, and verify its results. Most crucially, it knows when it has made a mistake and which problems it cannot solve.

This "meta - cognitive" ability is an important sign of AI moving towards true intelligence.

Gemini 3 Deep Think doesn't "boost scores" by memorizing large numbers of exercises. It genuinely understands the essence of problems and derives solutions.

It can handle new problems it has never seen in its training data, an ability very close to the current human understanding of AGI.

Google also deliberately emphasized the practicality of Deep Think in its promotion.

Specifically, Google demonstrated how to use Deep Think to convert hand - drawn sketches into 3D printable files and how to help engineers model physical systems through code.

Academic ability is the high ground of the "technological narrative" for AI companies.

An AI that can solve international Olympiad problems and participate in cutting - edge scientific research has higher credibility and authority.

At the same time, academic research is also a "testing ground" for AI capabilities. A model that can solve open mathematical problems today can better handle complex decision - making scenarios in enterprises with "no standard answers and incomplete data" tomorrow.

By investing in the academic field, Google is actually paving the way for future enterprise applications.

But Google's challenge to Anthropic doesn't stop there.

It is also focusing on cost efficiency. Google claims to have reduced the unit cost of Gemini AI services by 78%.

Gemini 3 Pro is priced at $2 per million input tokens and $12 per million output tokens, far lower than Claude Opus. For enterprises that need to deploy AI at scale, this cost difference may be a decisive factor.
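To put that pricing gap in perspective, here is a rough back-of-the-envelope sketch. The Gemini 3 Pro prices come from the paragraph above; the "Opus-class" prices and the workload size are assumptions chosen purely for illustration, not figures from this article.

```python
# Back-of-the-envelope API cost comparison for a hypothetical monthly workload.
# Gemini 3 Pro prices are the ones cited in the article; the "Opus-class"
# prices and the workload volumes below are assumed purely for illustration.

GEMINI_INPUT_PER_M = 2.00    # USD per million input tokens (from the article)
GEMINI_OUTPUT_PER_M = 12.00  # USD per million output tokens (from the article)
OPUS_INPUT_PER_M = 15.00     # assumed illustrative price
OPUS_OUTPUT_PER_M = 75.00    # assumed illustrative price


def monthly_cost(input_millions: float, output_millions: float,
                 in_price: float, out_price: float) -> float:
    """Cost in USD for a workload measured in millions of tokens per month."""
    return input_millions * in_price + output_millions * out_price


# Hypothetical enterprise workload: 500M input tokens, 100M output tokens per month.
gemini = monthly_cost(500, 100, GEMINI_INPUT_PER_M, GEMINI_OUTPUT_PER_M)
opus = monthly_cost(500, 100, OPUS_INPUT_PER_M, OPUS_OUTPUT_PER_M)

print(f"Gemini 3 Pro:         ${gemini:>9,.0f} / month")  # $2,200
print(f"Opus-class (assumed): ${opus:>9,.0f} / month")    # $15,000
print(f"Cost ratio:           {opus / gemini:.1f}x")       # ~6.8x
```

Under these assumed numbers, the same workload costs several times more on Opus-class pricing, which is exactly the kind of gap that becomes decisive at large deployment scale.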

Google has its own TPU chips, its own data centers, and its own cloud-service platform. This vertical integration is difficult for Anthropic to match.

Anthropic has to rely on infrastructure from AWS, Google Cloud, and, going forward, Google TPUs, while Google can optimize the entire chain from hardware to software, giving it a natural advantage in cost control and performance tuning.

This surprise attack was very well executed.

03 Launching a Surprise Attack on the Two Strongholds of Academia and Programming

The essence of this competition is not about whose model has a higher score but about who can define "how AI should work".

Anthropic focuses more on "context understanding" and "task execution".

It hopes that AI can be like an experienced employee, understanding complex business scenarios, remembering long-term work history, and executing multi-step task processes. The advantages of this approach are obvious: it can quickly bring about revenue growth and a soaring valuation.

Claude Code is the best proof.

When AI can directly help enterprises solve problems and create value, customers are willing to pay for it, and investors are willing to invest in it.

Google focuses more on "basic reasoning" and "generalization ability".

It hopes that AI can be like a smart graduate student, able to think independently, derive solutions, and verify the correctness of results when facing new problems.

This approach seems more "academic", but it may be more sustainable in the long run.

Because Google believes that true intelligence is not about how many code snippets a model has memorized but about understanding the essence of problems and the logic of deriving solutions.

Actually, I think these two paths are not mutually exclusive, but they represent different priorities and resource allocations.

In the short term, Anthropic's strategy is more effective. It has grasped the market's thirst for "actionable AI" and proved the value of AI with real products and application scenarios.

This "application - driven" approach can quickly obtain market feedback, iterate products, and build a moat.

But in the long term, Google's "academia + engineering" dual drive may have more advantages.

Because as Google describes it, the ultimate form of AI should not just be a tool but an intelligent system capable of independent thinking and solving open - ended problems.

Of course, these two are not the only participants in this competition. Elon Musk replied to Anthropic's tweet announcing the financing, saying, "Anthropic will eventually become an institution that hates humans. This fate was sealed the moment you chose this name."

The word "anthropic" originally means "relating to human beings."

And Musk isn't just talking: his xAI is also competing with Anthropic.

Just one day ago, xAI significantly adjusted its personnel structure, and several co-founders left.

In addition, OpenAI has also recently launched several new products related to science and AI programming, and the entire AI industry is accelerating.

This "arms - race" - style competition is both exciting and worrying.

What's exciting is that competition will accelerate technological progress. We consumers will soon have more powerful products to use.

What's worrying is that this competition may ignore safety and controllability.

Are we really ready to embrace "actionable AI"?

In the past, AI was just an intelligence living in a dialog box. Its mistakes at most wasted your time.

But when AI can access your file system, execute terminal commands, control your browser, and send emails, a single mistake can have disastrous consequences.

This is why the right to define the "AI working mode" is so important.

It not only determines what AI can do and how it does it but also determines the relationship between AI and humans. Is it a master-servant relationship, a partnership, or something else?

The competition among AI giants is essentially a battle for the right to define the "AI working mode".

But I think ultimately, no one will lose, or rather, everyone will win.

Because future AI may need both Anthropic-style context understanding and task-execution abilities and Google-style theoretical reasoning and generalization abilities.

But before this convergence arrives, we will see more competition, more breakthroughs, and more chaos.

While Anthropic is counting money, Google is redrawing the battlefield. The war for the right to define the "AI working mode" has just begun.

This article is from the WeChat official account "Letter AI", author: Miao Zheng. Republished by 36Kr with permission.