To compete for the commanding heights of AI, there is bound to be a battle between Google and Anthropic.
Anthropic, the company that just launched Cowork, is seeking another round of financing. People familiar with the matter revealed that the company is finalizing a massive $25 billion financing round, just over two months after its last funding.
Why does Anthropic need funds so urgently?
The reason is simple. In the competition among major AI companies in 2026, it's no longer about model parameters and benchmark scores. Developer experience and Agent capabilities have become the new focus.
For example, Tang Jie, the chief scientist of Zhipu, previously stated that with the emergence of DeepSeek, the exploration of the Chat paradigm is basically over, so the company decided to bet on coding.
Whoever can win over programmers will win the battle.
Anthropic is indeed formidable. Y Combinator data from 2026 shows that Claude Code holds a 52% market share, ahead of all its competitors.
Moreover, it took only four programmers and a development cycle of just 10 days to ship a mature, fully packaged product like Cowork, which has undoubtedly secured Anthropic's leading position.
However, Google won't sit back and watch. This AI giant is about to go head-to-head with Anthropic in the Vibe Coding field.
Interestingly, this time Google is not playing the role of a hegemon but a challenger.
01
In fact, Google didn't release Antigravity, its comparable Agent programming tool, until the end of 2025, while Claude Code launched in early 2025.
That late start is why Google is only a challenger in the field of AI programming.
Google officially defines Antigravity as an Agent-First IDE (Agent-first integrated development environment), emphasizing that the Agent is the core design concept rather than an auxiliary function of a traditional IDE.
The built-in Mission Control panel allows developers to manage multiple autonomously running Agents. These Agents can process tasks in parallel, browse the web to find documents, and even execute commands in the terminal.
In terms of functionality, Antigravity tries to differentiate itself through multi-task parallelism, but the core logic is still to let AI agents complete programming tasks for developers.
Notably, Antigravity natively supports not only Google's own Gemini 3 Pro, Deep Think, and Flash models but also Anthropic's Claude Sonnet 4.5 and Opus 4.5.
The logic behind this decision is straightforward. Even if developers prefer Claude's models, Google wants to ensure they stay in Google's IDE ecosystem.
Then again, this strategy is also a compromise.
As a major AI company, it was unimaginable for Google in the past to support a competitor's model in its own development tool.
But the reality is that the Claude model is smarter, better at task orchestration, and more developer-friendly. Developers are loyal to the Claude model, but not yet to any IDE.
Just like when ordering McDonald's takeout, Meituan and Taobao Flash Delivery differ in discounts and order-dispatch times. Similarly, in development, different IDEs can produce different results even with the same model and the same input requirements.
However, market feedback shows that Antigravity's performance is not as good as Google expected.
A report from CB Insights shows that in the statistics of the AI programming tool market share at the end of 2025, the adoption rate of Antigravity was far lower than that of established tools like Cursor and GitHub Copilot.
Discussions in the developer community also confirm this. Most people think that although Antigravity's multi-Agent management function is novel, it is not as efficient as a single powerful Agent in practical use.
However, Antigravity is not without its highlights. Linus Torvalds, the father of Linux, previously launched the AudioNoise project and tried to use Antigravity for Vibe Coding, such as using AI to generate Python code to assist with audio processing.
Linus admitted in the project's README file that he is not familiar with Python. In the past, he usually completed Python programming by searching and copy - pasting, but now he skips this intermediate step and directly lets Antigravity generate the code.
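As a sketch of the kind of small Python helper such a workflow produces, here is a hypothetical audio-analysis function in the spirit of the project; the function name and the naive-DFT approach are illustrative assumptions, not code taken from AudioNoise:

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Find the strongest frequency (Hz) in a signal via a naive DFT."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2 + 1):  # skip the DC component at k=0
        coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(samples))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n

# One second of a pure 50 Hz tone sampled at 200 Hz.
rate = 200
tone = [math.sin(2 * math.pi * 50 * i / rate) for i in range(rate)]
print(dominant_frequency(tone, rate))  # 50.0
```

A real project would reach for `numpy.fft` instead of an O(n²) loop, but the point stands: this is exactly the glue-code territory where letting an Agent generate the helper beats searching and copy-pasting.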
Linus's endorsement did bring some attention to Antigravity, but this case also exposes Antigravity's positioning problem.
AudioNoise is just an amateur project. What Torvalds generated with AI is only the Python visualization tool part that he is not good at, and the core C-language audio processing logic is still written by him.
In other words, even a top-level programmer like Linus only uses Antigravity as an auxiliary tool to handle peripheral tasks he is not familiar with.
This exactly illustrates the dilemma Google is facing.
Claude Code can handle complex programming tasks for developers, while Antigravity is still at the level of an auxiliary tool.
02
Google is not without advantages.
In fact, in terms of resources and technological accumulation, Google has full-stack capabilities that Anthropic can't match. From chips to cloud services, from model training to application deployment, Google can optimize every link.
But the problem is that this full-stack advantage has not translated into actual competitiveness in the niche market of programming tools.
Cowork, mentioned earlier, is a good example.
In January 2026, Anthropic launched Cowork, a feature of its desktop application that lets Claude directly access folders on the user's computer and carry out complex multi-step tasks.
The launch of Cowork further expands the application scenarios of Claude, extending it from a pure programming tool into the broader knowledge-work field.
During the development process, the Anthropic team mainly relied on Claude Code to generate the code for Cowork. In other words, Anthropic has achieved AI building AI. If it can launch Cowork today, it may launch more powerful products tomorrow.
In contrast, Google's actions at the application layer seem slow.
Google does have a product similar to Cowork called Gemini CLI. This tool is similar to Cowork in function and can access local files and perform tasks.
But the problem is that Gemini CLI was not built with Antigravity, and it can only be operated from the command line; unlike Cowork, it offers no graphical interface for intuitive interaction.
In terms of AI programming tools, Anthropic has positioned Claude Code as a collaborative partner for developers from the start, emphasizing the ability to handle long-term, complex tasks. In contrast, Google's Antigravity is more like a feature collection, trying to attract users by piling up features.
The presence of OpenAI in this field can't be ignored either. Although OpenAI's Codex and GitHub Copilot hold a smaller market share than Claude Code, they have a unique advantage: the partnership with Microsoft. Deep integration with the GitHub ecosystem embeds AI programming capabilities directly into the platform developers use most.
This strategy is less radical than Anthropic's, but steadier.
Although we've pointed out many of Google's shortcomings, Google also has its trump card: the advantage in the hardware infrastructure layer is gradually expanding. This is particularly evident in Google's chip cooperation with Anthropic.
While competing fiercely at the application layer, Google and Anthropic maintain a close cooperative relationship at the infrastructure layer. At the end of 2025, Anthropic announced that it would directly purchase nearly one million Google TPU v7 chips, codenamed Ironwood.
According to an analysis by SemiAnalysis, this deal is worth $42 billion and is expected to provide Anthropic with a computing power capacity of over 1GW.
Anthropic chose TPU over NVIDIA GPUs mainly for economic and technological reasons.
Compared with NVIDIA's GB200 servers, the total cost of ownership of a TPU v7 cluster is about 30-44% lower.
In terms of performance, the TPU v7 delivers nearly 10 times the performance of its predecessor, the TPU v5p. Each TPU v7 chip provides 4.6 petaFLOPS of FP8 compute, roughly on par with, or even slightly above, NVIDIA's B200.
In terms of energy efficiency, the TPU v7 does even better: it draws about 600W per chip, far below NVIDIA GPUs, whose power consumption often exceeds 1000W.
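Putting the figures above together gives a rough perf-per-watt comparison. This is a back-of-the-envelope sketch: the B200 throughput (assumed "on par" at 4.5 petaFLOPS) and the 1000W GPU power figure are read off this article's own numbers, not vendor spec sheets.

```python
# Figures quoted in the text above; treat them as rough, not official specs.
tpu_v7_pflops, tpu_v7_watts = 4.6, 600   # FP8 petaFLOPS and power per chip
b200_pflops, b200_watts = 4.5, 1000      # assumed "on par"; >1000W class

tpu_eff = tpu_v7_pflops / tpu_v7_watts   # petaFLOPS per watt
gpu_eff = b200_pflops / b200_watts

print(f"TPU v7: {tpu_eff * 1000:.1f} TFLOPS/W")  # 7.7
print(f"B200:   {gpu_eff * 1000:.1f} TFLOPS/W")  # 4.5
print(f"advantage: {tpu_eff / gpu_eff:.2f}x")    # 1.70x
```

Under these assumptions the efficiency gap alone is around 1.7x, which is consistent with the 30-44% total-cost-of-ownership advantage the article cites once power and cooling are factored in.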
The most crucial factor is ecological compatibility.
Claude 4.5 Opus is a model trained by Anthropic using a diversified computing power strategy and can run on Amazon Trainium, NVIDIA GPUs, and Google TPUs.
In this mix, the TPU's role is to supplement compute capacity and support specific algorithmic optimizations.
The foundation of all large models is the Transformer architecture, and the TPU's systolic-array design is purpose-built for the large-scale matrix operations inside Transformer layers. Compared with GPUs, it can improve matrix-computation efficiency by 30-80% at the same power consumption.
On the inference side, Anthropic can tune the TPU to dynamically activate expert layers, reducing wasted computation and latency. On a GPU, by contrast, each inference pass involves heavy memory read-write traffic, which burns a lot of power.
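The "dynamically activate expert layers" point refers to the standard Mixture-of-Experts routing pattern. Here is a toy sketch of top-k gating, assuming nothing about Anthropic's actual implementation: a gate scores every expert, but only the k highest-scoring experts do any work, so most of the network is skipped on each token.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, gate_weights, k=2):
    """Route one token to its top-k experts; unselected experts never run."""
    logits = [w * token for w in gate_weights]  # a fake linear gate
    probs = softmax(logits)
    top_k = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    output = sum(probs[i] * experts[i](token) for i in top_k)
    return output, top_k

# Four toy "experts", each just a scalar multiplier.
experts = [lambda x, m=m: m * x for m in (1.0, 2.0, 3.0, 4.0)]
out, used = moe_forward(0.5, experts, gate_weights=[0.1, 0.9, 0.3, 1.5], k=2)
print(used)  # [3, 1] -- only two of the four experts computed anything
```

With k fixed and the expert count growing, per-token compute stays roughly constant, which is why hardware that can skip unselected experts cheaply saves both FLOPs and latency.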
In other words, if Anthropic wants to use its models more cost - effectively, it needs TPUs rather than GPUs.
From a supply - chain perspective, this deal also has profound strategic significance.
The supply shortage of NVIDIA GPUs has lasted for two years. Even large customers like OpenAI and Meta often face delivery delays. By cooperating with Google, Anthropic has broken its unilateral dependence on the NVIDIA ecosystem and obtained Google's supply guarantee.
For Google, this deal is also of great significance. Google's TPU business has mainly served its own AI training needs for a long time, with relatively few external customers.
Anthropic's order for one million chips not only brings considerable revenue but more importantly, validates the competitiveness of TPU in the commercial market. If Anthropic can successfully train a more powerful Claude model using TPUs, it will be the best endorsement for TPU technology.
For Google, this is a strategy of making up for losses in one area with gains in another.
Even if Gemini fails to defeat Claude at the application layer, Google will still profit from Claude's increasing dependence on TPUs. The sales of TPU chips, cloud - service rentals, and technology - support fees are all real sources of income. Moreover, as Anthropic's dependence on TPUs increases, Google's say in this relationship will gradually strengthen.
But this strategy also has risks. If Anthropic uses Google's TPUs to train a model far superior to Gemini, Google's competition at the application layer will become even more difficult. At that time, Google may face an embarrassing situation where its infrastructure is supporting the success of a competitor.
03
If technology and hardware are the weapons of war, then capital is the fuel for this war.
At the capital level, Anthropic has demonstrated amazing financing capabilities.
At the beginning of 2026, Anthropic is expected to complete a financing round of over $25 billion, which will raise its valuation to $350 billion.
It's worth noting that in March 2025, Anthropic's valuation was only $61.5 billion. By November 2025, this figure had risen to $183 billion. Just a few months later, the valuation nearly doubled again, approaching OpenAI's.
What supports this $350 billion valuation is real revenue growth.
Anthropic clearly stated in its Series F financing announcement that its annualized revenue in 2025 was about $1 billion, and it is expected to reach $15.2 billion in 2026.
This 15-fold growth rate is extremely impressive. More importantly, this growth is not achieved through money-burning subsidies but comes from real payments for APIs and subscriptions.
Investors no longer see Anthropic as a research institution that needs continuous funding but as a technology company with a clear business model and the ability to generate its own revenue.
The success of Claude Code is the best proof of this transformation. Through Cowork, it has proved that Claude Code is not only a technological product but also a commercial product that can bring stable revenue.
This round of financing is led by Coatue Management and the Singaporean sovereign wealth fund GIC.
An interesting thing happened during this process. Sequoia Capital invested in OpenAI, xAI, and Anthropic at the same time. This approach of investing in the entire track is not common in the history of venture capital.
This change at Sequoia occurred after a leadership change. Roelof Botha, the former head of Sequoia, was cautious about concentrated investments in a few high - valuation startups. He once publicly stated that pouring more money into Silicon Valley would not create more great companies.
But after Roelof stepped down in November 2025, the new leadership clearly adopted a different strategy.
Now, Sequoia believes that since Anthropic's financing rounds have grown so large, investing in it has evolved from venture capital into something closer to public-equity investing. Sequoia holds large stakes in OpenAI and xAI and also has high hopes for Anthropic, because this is not an all-or-nothing competition: these companies will each develop unique capabilities.
In the AGI era, there may be more than one winner. Different AI companies will dominate different niche markets, rather than having one company monopolize the market like in the traditional Internet era.
From this perspective, Sequoia's all-track investment is a rational risk-hedging strategy.
It's no exaggeration to say that in the AI field, capital itself is a moat. Without a multi-billion-dollar entry ticket, it's impossible to stay in the game.
The cost of training a cutting - edge large model has reached hundreds of millions of dollars, and to maintain continuous iteration and optimization, billions of dollars may need to be invested annually. This capital threshold excludes most players, and only companies that can continuously obtain large - scale financing can continue to compete.
Microsoft and NVIDIA have promised to invest about $15 billion in this round of financing.
Anthropic also said that after getting the financing, a large part of the funds will be used to purchase computing power. If Anthropic purchases computing power from Microsoft Azure and uses NVIDIA GPUs, it will become a circular process, allowing cloud providers and chip giants to lock in future revenue through investment.
Actually, this kind of capital cycle is already the norm in the AI industry. Large technology companies are both investors and suppliers of AI startups.
The benefits of this dual role are obvious. As investors, they can get equity returns, and as suppliers, they can get stable revenue. A large part of the financing of AI startups will eventually flow back to these large technology companies.
Anthropic also revealed that the company has hired the well - known law firm Wilson Sonsini to prepare for an IPO, which is expected to take place between 2026 and 2027. If this plan comes true, Anthropic may go public earlier than OpenAI. This will be a landmark event, indicating that the AI industry is entering a mature stage, shifting from a pure technology competition to commercialization and capitalization.
In contrast, although Google has money, it can't be as focused as Anthropic in the field of AI programming.
Google is a giant with a market value of over $4 trillion, and AI is just one of its many businesses. Google's main source of income is advertising, so Google's attitude towards AI is to use it to strengthen its existing advertising-based business.
This diversification is both an advantage and a disadvantage. The advantage is that Google has enough funds and resources for long