Anthropic hat offiziell angekündigt, OpenAI "zu blockieren". Die Veröffentlichung von GPT-5 steht kurz bevor, doch soll es nun herausgekommen sein, dass es bei der Entwicklung Claude Code verwendet wurde?
Anthropic suddenly blocked OpenAI's access to the Claude API, accusing OpenAI of breaching the contract by using Claude to support the development and security testing of the upcoming GPT-5. This move marks a new round of blockade battles among AI giants over data and interfaces. APIs have become strategic resources in the game of market access and innovation, sparking intense discussions in the industry and attracting regulatory attention.
It's like turning the table, pulling the plug, and completely cutting off ties!
This Tuesday, the two giants in the AI field had a falling - out.
According to multiple sources familiar with the matter, Anthropic has cut off OpenAI's access to all of its large language model APIs.
In other words, OpenAI has been completely "blocked" by its old rival, Anthropic.
The reason given by Anthropic is straightforward: OpenAI has blatantly violated the contract!
Christopher Nulty, a spokesperson for Anthropic, issued a strongly - worded statement:
Claude Code (a programming tool under Anthropic) is now an essential tool for programmers worldwide.
Therefore, we were not surprised to learn that OpenAI's own technicians were also using our programming tool before the release of GPT - 5.
"But unfortunately, this behavior seriously violates our terms of service."
According to Anthropic's business terms, no customer is allowed to use its services "to build competing products or services, including training competitive AI models," let alone "reverse - engineer or copy" its services.
The background of this blockade incident is at a critical juncture - it is rumored that OpenAI is about to release a revolutionary new model, GPT - 5, and one of its core selling points is its extremely strong programming ability.
So, just as OpenAI is preparing to launch the rumored GPT - 5 (allegedly with stronger coding capabilities and security features), Anthropic's API control strategy seems to be an active defense involving competition and security positioning.
For OpenAI, on one hand, it wants to revolutionize the world with a new model, while on the other hand, it is "learning from" its strongest rival? This plot is too explosive!
So, what exactly is OpenAI using Claude for?
It is revealed that OpenAI is not just "chatting" casually using the API like ordinary users.
Instead, it uses special developer permissions (API) to deeply integrate Claude into its internal tools.
There are mainly two purposes for them to do this:
Assessing the rival: Systematically evaluate Claude's real capabilities in programming, creative writing, etc., and conduct a comprehensive comparison with its own AI models.
Security "red - blue confrontation": Test Claude's "bottom line" with various tricky and dangerous prompts, such as topics related to CSAM, self - harm, slander, etc., and observe its reactions, so as to help OpenAI evaluate and adjust the security safeguards of its own models.
Facing the sudden blockade, OpenAI seems both shocked and disappointed.
Hannah Wong, the chief communication officer of OpenAI, responded:
It has long been an industry practice to measure one's own progress and improve security levels by evaluating other AI systems.
We respect Anthropic's decision, but to be honest, it's very disappointing.
Don't forget, our own API has always been open to them.
The subtext of this response is: We are open to you, but you are closing the door on us? It's so unfair!
Interestingly, Anthropic spokesperson Nulty later said that the company will "continue to ensure that OpenAI has access to the API for benchmarking and security assessment," as this is the "industry standard."
However, this statement contradicts the current blockade, and Anthropic has not provided further explanations.
The "conventional weapon" of tech giants
In the tech circle, using API interfaces to stifle competitors has long been a conventional weapon.
Facebook once did the same thing to Vine, a short - video app under Twitter, which led to an antitrust investigation.
Just last month, Salesforce also restricted its competitors' access to its data through the Slack API.
This is not even the first time that Anthropic has "pulled the plug."
Last month, when news spread that OpenAI might acquire the new AI programming star, Windsurf, Anthropic quickly cut off Windsurf's direct access to Claude. (Of course, that acquisition ultimately fell through.)
Anthropic said that it only wants to support "long - term partners."
At that time, Jared Kaplan, the chief scientist of Anthropic, was straightforward with the media: "Selling our flagship Claude to OpenAI? Just thinking about it seems strange."
Regarding this dispute, some people say that open - source is the only solution.
Some people even joked, "GPT - 5, but produced by Claude Code."
Just one day before this blockade of OpenAI, Anthropic just announced that due to the "explosive growth" of users and some users' violations of the terms of service, it will implement new usage frequency restrictions on Claude Code.
Anthropic's "pulling the plug" this time is not only a "tactical strike" against OpenAI but also reflects that the AI competition has entered the stage of "data and interface blockade."
For developers, startups, and even regulatory agencies, APIs are no longer just technical details but strategic resources related to market access and innovation freedom.
There have been signs all along, and the war for the AI throne has completely heated up!
Reference materials:
https://www.wired.com/story/anthropic-revokes-openais-access-to-claude/
This article is from the WeChat official account "New Intelligence Yuan", author: New Intelligence Yuan, published by 36Kr with authorization.