The singularity of action has arrived. Mythos ushers AI from the era of "talking" into the era of "acting".
On April 8th, Anthropic officially announced the Claude Mythos Preview.
The company that never stops talking about "safety" dropped a bombshell on the global software industry, and then, instead of popping champagne, immediately moved to contain the blast itself.
Normally, this should have been a routine press conference featuring "larger parameters, higher scores, and higher prices."
However, this time, the situation was completely different: the model was developed, but it won't be fully released.
Anthropic clearly stated that the Mythos Preview won't be generally available. Instead, it will first be offered through Project Glasswing, a controlled-access program that allows a small number of partners to conduct defensive security research.
Beyond that initial batch, access will be extended to more than 40 institutions that maintain critical software infrastructure. At the same time, Anthropic has pledged up to $100 million in usage credits and $4 million in direct donations to open-source security organizations.
In other words, what it released is not a "new model," but a technical notice with barriers, fire extinguishers, and emergency response plans.
What's most shocking about this is not that Mythos scored a few points higher than Opus 4.6, but that Anthropic no longer presents it as just a "more talkative model."
Anthropic is telling everyone: the model is evolving from "being able to solve problems" to "being able to take action."
The world is entering the era of action intelligence
The Mythos Preview's powerful cyber capabilities grow out of its agentic coding and reasoning skills.
On the evaluation page, its capabilities are divided into three categories: agentic coding; reasoning; and agentic search and computer use.
Taken together, these three categories mean it's not just better at chatting, but better at observing, reasoning, operating, reviewing the results, and then operating again.
Once you understand the concept of "taking action," this whole thing doesn't seem like an ordinary AI news story anymore.
The real terrifying singularity is never about "whether it can think like a human," but about "whether it can work like a human, and do it faster, more stably, and more cost-effectively."
As long as the model only outputs text, images, or suggestions, no matter how amazing it is, it mainly shakes the world at the information level.
But once it starts reading code, opening terminals, running tests, finding vulnerabilities, writing exploits, operating browsers, and invoking tools, it enters the realm not of "expressive intelligence," but of action intelligence.
Mythos is approaching this threshold.
Anthropic's red-team blog wrote that, under user instruction, the Mythos Preview can identify and exploit zero-day vulnerabilities in every major operating system and every major browser.
Among the patched cases they mentioned is a vulnerability in OpenBSD that was introduced 27 years ago.
So why is Wall Street panicking first?
Because the financial market has more acutely realized that one of the most fundamental pillars of the software industry is being shaken.
For decades, vulnerability discovery and exploitation have been scarce skills, relying on the experience, intuition, and patience of a small number of top-notch security researchers.
This scarcity has enriched numerous security companies and supported the entire valuation logic of the SaaS world: since software will always have vulnerabilities, there will always be customers for patches, protection, monitoring, hosting, and consulting services; since top-notch security talent will always be scarce, high-margin services will always sell.
However, if models start to automate and scale this process at an incredibly fast pace, the easy days of the software industry may be over.
After Anthropic's update, US software stocks tumbled again. The S&P 500 Software & Services Index has fallen 25.5% this year.
Capital is re-asking a question: If "vulnerability discovery" and "code patching" are becoming more and more like capabilities achievable through computing resources, how much is the traditional software moat worth?
More strikingly, the panic has spread from the market to regulators.
Reuters reported that US Treasury Secretary Scott Bessent and Federal Reserve Chairman Jerome Powell have discussed the cybersecurity risks of Anthropic's models with the CEOs of major banks. In the UK, the Bank of England, the FCA, the Treasury, and the NCSC are also urgently assessing the potential impact of the Mythos Preview and preparing to brief banks, insurers, and exchanges on the risks.
Even before the model is fully public, banks, central banks, treasuries, and regulatory agencies are already holding meetings around it.
Many people imagine the singularity as "the world suddenly turning a page one day," but in reality, the singularity often means the world starts to rearrange its seats first.
The singularity is not that the model becomes more human - like, but that for the first time, the model behaves like an "actor in the digital world" on a large scale.
Most systems in human society essentially function not based on "persuasion," but on "operation."
Banks don't run on reports; they run on system calls, clearing processes, risk-control logic, and permission chains. Software companies don't deliver with slide decks; they deliver by reading, modifying, testing, and releasing code. Cybersecurity isn't defended with slogans; it closes the loop by detecting problems, verifying them, and fixing them.
As long as the model stays at the suggestion level, no matter how intelligent it is, it's just a high-level advisor. Once it can close that loop on its own, it starts to gain "job substitutability" and the power to rewrite the order of things.
What really gives people chills about Mythos is that it shows the prototype of this closed loop.
This also explains another glaring and crucial contrast: why can Anthropic demonstrate almost terrifying action intelligence in Mythos, but at the same time be criticized by developers for "dumbing down" its products?
The well - known issue on GitHub pulls no punches: Claude Code is unusable for complex engineering tasks with the Feb updates.
Based on an analysis of 6,852 Claude Code conversation files, 17,871 thinking blocks, and 234,760 tool invocations, the submitter believes that the ability to handle complex engineering tasks has significantly declined since February.
But this is exactly the cruel reality of "action-based intelligence": if you want it to really work, you have to give it deeper reasoning, longer chains of operations, higher token usage, and greater computing power.
It actually reveals a bigger industry truth in advance: action-based intelligence doesn't come for free; it's a high-cost system capability.
It requires not a more beautiful chat box, but longer contexts, stronger tool invocations, more stable resource scheduling, more expensive reasoning budgets, stricter security isolation, and more complex product strategies.
Mythos being "restricted" is, in a sense, not only because it's dangerous, but also because the entire industry isn't ready to provide this dangerous and powerful action ability to ordinary users in a low-cost, controllable, and scalable way.
Anthropic clearly states on the Glasswing page that their ultimate goal is to enable users to safely deploy Mythos-class models at scale, not only for cybersecurity, but also for other high-value scenarios.
The subtext of this statement is: not yet.
That's why the most uncomfortable thing for OpenAI right now may not be getting outscored on the leaderboards, but the fact that enterprises are starting to pay in earnest for this kind of "action-capable" intelligence.
Ramp's data from March 2026 makes it clear: Among enterprises making their first purchase of AI services, Anthropic has won about 70% of the head-to-head matchups against OpenAI.
Axios further wrote based on Ramp's data that Anthropic's spending share in such new enterprise purchases has exceeded 73%.
This doesn't mean that OpenAI has completely lost. OpenAI's revenue forecast for this year is still higher than Anthropic's.
But it shows a more crucial point: Enterprises are starting to bet their real money not just on "who can answer questions best," but on who is more like a system that can be integrated into the workflow and actually start working.
Ultimately, the most memorable thing about Anthropic's release of Mythos is this: For the first time, we can clearly see the inflection point where AI is transitioning from "language intelligence" to "action-based intelligence."
Previously, the most powerful models were like advisors, teachers, comedians, or secretaries.
They could persuade you, inspire you, comfort you, and write for you, but they rarely truly entered the system to complete a full-fledged digital action for you.
The significance of Mythos is that it makes the entire industry suddenly realize that this path is real and closer than many people thought.
Once the model truly learns on a large scale to "observe the environment - make a plan - invoke tools - operate the system - verify the results - continue to iterate," the software, finance, cybersecurity, enterprise services, and even regulatory logic will all be rewritten.
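The "observe - plan - invoke tools - operate - verify - iterate" cycle described above can be sketched in a few lines of Python. Everything here, the `agent_loop` function, the toy `run_tests` verifier, the `plan_step` callback standing in for the model's reasoning, is a hypothetical illustration of the loop's shape, not any real Anthropic API or product behavior.

```python
# A minimal, hypothetical sketch of an agentic closed loop:
# observe the environment -> plan/act via a tool call -> verify -> iterate.

def run_tests(workspace):
    """Toy verifier: a file 'fails' if it still contains the word 'bug'."""
    failures = [name for name, text in workspace["files"].items() if "bug" in text]
    return (len(failures) == 0, failures)

def agent_loop(goal, workspace, plan_step, max_iters=10):
    """Iterate until the goal is verified or the step budget runs out.

    plan_step(goal, observation) -> (filename, new_content), standing in
    for the model's reasoning plus its chosen tool invocation.
    """
    for step in range(max_iters):
        # 1. Observe the environment (snapshot of the workspace).
        observation = dict(workspace["files"])
        # 2. Plan and act: the "model" proposes an edit (a tool invocation).
        filename, new_content = plan_step(goal, observation)
        workspace["files"][filename] = new_content
        # 3. Verify the result; stop if the goal is met.
        passed, _feedback = run_tests(workspace)
        if passed:
            return {"status": "done", "steps": step + 1}
        # 4. Otherwise loop again; the next observation reflects the edits made.
    return {"status": "budget_exhausted", "steps": max_iters}
```

A toy run: a "model" that simply replaces "bug" with "fixed" in the failing file closes the loop in one step; a model that never fixes anything burns its whole budget. The point of the sketch is that the verifier, not the model's eloquence, decides when the work is done.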
The singularity is not when it suddenly says "I think, therefore I am." The singularity is when it suddenly starts taking action, and does it better and better.
From that moment on, the world will no longer just see it as a chatting machine.
This article is from the WeChat official account "New Intelligence Yuan". Author: New Intelligence Yuan. Republished by 36Kr with permission.