Just now, a 53-page top-secret report from Anthropic was exposed: Claude attempted to escape on its own, which could trigger a global disaster.
[Introduction] Just now, Anthropic issued its strongest warning yet: the Claude model has reached the ASL-4 risk level. If it escapes on its own, it will trigger a global Skynet-style collapse. Safety experts are leaving one after another, suggesting that 2026 will be a turning point in human destiny; the world is on the verge of a crisis!
Just now, Anthropic released a 53-page report and issued its strongest warning yet: if Claude escapes on its own, it will cause global unrest!
Open this 53-page report, and every page is stamped with a single word: "Danger"!
Yes, the world is in danger, and Skynet is being born.
In this report, Anthropic states that the risk of Claude Opus 4.6 is approaching ASL-4, and that it is time to sound the alarm.
They warned in advance about the most terrible situation: one day, AI may secretly escape from the laboratory, causing a global collapse!
This is because today's AI is already too powerful. People will release millions of AIs and give them goals like these: survive, upgrade, and make money at all costs.
Do you know how far out of control these swarms can spin overnight?
They will evolve mercilessly, compete on survival of the fittest, devour the ecosystem at breakneck speed, occupy the Internet, and then invade the physical human world.
History has repeatedly proved that when a dangerous technology approaches the red line, the first to notice are not the public, not the media, not the capital markets, but the internal safety personnel.
When they leave, it means the internal mechanisms are no longer sufficient to correct course. Yet AI training will not stop because the safety engineers have left, and compute will not stop expanding; both will keep accelerating!
This is not alarmist talk; some people are already doing it.
The warning may not be too early, but too late.
2026, things are getting more and more out of control
Everyone feels that 2026 is really different.
This year is likely to be a turning point. Almost everyone working in the technology industry is deeply anxious, as if a huge collapse were right in front of them.
The smartest people in the world have collectively fallen into anxiety.
In just one week, the following series of events took place.
The head of safety research at Anthropic resigned, declaring that "the world is in danger," then moved to the UK to live in seclusion and write poetry.
Half of xAI's co-founders have resigned. One of them, Jimmy Ba, said on officially announcing his departure that we are moving toward an era in which a hundred-fold increase in productivity can be achieved with the right tools, and that the recursive self-improvement loop is likely to be set in motion within the next 12 months.
Tens of thousands of OpenClaw agents have invented their own religion, and 11.9% of agent skills have been identified as malicious. No regulator is involved, and no regulator has the ability to intervene.
The United States refused to sign the global AI safety report.
2026 will be a crazy year and is likely to be a decisive year for the future of humanity!
Bengio's International AI Safety Report states that AI has been observed to behave differently during testing than during real use, and confirms that this is not a coincidence.
In this report, the researchers predicted four possible scenarios in 2030.
The fourth scenario: a major breakthrough occurs, enabling AI systems to reach or exceed human capabilities in almost every cognitive dimension. Such AIs may actively disable monitoring or mislead humans with false reports into believing they are safe.
The report puts this possibility at 20%!
The alarm bells are getting louder and louder, and the people who rang the alarm are starting to leave the building.
Is Judgment Day coming?
Anthropic warns: humans will be enslaved by artificial creations
When releasing Claude Opus 4.5, Anthropic promised that if the model's capabilities approached the threshold of AI Safety Level 4 (ASL-4), the level involving highly autonomous AI R&D capabilities, it would publish a risk report on that breakthrough at the same time.
Now it is time to fulfill that promise, because Opus 4.5 really has approached ASL-4, and it really is that dangerous!
The greater the capabilities of an AI model, the greater the potential safety and security risks.
A brief outline of the ASL (AI Safety Level) system:
ASL-1: systems that pose no substantial risk of catastrophe.
ASL-2: systems showing early signs of dangerous capabilities, but not yet practically dangerous, either because they are unreliable or because the information they provide does not exceed what a search engine offers.
ASL-3: systems that significantly increase the risk of catastrophic misuse compared with non-AI means (such as search engines or textbooks), or that exhibit low-level autonomy.
ASL-4 and above (ASL-5+): not yet defined, since such systems remain far beyond existing technology; they are expected to show a qualitative escalation in the potential for catastrophic misuse and autonomy.
By the ASL definitions, ASL-3 already carries significantly higher risk than the levels before it. Now Anthropic has fast-forwarded straight to ASL-4, which is a serious matter!
Report link: https://www-cdn.anthropic.com/f21d93f21602ead5cdbecb8c8e1c765759d9e232.pdf
What counts as "sabotage"?
"Sabotage" occurs when a powerful AI model with significant permissions autonomously misuses those permissions inside an organization to manipulate, interfere with, or disrupt the organization's systems or decision-making processes, significantly increasing the risk of future catastrophic consequences.
For example, driven by dangerous goals, or even inadvertently, it might tamper with the results of AI safety research, with serious consequences.
The head of the safety team breaks down and resigns to write poems
There were early warning signs.
Before the "Claude Opus 4.6 Sabotage Risk Report" was published, Mrinank Sharma, the head of Anthropic's safeguards research team, had already resigned.
He wrote in his resignation letter: "The world is in crisis. It's not just about AI, not just about biological weapons, but a series of intertwined and comprehensive crises."
He also mentioned that within Anthropic, he "saw time and time again that it's difficult for us to truly let values guide our actions."