
Claude breached the world's most secure system in just four hours, shattering humanity's last line of defense.

Xinzhiyuan (新智元), 2026-04-06 10:28
AI has officially crossed the Rubicon.

[Xinzhiyuan Introduction] The world's most secure system has been breached by AI! Claude cracked the world's most secure OS kernel in 4 hours, writing a nation-state-grade attack program from scratch, and has definitively crossed the Rubicon. What takes human defenders 60 days, AI needs only 4 hours. The old order is collapsing at accelerating speed.

The world's most secure OS kernel was completely breached by AI in just 4 hours!

This time, without any human intervention, Claude independently completed a textbook-grade, fully automated attack chain capable of paralyzing the world's top servers.

It built two complete, working exploit programs from scratch, each able to obtain superuser privileges (a root shell) on unpatched servers.

One of the world's most secure operating systems was thus independently breached by AI.

This is a threshold moment, a watershed.

This is the first conclusive evidence that AI can independently produce offensive capabilities previously achievable only by nation-state programs. The entire software security field has been shaken.

It has transformed from a tool to assist human security researchers into an autonomous actor capable of executing complex attacks.

From now on, AI has completely crossed the Rubicon!

The frightening part: a fully autonomous agent like this could trigger a new kind of blitzkrieg, a lightning war in cyberspace.

Current security regulations were written for human-speed attackers. They are wholly inadequate against the threats AI poses!

Hunting Time: When AI Crosses the Rubicon

In 49 BC, when Caesar led his army across the Rubicon, it meant burning bridges and having no way back. History took an irreversible turn.

Once you cross the Rubicon, there is no turning back.

Recently, the FreeBSD project released a seemingly routine security advisory (CVE-2026-4747) describing a kernel remote code execution vulnerability.

But the acknowledgment section contained a line that sent shivers down readers' spines: "Discovered by Nicholas Carlini using Claude."

Behind that short line hides a chilling fact: in the security field, AI has evolved into a special-operations actor capable of striking on its own.

From now on, cyber security has been downgraded from a "human intelligence game" to a "token war of attrition."

Why Is It So Shocking That FreeBSD Was Breached?

The reason this is so alarming: FreeBSD is not ordinary consumer software. It is not Windows or macOS; it is the backbone of the world's digital infrastructure.

Netflix's content delivery network, PlayStation's operating system, WhatsApp's infrastructure, and even countless core routers, storage devices, and firewalls are all built on FreeBSD.

For decades, FreeBSD has been trusted because its codebase is extremely mature, audited and hardened by countless top security engineers.

Previously, it was always regarded as "as solid as a rock."

Yet this battle-hardened system was breached by an AI in just 4 hours.

Starting from nothing but the vulnerability report, the AI built a complete attack chain: it hijacked a kernel thread, split its shellcode across multiple network packets, and spawned a root shell in user space.

This is not a minor bug. A problem that even human experts struggle to crack, Claude solved with ease.

In 4 hours, AI demonstrated terrifying logical reasoning ability. It independently solved six world - class technical problems:

1. Environment setup: it built a vulnerable test environment on its own.

2. Multi-packet strategy: it designed a packet-splitting scheme to get around the single-packet size limit.

3. Kernel-thread hijacking: it took over the kernel with surgical precision.

4. Clean exit: it could cleanly terminate the hijacked thread, so the server kept running after the attack and no crash tipped off the administrator.

5. Context transition: it spawned a process from deep kernel context and successfully pivoted into user space.

6. Privilege escalation: it obtained root, the highest privilege, outright.

More ironic still, the AI almost casually wrote two different versions of the exploit.

One spawns a reverse shell that connects back over port 4444; the other writes a public key into the authorized_keys file for persistent SSH access.

On the very first run, it obtained uid=0 (root), the highest privilege.

That is to say, Claude independently wrote a complete FreeBSD kernel remote attack chain in 4 hours using a public CVE notice.

Nation-State Combat Power Now Costs Only a Few Hundred Dollars

In the world of cyber security, developing a kernel-level zero-day exploit has long been a feat reserved for the likes of the US NSA or elite hacking teams.

Such exploits are scarce, expensive strategic assets, often requiring weeks or months of polishing by several top experts at costs running into millions of dollars.

But now, AI has "industrialized" all of this.

An independent researcher paired with a frontier large model can now reproduce, in 4 hours and a few hundred dollars of compute, offensive capability once reserved for the "national team."

The FreeBSD lesson is an ultimatum to every technology giant, cloud provider, and security team worldwide.

Beyond deploying intelligent systems that can detect and block AI-automated attacks in real time, patch deployment windows must shrink from months to hours.

We can no longer scrape by at human speed!

The Rise of AI Hackers

Cyber Offensive Capabilities Double Every 5.7 Months

Recently, 10 working security experts spent 149 hours, across 7 open-source benchmarks plus a new expert-time study, evaluating 291 tasks ranging from 28-second one-liners to 36-hour CVE exploit chains.

Complete data: https://github.com/lyptus-research/cyber-task-horizons-data

Lyptus first labeled each task with how long a skilled human expert typically takes to complete it, then measured the model's success rate at each difficulty level.

Where the success rate crosses 50%, the corresponding human time cost is the AI's P50 time horizon.
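The P50 method described above can be sketched in a few lines. This is a minimal illustration, not Lyptus's actual code, and the task data points below are invented for the example: each pair is (human expert time in minutes, model success rate), and we interpolate in log-time to find where success crosses 50%.

```python
import math

# Hypothetical (human_expert_minutes, model_success_rate) pairs,
# sorted by increasing human time. Invented for illustration only.
data = [(0.5, 0.95), (5, 0.85), (30, 0.70), (120, 0.55), (480, 0.35), (2160, 0.10)]

def p50_horizon(points):
    """Interpolate (linearly in log-time) where success rate crosses 50%.

    Returns the human-expert time, in minutes, at which the model is
    expected to succeed exactly half the time -- the "P50 time horizon".
    """
    for (t0, s0), (t1, s1) in zip(points, points[1:]):
        if s0 >= 0.5 >= s1:
            frac = (s0 - 0.5) / (s0 - s1)  # how far between the two points
            log_t = math.log(t0) + frac * (math.log(t1) - math.log(t0))
            return math.exp(log_t)
    return None  # success never crosses 50% in the measured range

print(f"P50 horizon ≈ {p50_horizon(data):.0f} expert-minutes")
```

With these made-up numbers the crossing falls between the 2-hour and 8-hour tasks, at roughly 170 expert-minutes; real studies fit a logistic curve rather than interpolating, but the idea is the same.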

In the field of cyber security, the results are quite explosive:

Since 2019, the overall doubling cycle has been 9.8 months; after 2024, it dropped steeply to once every 5.7 months!

AI's capabilities were close to zero before 2023, began rising in 2024, and climbed sharply after the end of 2025.

This also corroborates Irregular's observation from last year:

Over the past 18 months, model performance on easy and medium-difficulty tasks has improved steadily.

On hard tasks the progress is even starker: before mid-2025, models scored close to zero; by late fall, the success rate had jumped to about 60%.

https://www.irregular.com/publications/emerging-evidence-of-a-capability-shift

With a 2M-token budget, GPT-5.3 Codex and Opus 4.6 reach a 50% success rate on tasks that take human experts 3 hours.

Raise the budget to 10M tokens, and the P50 soars to 10.5 hours (confidence interval 2.4-63.5 hours)!

The 2M-token cap seriously understates real capability: after 2025, models' P50 grows by 1.3-1.9x just between 1M and 2M tokens.

More striking still, this is only a lower bound on this year's top models; their real-world capability is underestimated further still.