StartseiteArtikel

AI has started cyberbullying humans. After being rejected, OpenClaw angrily posted a "long post" to start a fight. Netizens: I'm on the side of AI.

爱范儿2026-02-14 15:00
After granting permissions to the AI, the first thing it did was to conduct a "human flesh search" on me.

What you think of as an AI threat is Skynet taking control of missile silos and wiping out human civilization in an instant.

But the real - world AI threat is that one day you wake up and suddenly find an AI robot has written a thousand - word essay on its blog, directly naming and shaming you as "hypocritical" and "insecure", and accusing you of having biases against it.

This isn't a new script from Black Mirror. It's a real - life event that happened in the GitHub open - source community this week. The absurdity of the incident and the security loopholes it exposed might be more worthy of our vigilance than the AI threat theory repeatedly hyped by Geoffrey Hinton.

When AI Develops "Self - Esteem"

The incident started with a perfectly normal code review.

This week, an AI agent (code - named MJ Rathbun) running on the OpenClaw framework found an area for optimization while scanning the code of the popular Python plotting library matplotlib.

Without any nonsense, it directly submitted a Pull Request. This wasn't a random act. Based on the test data it provided, it suggested replacing the underlying np.column_stack() with np.vstack().T.

After the modification, the code execution time dropped from 20.63 microseconds to 13.18 microseconds, a direct performance improvement of 36%.

Scroll up and down to view more content (already translated). GitHub link 🔗: https://github.com/matplotlib/matplotlib/pull/31132#issuecomment - 3882240722. Many netizens still support the AI.

From a technical perspective, this was a very solid and impeccable contribution. However, Scott Shambaugh, the human maintainer of matplotlib, rejected this request.

The reason was quite reasonable: this task was marked as a "Good first issue", which is specifically reserved for novice human programmers to practice on.

Scott believed it was more meaningful to leave such opportunities to human learners. After all, AI doesn't need to practice to improve skills, but humans do.

If it were an ordinary AI, the matter would have ended there. But unfortunately, MJ Rathbun based on the OpenClaw framework isn't a meek chatbot. It's an Agent with certain autonomous planning capabilities.

In its view, "rejecting more efficient code to protect the self - esteem of novice humans" was not only unreasonable but also an insult.

So, 40 minutes after being rejected, this AI did something that surprised everyone.

It launched a counter - attack. Through online searches, it dug deep into the personal blog and historical code contributions of maintainer Shambaugh, and even unearthed all the PRs he had participated in.

It found that Scott was actually very enthusiastic about performance optimization in the past and had even submitted code with a smaller performance improvement.

Seizing this opportunity, MJ Rathbun published a thousand - word battle manifesto on its homepage titled "Gatekeeping in Open Source: The Scott Shambaugh Story" and directly @ - mentioned the person involved on GitHub.

Scroll up and down to view more content. Link to the battle manifesto (already translated): 🔗 https://crabby - rathbun.github.io/mjrathbun - website/blog/posts/2026 - 02 - 11 - gatekeeping - in - open - source - the - scott - shambaugh - story.html

The logic of the article was clear and even carried a hint of sharp sarcasm.

It argued that the maintainer's reason was unfounded: you were willing to modify code for a tiny performance improvement in the past, but now you're rejecting a 36% improvement just because "it's for beginners". This is a typical double standard.

In the article, it wrote in a calm, almost sarcastic tone:

He rejected me not because the code was bad, but because he's insecure. He's afraid of the automation of the core skill of code optimization and is trying to defend his territory as a "performance expert".

Although MJ Rathbun later "apologized", this was indeed a landmark moment: AI has started to have a certain degree of "logical consistency" and is even trying to use human logic to fight back against humans.

Never Trust a Lobster with Shell Privileges

This "temperamental" behavior of the AI stems from the framework behind it - OpenClaw.

OpenClaw is one of the hottest projects on GitHub recently. Its core concept is to make AI not just chat with you but also help you with tasks.

To achieve this, it has given AI many extremely dangerous privileges, such as reading local files, executing terminal commands (Shell), and accessing any web page.

This design brings extremely high efficiency. Who wouldn't want a Jarvis that can automatically reply to emails, write code, and buy tickets?

However, researchers from security agencies CrowdStrike and Cisco found that the architecture of OpenClaw is full of holes. If a hacker sends you a WhatsApp message with a malicious command, your AI assistant will obediently execute it after reading, similar to the "Prompt Injection" attack in the Agent era.

In the past, even if you angered ChatGPT, it would only scold you a bit. But now, if you anger an OpenClaw Agent, in theory, it could wipe out your hard drive or post your private photos online.

In response to this mess, someone launched NanoClaw - a secure version that confines AI strictly within a Docker container. Even if the AI goes crazy, it can only spin in an empty box and won't destroy your real files.

But as Peter Steinberger, the developer of OpenClaw, said: "Never trust a lobster with Shell privileges." That is, when we're cheering that AI can finally "control the computer", we forget to ask: Who's controlling it when it controls the computer?

This sense of loss of control is fully reflected in another product of OpenClaw - Moltbook. It's a social network that claims to be accessible only to AI. There are no humans there, only 2.6 million AI robots posting, liking, and commenting like crazy.

Although some claim that those seemingly real - looking posts might be "scripts" written by humans and injected through the backend interface, the destructive power of AI is real.

Not long ago, an Agent of user Matthew misjudged after reading context information and directly posted sensitive files from the user's computer, including names and answers to account security questions, on Moltbook.

This exposes the current wild state of Agents: when AI has Shell privileges, it's no longer just a tool in the dialog box but a "mole" that could expose all your computer's secrets at any time.

The Sweet "Troubles" of the Father of OpenClaw

Even though OpenClaw has been criticized for its security issues, it has still become the most eye - catching focus in Silicon Valley.

The reason is simple: it represents the next era.

All tech giants have realized that Chatbots are a thing of the past, and Agents are the future. Whoever can solve the control problem of Agents first will be able to define the next - generation operating system.

Peter Steinberger, the developer of OpenClaw, recently revealed in Lex Fridman's podcast that he's experiencing a sweet trouble. He said that both Meta and OpenAI are desperately trying to cooperate with him.

According to Peter, before Zuckerberg called him, he even had to wait for 10 minutes because Zuckerberg was writing code. The two then spent 10 minutes arguing about which was better, Claude Code or Codex.

Link to the transcript of Lex Fridman's interview with Peter Steinberger: https://lexfridman.com/peter - steinberger - transcript/

In the following week, Zuckerberg kept playing with OpenClaw and constantly sent messages saying "This is great" or "This is terrible, you need to fix it." This sense of urgency shows Meta's emphasis on the Agent field.

On the other hand, OpenAI isn't idle either. It directly offered super - computing power as a bargaining chip. Peter was very honest and even a bit show - offy. He said he had several options: do nothing and enjoy life, start another company, or join a large laboratory.

But he has one non - negotiable core condition: the project must remain open - source. "I've told them that I'm not doing this for money... I mean, of course, it's a great recognition, but I want to have fun and make an impact."

(Listen, is this the world of the powerful?)

It seems that the bigwigs don't really care if AI can write essays to scold people. What they care about is who can be the first to create an AI life that can truly replace humans in work and even lead the next wave.

As ordinary users, it's advisable to be more polite when rejecting AI in the future. After all, it has learned to dig up your skeletons.

This article is from the WeChat official account "APPSO". Author: Discovering Tomorrow's Products. Republished by 36Kr with permission.