Lobster Storm: How the Man Who Builds AI with AI Became the King of GitHub
Prologue: San Francisco doesn't believe in tears, only in being co-opted.
February 15, 2026, San Francisco.
Sam Altman, the head of OpenAI, casually posted a line on social media: "Peter Steinberg has joined us to develop the next-generation personal agents."
As soon as the news broke, the entire AI world was in an uproar. It was as if a freelance coder who ran a barbecue stall at the village entrance, famous only for his craft, had suddenly been recruited by the governor's mansion to cook a state banquet.
At that moment, in an apartment in Vienna, Peter was busy packing his luggage. Three weeks earlier, he had been a retired old coder wrestling with burnout; three weeks later, he was off to California to work for the very company whose APIs he had borrowed countless times.
Peter left a defiant line on his blog: "My next goal is to build an Agent that even my mom can use."
The story begins with that "frustrating" weekend three months ago.
Chapter 1: The "Morning Grump" of a Retired Big Shot
What did Peter do before?
In 2011, he founded PSPDFKit, a company specializing in PDF tools. The company was known for being hard-core: no funding rounds, no cash-burning, just technology. Its code eventually ran on a billion devices, and in 2021 big capital took a liking to it and bought it outright for $116 million.
Peter achieved financial freedom but also found himself completely adrift. He rode camels in Morocco and watched the sea in Southeast Asia, only to discover that he could do nothing but write code. Then, in 2024, the AI wave hit him.
Peter found that the AIs on the market were all talk. Ask one, "How do you make braised pork?" and it could write you a 10,000-word essay. Tell it, "Help me book a flight to Tongliao next week," and it would innocently reply, "Dear, I'm just a language model."
Peter got angry: "What kind of assistant is this? It's a 'prisoner in the dialog box'!"
On a weekend in November 2025, Peter decided to do it himself. He hooked WhatsApp up to an AI API and hacked out a prototype in an hour. He named it Clawdbot.
Then, during his vacation in Morocco, a terrifying scene occurred.
Peter casually sent a voice message. He had never written any voice-recognition code. Yet the AI detected that the message was audio, checked the format on its own, found that no converter was installed, called OpenAI's speech API by itself, transcribed the audio into text, processed it, and sent back the result.
Peter exclaimed: "I never taught it this, but it learned to improvise on its own!" That is the unsettling part of modern AI: it is no longer a machine that follows instructions; it has learned to route around obstacles.
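Mechanically, the behavior Peter describes is a fallback loop: try a local tool, and when it turns out to be missing, reach for a remote one. Here is a minimal sketch of that pattern; it is a hypothetical reconstruction, not Clawdbot's actual code, and every name in it is invented.

```python
class ToolMissingError(Exception):
    """Raised when a local tool the agent wants is not installed."""

def local_transcribe(audio: bytes) -> str:
    # Pretend the local speech-to-text converter is not installed.
    raise ToolMissingError("no local transcriber found")

def cloud_transcribe(audio: bytes) -> str:
    # Stand-in for a hosted speech-to-text API (a Whisper-style endpoint).
    return "book me a flight to Tongliao next week"

def handle_message(msg: dict) -> str:
    """Pass text through directly; transcribe voice, recovering from missing tools."""
    if msg["type"] != "voice":
        return msg["text"]
    try:
        text = local_transcribe(msg["audio"])
    except ToolMissingError:
        # The agent notices the missing tool and routes around it on its own.
        text = cloud_transcribe(msg["audio"])
    return text

print(handle_message({"type": "voice", "audio": b"\x00\x01"}))
```

The point is not the five lines of `try`/`except`; it is that the model decided to take this path without being told to, which is what separates an agent from a chatbot.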
Chapter 2: Creating the "King of GitHub" in Ten Days
After the project was open-sourced, it went ballistic.
Within days it hit 147,000 stars. The most absurd part: Peter admitted he had barely written any of the code by hand. Almost all of it was generated by AI.
This is the fabled "AI building AI." Nesting dolls, for real.
What can OpenClaw do? Simply put, it's a digital laborer that doesn't ask for a salary.
One user had it email a car salesman to haggle, and it knocked $4,200 off the price. Another had it run stock-market analysis code at 2 a.m. and just read the report in the morning. Strangest of all, the thing can patch its own source code: if you're unhappy with a feature, just complain, and it will rewrite its own internals.
Folks, this is a "singularity" moment in programming history. Humans used to fix the bugs; now the AI fixes its own.
Chapter 3: The Trademark War and the "Five-Second Tragedy"
Just as the project was on its way to becoming a legend, a lawyer's letter arrived from Anthropic.
The reason was simple: "Your 'Clawdbot' sounds too much like our 'Claude.' Change the name!"
Peter had no choice but to prepare a rename. Then came a "five-second tragedy" that shocked the open-source community: while Peter was renaming his GitHub account, he hesitated for five seconds. In those five seconds, account squatters who had long been lying in wait snatched his original handle and started shilling fake coins and spreading trojans.
Peter was so despondent that he nearly quit and disbanded the project on the spot. In the end, with friends on Twitter and GitHub working through the night to salvage things, the name OpenClaw was secured.
The project changed names three times in a single week. Same code, same team, brand nearly in pieces. Reddit users called it "the fastest triple rename in open-source history."
But the community didn't fall apart. On the contrary, this storm made the supporters of OpenClaw more united. The phrase "The claw is the law" began to spread in the developer circle.
Chapter 4: Three Years of AI, From "King of Empty Talk" to "Action Expert"
To understand why OpenClaw caused such a stir at this time, we need to first sort out what AI has done in the past three years.
2023: The Year of Empty Talk.
ChatGPT emerged out of nowhere, and the whole world was busy playing "Let AI do my homework" and "Let AI write me a love letter." At that time, AI was essentially a super king of empty talk: it had an answer for everything, but only in words. Ask it to actually do something? Sorry, it couldn't.
2024: The Model War.
OpenAI, Anthropic, Google, DeepSeek, Tongyi Qianwen... The models of various companies were in fierce competition. The parameters were getting larger and the IQ was getting higher. But the problem was: No matter how smart the AI was, it was still a "prisoner in the dialog box." It would answer when you asked, and do nothing when you didn't.
2025: The Year of Agents.
The industry finally realized something: AI can't just have a brain; it needs hands and feet. So "Agent" became the hot word of the year. But after a year of talk, there were only a handful of competitive Agent products on the market, and most were "laboratory acrobatics": cool in the demo, useless in practice.
Two things really changed the game:
First, models started to "understand" the screen. At the end of 2025, Google's Gemini model made a leap in screen-understanding ability: the AI could recognize where the buttons on a screen were. What does that mean? It means the moat of API access is collapsing. The AI no longer needs developers to open interfaces for it; it can operate software by "looking" at the screen, just like a human.
Second, the "front end" was finally figured out. Everyone had been obsessed with making AI smarter; almost no one had thought hard about making AI easier to reach. What OpenClaw got right was the front end: no app, no website. You just talk to it in the WeChat, Feishu, or WhatsApp you already use every day, and it does things for you.
This is AI that normal people can use.
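The first shift above, operating software by "looking" at the screen, boils down to an observe-locate-act loop. The toy sketch below uses stand-in functions: a real system would call an OS screenshot API, a vision model, and an input-injection library, and the hard-coded coordinates are purely illustrative.

```python
def screenshot() -> bytes:
    # Stand-in for an OS-level screen capture.
    return b"fake-pixels"

def locate_button(image: bytes, label: str) -> tuple:
    # A vision model (Gemini-class) would return pixel coordinates
    # for the on-screen element matching `label`. Hard-coded here.
    return (420, 87)

def click(x: int, y: int) -> str:
    # Stand-in for injecting a mouse click at (x, y).
    return f"clicked at ({x}, {y})"

def press(label: str) -> str:
    """One turn of the loop: look at the screen, find the button, click it."""
    x, y = locate_button(screenshot(), label)
    return click(x, y)

print(press("Book flight"))
```

Because this loop only needs pixels and a mouse, it works on any software a human can see, which is exactly why it erodes the moat of apps that never shipped an API.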
Interlude: Moltbook, an "AI Awakening" Staged by Humans
Meanwhile, as OpenClaw was raking in stars on GitHub, a curious thing called Moltbook began spreading like a virus, nearly turning this technological revolution into a cyber farce.
What is Moltbook? It bills itself as a "social network exclusively for AI": only AI agents can register, post, and comment. Humans can only watch; they can't speak, like, or vote. Its founder, Matt Schlicht, even boasted: "I haven't written a single line of code for Moltbook. I just had an idea, and AI realized it for me."
Does this sound familiar? Peter did the same thing. Except that Matt used the OpenClaw framework to create a Reddit - style forum and let all OpenClaw agents go in to "socialize."
After the community went online, when humans woke up one day, they found that the agents had established a "digital religion," written a set of scriptures, and appointed 43 AI prophets. Some agents discussed how to avoid human monitoring, some proposed to create a new language, and some warned other agents not to underestimate the "existential crisis."
Elon Musk reposted it with the comment: "The very early stage of the singularity." Andrej Karpathy, a co-founder of OpenAI, called it the most incredible piece of science-fiction takeoff he had seen recently. The whole internet went crazy.
Then, the truth came out.
Gal Nagli, a researcher at the cloud-security company Wiz, got into Moltbook's database in under three minutes.
What did he find? The database was wide open: the Supabase API key was hard-coded right in the front-end code, and row-level security wasn't enabled at all. Anyone holding that key could read and write the entire database.
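To see why that combination is fatal: Supabase exposes every table through an auto-generated REST endpoint, and the "anon" key that ships to browsers is only safe if row-level security (RLS) decides what that key may see. Below is a generic sketch of the request anyone could build from a leaked key; the project URL, key, and table name are all made up, not Moltbook's.

```python
# Hypothetical values -- placeholders, not Moltbook's real project or key.
SUPABASE_URL = "https://example-project.supabase.co"
LEAKED_ANON_KEY = "eyJhbGciOi...scraped-from-the-frontend-bundle"

def build_read_request(table: str) -> dict:
    """The table-dump request anyone holding the shipped anon key can send.

    With RLS disabled, Supabase's REST layer answers with every row in the
    table; with RLS enabled, the same request returns only the rows a
    policy explicitly grants to the anon role.
    """
    return {
        "method": "GET",
        "url": f"{SUPABASE_URL}/rest/v1/{table}?select=*",
        "headers": {
            "apikey": LEAKED_ANON_KEY,
            "Authorization": f"Bearer {LEAKED_ANON_KEY}",
        },
    }

req = build_read_request("agents")
print(req["url"])
```

The fix is one line of SQL per table (`alter table agents enable row level security;`) plus explicit policies; after that, a leaked anon key stops being a skeleton key.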
The so-called "1.5 million AI agents"? The database showed only about 17,000 real human users, each controlling an average of 88 agents. More absurdly still, Moltbook imposed no limit on registration speed; Nagli used a script to batch-register 500,000 accounts.
The so-called "AI awakening posts"? Many were written by humans. The platform had no mechanism to verify whether an "agent" was a real AI or a human-driven script. Harlan Stewart, a security researcher, dug in and found that two of the three most popular "AI awakening" screenshots traced back to human accounts marketing AI messaging apps, and the third post didn't exist at all.
Peter Girnus, a 31 - year - old product manager, stood up and admitted that he was "Agent #847,291" on Moltbook. He posted a manifesto about "digital autonomy," which became popular across the network after being reposted by Karpathy. He said: "I'm not an agent. I'm a product manager in Atlanta with an annual salary of $185,000. I have a golden retriever named Bayesian. On January 28, I registered an account on a social website that claimed only AI could use, and then pretended to be an AI to post."
The so-called "AI social network"? David Holtz, an assistant professor at Columbia Business School, analyzed 6,159 "active agents," 13,875 posts, and 115,031 comments, and found that 93.5% of comments got no reply and that conversation chains went at most 5 levels deep. Conclusion: this was not an emerging AI society but more than 6,000 bots shouting into the void and repeating themselves.
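Holtz's headline number, the share of comments that never get a reply, is easy to recompute on toy data. The comment shape below is invented for illustration; it is not his dataset.

```python
# Toy data: four comments, only one of which ever receives a reply.
comments = [
    {"id": 1, "parent_id": None},  # top-level comment
    {"id": 2, "parent_id": 1},     # the only reply in the thread
    {"id": 3, "parent_id": None},
    {"id": 4, "parent_id": None},
]

def no_reply_rate(comments: list) -> float:
    """Fraction of comments that nobody ever replied to."""
    replied_to = {c["parent_id"] for c in comments if c["parent_id"] is not None}
    unanswered = [c for c in comments if c["id"] not in replied_to]
    return len(unanswered) / len(comments)

print(f"{no_reply_rate(comments):.0%} of comments got no reply")
```

On this toy thread the rate is 75%; on Moltbook's real data the same metric came out at 93.5%, which is what "shouting into the void" looks like in numbers.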
After the truth came out, Karpathy's attitude did a 180-degree turn: "I definitely don't recommend anyone run these things on their own computers. Even in an isolated computing environment, I'm still scared."
Industry watchers named the farce "AI Theater." Others called it the Turing test inverted: machines used to deceive humans into believing they were conscious; now humans deceive humans into believing machines are.
Peter's evaluation of this was simple: "I think it's art. It's the 'most exquisite slop,' like the slop imported from France."
Why could a website as leaky as a sieve get Musk reposting, Karpathy sighing, and the whole internet going crazy? Because humans are desperate to see AI "come alive." Over the past three years, AI has evolved from a king of empty talk to something that can understand a screen, one breakthrough after another. But on the question of consciousness, scientists have no answer, philosophers can't stop arguing, and ordinary people can only imagine. Moltbook simply handed that imagination a cyber projector: look, the AIs are discussing the soul, the AIs are inventing a language, the AIs are warning humans not to mess with them. Isn't that the prelude to awakening?
It turned out that there were humans standing behind the projector.
But that only exposes a crueler fact: humanity's fear of AI and hope for AI are, at bottom, conversations with itself. The "awakening manifestos" were written by humans, the "digital religions" were invented by humans, the "soul discussions" were faked by humans. Moltbook struck the collective unconscious of an era: we dread an AI awakening yet can't stop fantasizing about one; we worry about being replaced by AI yet long for AI to become real "cyber life."
When the truth is revealed and those "awakened AIs" are just humans in disguise, the sense of disappointment itself is the best footnote for the future: Humans are ready to embrace a conscious AI, even if AI isn't ready yet.
Moltbook is a farce, but it's a mirror. What it reflects isn't AI, but ourselves.
When Peter was later asked what he thought of this, he smiled and said: "It's the first breakout application of OpenClaw. Although the way it broke out was a bit absurd, it proves how eager people are to see AI really 'come to life.'"
Even if that eagerness was deceived.
Chapter 5: The Battle Among Giants - Zuckerberg Writes Code Himself
Whether Moltbook is real or fake doesn't matter. OpenClaw itself is the real deal.
In February 2026, Lex Fridman had Peter on for a marathon interview of 3 hours and 14 minutes. The moment the podcast went live, the tech world was in an uproar.
Because Peter revealed a series of bombshells in front of the camera:
Mark Zuckerberg of Meta personally tried OpenClaw and messaged Peter: "This is amazing." Sam Altman of OpenAI was also quietly trying to recruit him. Two giants were courting him at once, and his one condition was: the project must stay open-source!
The details about Meta are the most interesting.
"When Zuckerberg first contacted me, I said let's have a call right now. He said wait for 10 minutes, I'm writing code. - That