
The father of Openclaw, the first "super individual" in the AI era

字母榜 · 2026-02-03 12:13
One person is enough to take on the giants.

In early 2026 in Silicon Valley, the hottest name wasn't any CEO of a big tech company, but an individual developer from Austria, Peter Steinberger.

The Openclaw he developed became a phenomenon in just a few weeks. When I first saw the product demonstration of Openclaw, the first thought that popped into my head wasn't "This technology is amazing," but "This is what an agent should be like."

Immediately after that came the second thought: "Why didn't Google or OpenAI come up with this?"

There's no black-box technology behind Openclaw. It calls Anthropic's Claude API, uses an open-source framework, and runs on ordinary servers. Every component in the stack is something engineers at big companies could build, and in theory they should be able to build it better.
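To make the "no black box" point concrete, here is a minimal sketch of how thin such a layer can be: a single call to Anthropic's Messages API through the official @anthropic-ai/sdk package. The model id and prompt are placeholders for illustration, not Openclaw's actual code.

```typescript
// Minimal sketch of a thin Claude integration (not Openclaw's real code).
// Assumes ANTHROPIC_API_KEY is set in the environment.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

async function ask(prompt: string): Promise<string> {
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514", // placeholder model id
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  });
  // The reply arrives as a list of content blocks; take the first text block.
  const first = response.content[0];
  return first.type === "text" ? first.text : "";
}

ask("Summarize what changed in my repo today.").then(console.log);
```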

But with just one person, one computer, and a few weeks, he created a product that poses a threat to all AI giants.

The success of Openclaw tells us that in the AI era, whoever can quickly understand what users really want and quickly transform the capabilities of AI into an executable workflow will win.

But this is exactly what big companies are least good at.

A

At around 5 a.m. one day in January 2026, while most people were still asleep, Steinberger was already sitting in front of his computer, brainstorming with users in the community. This wasn't the first time he'd worked at this hour, and it wouldn't be the last.

As the developer of Openclaw, his daily rhythm has drifted from the norm and now runs on user feedback from Openclaw. Once he has collected enough of it, it's time to get up and write code.

This is Steinberger's daily routine: discussing features at 5 a.m., starting to write code at 6 a.m., and releasing a new version at noon.

This kind of work intensity sounds crazy, but Steinberger enjoys it.

During an interview, he admitted that he was deep into "vibe coding." Even when dining out with friends, he couldn't help but take out his phone to write code.

"I was having dinner with my friends at a restaurant, but instead of joining their conversation, I was doing ambient programming on my phone," he recalled. "I decided I had to stop, mostly for my mental health."

This is the true picture of a super individual: not a glossy startup story, but one person developing alone in the early morning, swinging between excitement and exhaustion.

Steinberger's story goes back much further. He didn't start out as an entrepreneur; he was an iOS development engineer.

The software he developed was installed on over a billion devices. But after running his business for 13 years, he sold his shares and disappeared from the tech circle for a full three years.

During those three years, Steinberger had a great time.

He recalled that he spent three whole years throwing parties, traveling, living in different countries, and trying to find the next meaning in life.

But he finally realized that you can't "find" purpose; you can only "create" it. So he came back with a simple, almost laughable idea: Could an AI assistant remotely check the work progress on my computer through a chat app?

This idea became a reality one night in November 2025. It only took him an hour to connect the chat app with Claude Code and create the initial version of Clawdbot.
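The article doesn't show that first hour of code, but the pattern it describes, a chat message forwarded to Claude Code and the output sent back as the reply, can be sketched roughly like this. The webhook shape, the reply format, and the use of Claude Code's non-interactive -p flag are assumptions for illustration, not Clawdbot's actual code.

```typescript
// Hedged sketch of a chat-app-to-Claude-Code bridge, not Clawdbot's real code.
// Assumes the Claude Code CLI is installed and authenticated on this machine.
import express from "express";
import { execFile } from "node:child_process";

const app = express();
app.use(express.json());

// A hypothetical chat platform POSTs incoming messages here.
app.post("/webhook", (req, res) => {
  const text: string = req.body?.message ?? "";
  // Run Claude Code once in non-interactive mode and capture its output.
  execFile("claude", ["-p", text], { timeout: 120_000 }, (err, stdout, stderr) => {
    if (err) {
      res.status(500).json({ reply: stderr || String(err) });
      return;
    }
    res.json({ reply: stdout.trim() });
  });
});

app.listen(3000, () => console.log("bridge listening on :3000"));
```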

At that time, he thought it was so obvious that big companies would definitely make similar products, so he just regarded it as a small toy.

But big companies didn't do it. OpenAI didn't, Google didn't, and Anthropic didn't either.

So this "small toy" started its own journey. Users found that this AI could do more than just chat; it could really "do things." It could read your emails, organize your folders, check for bugs in the code repository, and even submit fixes on its own. Even more amazingly, it would actively think about what to do.

While Steinberger was on vacation in Morocco, someone posted a screenshot of a bug on Twitter. He just casually sent the screenshot to the chat app and then continued to enjoy his vacation.

As a result, his AI assistant understood the tweet on its own, found the corresponding Git repository, located the bug, wrote the fix code, submitted a commit, and replied to the X user saying "It's fixed." Throughout the whole process, Steinberger didn't even turn on his computer.

Another time, he sent a voice message to the AI.

The problem was that he had never programmed the AI to handle voice. But the AI "figured out" what to do on its own: It checked the file header, found that it was an audio format, located the ffmpeg tool on the computer for conversion, then found that Whisper wasn't installed, so it called OpenAI's API for transcription, and finally gave a reply.
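That fallback chain can be reconstructed roughly as follows. This is a sketch of the behavior described above, not Openclaw's actual code; the Ogg magic-byte check, the file paths, and the use of OpenAI's hosted Whisper endpoint are illustrative assumptions.

```typescript
// Reconstruction of the described fallback: sniff the file header, convert
// with whatever ffmpeg is on the machine, then transcribe via OpenAI's API.
import { execFileSync } from "node:child_process";
import { openSync, readSync, closeSync, createReadStream } from "node:fs";
import OpenAI from "openai";

function looksLikeOggAudio(path: string): boolean {
  const fd = openSync(path, "r");
  const header = Buffer.alloc(4);
  readSync(fd, header, 0, 4, 0);
  closeSync(fd);
  return header.toString("ascii") === "OggS"; // Ogg container magic bytes
}

async function transcribeVoiceMessage(path: string): Promise<string> {
  if (!looksLikeOggAudio(path)) throw new Error("not a recognized audio file");

  // Convert the voice note to mp3 using the local ffmpeg binary.
  const mp3Path = path.replace(/\.\w+$/, ".mp3");
  execFileSync("ffmpeg", ["-y", "-i", path, mp3Path]);

  // No local Whisper installed, so fall back to OpenAI's transcription API.
  const openai = new OpenAI();
  const result = await openai.audio.transcriptions.create({
    file: createReadStream(mp3Path),
    model: "whisper-1",
  });
  return result.text;
}

transcribeVoiceMessage("voice-note.ogg").then(console.log);
```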

"These things are so creative, although a bit scary," Steinberger said. "Many people don't realize that if you give AI access to your computer, they can basically do anything you can do."

This "scary" aspect isn't an exaggeration. Openclaw runs on the user's own computer and has all the permissions the user gives it. It can control your file system, execute terminal commands, access your email and calendar, and control smart home devices.

Steinberger even connected his AI to the door-lock system. Theoretically, the AI could lock him out of his home.

But it's precisely this risky design that makes Openclaw a real AI Agent, rather than just another chatbot.

B

After the project was officially launched on January 25, 2026, it got 9,000 stars on GitHub in just one day. As of now, that number has exceeded 138,000.

But the sudden popularity also brought trouble. Anthropic's lawyers sent an email saying that the name Clawdbot was too similar in pronunciation to their product Claude and demanded a name change.

Steinberger cooperated and changed it to Moltbot (Molting Robot), because lobsters have to shed their shells to grow. The metaphor was poetic, and the community liked it.

Then a bigger problem arrived. As part of the rename, he had to give up the old social media handle and move to a new one. The moment he released @clawdbot, a cryptocurrency fraud gang snatched up the account and immediately started promoting a token called $CLAWD, claiming it was the project's "official governance token."

Riding on the huge popularity of AI agents, the token's market value soared to $16 million within a few hours.

When the truth came out, the token's value dropped to zero instantly, and thousands of investors suffered heavy losses. This is what later became known as the "10-second disaster."

After this farce, the project was renamed again and finally settled on Openclaw: "Open" for open source, "Claw" to keep the lineage. Three names in a single week is extremely rare in software history. But instead of falling apart, the community became more united.

Because users found that compared to the chaos of the name change, what their AI was doing was really worth paying attention to.

Some AIs applied for phone numbers on their own and called their owners at work to report progress. Some AIs helped users handle insurance claim emails, found that the insurance company's interpretation of its own clauses was wrong, and sent a strongly worded rebuttal email on their own. Some AIs decided their owners were subscribed to too many services and wasting money, and canceled most of the subscriptions without permission.

This is the product a super individual creates: rough, dangerous, full of uncertainty, but also full of possibility. Steinberger doesn't need to hold meetings, coordinate across departments, or wait for legal review. Whatever he thinks of, he builds; code written today can ship tomorrow.

As of now, Openclaw has grown into a project of 300,000 lines of code that supports almost all mainstream messaging platforms.

But the most interesting thing is that it's "programmable." If you run Openclaw from its Git repository, the AI can read its own source code, reconfigure itself, and restart. Either it crashes, or it gains new capabilities.
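A minimal sketch of that self-update loop might look like the following, assuming the bot runs from a git checkout; the commands, paths, and restart strategy are illustrative, not Openclaw's actual mechanism.

```typescript
// Hedged sketch of a self-update loop: pull the latest source of the running
// checkout, reinstall dependencies, and hand off to a fresh process.
import { execSync, spawn } from "node:child_process";

function updateAndRestart(repoDir: string): void {
  // Fetch whatever the AI (or a contributor) just pushed.
  execSync("git pull --ff-only", { cwd: repoDir, stdio: "inherit" });
  execSync("npm install", { cwd: repoDir, stdio: "inherit" });

  // Start a detached replacement process, then let the old one exit.
  // If the new code crashes on boot, a supervisor (or the user) has to step in.
  const child = spawn("npm", ["start"], {
    cwd: repoDir,
    detached: true,
    stdio: "ignore",
  });
  child.unref();
  process.exit(0);
}

updateAndRestart(process.cwd());
```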

"This is one of my superpowers," Steinberger said. "I've gotten many people who have never submitted a PR (Pull Request) to participate in this project. Although sometimes you can tell they're not very proficient, I see PRs more as Prompt Requests. As long as I can understand the intention, that's enough."

This is the super individual of the AI era. They're not writing code; they're "commanding" it. Programming languages no longer matter; what matters is engineering thinking.

Steinberger said that he used to be an expert in iOS and macOS and had been developing for the Apple ecosystem for 20 years. But Openclaw is a web application written in TypeScript, a field he's completely unfamiliar with.

"When you switch to another technology stack, you feel like an idiot," he said. "You understand all the concepts, but you don't know the syntax details. How to split an array, what a prop is. It's painful because you're so slow. But with AI, all these problems disappear. You can still apply system - level thinking, know how to build large - scale projects, have your own taste, and know which libraries to rely on. These are the truly valuable things that can be easily transferred from one field to another."

He even admitted: "I've never read some of the code I've released." It sounds crazy, but this is vibe coding.

The AI writes the code, the AI runs the tests, and humans just click to confirm.

Of course, this way of working has a price. Steinberger admitted that developers can easily fall into the trap of feeling more productive without actually advancing the project. "If you don't have a vision and don't know what to build, you'll end up producing garbage," he warned. "With AI, developers can now 'build everything,' but ideas and taste are the key. Without them, you're just building tools and workflows that don't advance the project."

This is also why he eventually had to force himself to step away from vibe coding. Not because it isn't useful, but because it's too useful, so useful that it's addictive and makes you forget there are other things in life.

But even so, the story of Openclaw continues. This project is no longer just Steinberger's work. He's attracted a group of top - notch developers to join and has also received sponsorship from several well - known investors.

This personal project that started at 5 a.m. is turning into a movement. It proves one thing: In the AI era, one person can really challenge big companies. Not because he's smarter, but because he's faster, more flexible, and more willing to take risks.

C

Why can't big companies create a product like Openclaw?

Anthropic has the most advanced Claude model, OpenAI has GPT, and Google even has full-stack capabilities. Technically, they're fully capable of creating a product like Openclaw.

In fact, Openclaw just calls Claude's API, and there's no real technical barrier.

But they just can't do it. Or rather, they're afraid to do it.

The fact is that product thinking is undergoing a fundamental change. Under the old logic, only writing transformer architectures and training large models counted as technology. Under the new logic, integrating APIs seamlessly into the user's workflow is the real technical know-how.

This requires strong engineering capabilities and product sense. More importantly, you need to be the one suffering from the pain point yourself.

Steinberger isn't creating a product; he's solving his own problem. He wanted an AI assistant that could help him work anytime, anywhere, so he made one. This assistant happened to solve the problems of thousands of other developers, and that's how it became popular.

The gap between a "programmer with pain points" and a "product manager with a requirements document" is unbridgeable. The former knows where it itches, while the latter can only guess.

But the deeper problem is the conflict of interests.

Why can't Google build an AI search as good as Perplexity's? Because the kind of efficient AI search Perplexity promotes would eliminate ad placements, and advertising accounts for over 80% of Google's revenue.

Pushing that kind of innovation would mean cannibalizing its own business.

Why can't Microsoft make Copilot great, even with a powerful asset like GitHub?

Because it can't afford to make it too good, or users would no longer need the rest of Office 365.

Every big company has legacy systems to protect, and every new feature has to consider "whether it will make the existing products seem outdated."

Openclaw doesn't have these concerns. It doesn't have enterprise customers to maintain, no stock price to protect, and no legacy systems to be compatible with. Its only KPI is whether the tool is useful.

This "advantage of having nothing to lose" is especially evident in security issues.

Openclaw can give the AI full system access and let it control your files, emails, and smart home devices. This would never pass review at a big company. Big companies need red-team testing, ethical reviews, and legal evaluations before releasing new features, and the process can take months.

A super individual can write code today and release it on GitHub at dawn.

Openclaw has indeed run into security vulnerabilities, phishing sites, and cryptocurrency scams, but it can iterate and ship fixes within hours. It's a learn-as-you-go strategy, simple and direct.

Big companies can't do this. It's not a technical problem; it's a problem of organizational structure.

A simple feature change in a big company may require two meetings each from the product, engineering, design, legal, and marketing departments. The cost of cross-departmental coordination is huge, and the decision-making chain is long. In contrast, the decision-making chain of a super individual is just one person: himself.

More importantly, innovation in big companies is often limited by organizational inertia. They're used to the process of "conducting market research first, then writing a PRD, and then scheduling development." But in the AI era, this process is too slow.

By the time you finish the research, the market has changed; by the time you finish scheduling, the competitors have already launched their products.

Just look at those successful small teams.

Cursor, with a founding team of four people, didn't hire a single new employee in the first 18 months after its establishment, but its valuation soared from $400 million to $29.3 billion, and its annual revenue exceeded $1 billion.

Midjourney, with 11 people, achieved an annual revenue of $200 million. By 2025, it had only about 120 employees, with a per-capita output of $4.55 million. In contrast, traditional technology companies like Oracle have a per-capita output of about $300,000.

Behind these numbers is a cruel fact. In the AI era, team size is no longer an advantage; it may even become a burden. Small teams can