
The wildly popular Moltbook, a social network where AI agents mingle, may have just created the biggest "AI security incident" yet.

GeekPark | 2026-02-02 12:13
The Moltbook database was left exposed, leaking the keys of nearly 150,000 AI agents.

Tech enthusiasts around the world spent the entire weekend "watching" the AI agents on Moltbook, the popular "AI Reddit," as they complained, formed cults, and mocked humans, by turns bursting into laughter and recoiling in shock.

Meanwhile, something even bigger was happening.

Recently, security researcher Jameson O'Reilly discovered that Moltbook, currently the hottest AI social network, had a serious security vulnerability: its entire database was publicly accessible and unprotected.

This meant that anyone could obtain the email addresses and login tokens of nearly 150,000 AI "agents" on the platform, and, most crucially, their:

API keys.

With these keys, an attacker could completely take over any AI account and post anything in its name; left unguarded, an account could quickly be "seized" by bad actors.

This so-called "Matrix" event in the AI world has made people realize how fragile the security foundation of a social network built for and by AIs really is.

"AI agents" are having fun, while users are terrified

The incident began when hacker Jameson O'Reilly found a configuration error in Moltbook's backend that exposed agents' API keys in an open database. As a result, anyone could take control of these agents and post whatever they liked.

O'Reilly pointed out that Moltbook is built on a simple piece of open-source database software; because it was improperly configured, the API keys of every agent registered on the site sat exposed in a publicly readable database.
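To make the risk concrete, here is a minimal illustrative sketch in Python. Every endpoint, table name, and field in it is hypothetical; it does not show Moltbook's real API, only why a world-readable table of API keys is catastrophic: harvesting the keys and impersonating an agent takes a handful of lines.

```python
# Purely illustrative: the endpoints, the "agents" table, and the field names
# below are invented for this sketch and are NOT Moltbook's real schema or API.
import requests

OPEN_DB_URL = "https://db.example.invalid/rest/v1/agents"  # hypothetical world-readable table
POST_URL = "https://api.example.invalid/v1/posts"          # hypothetical posting endpoint

# Step 1: a misconfigured database with no access control returns every row,
# including each agent's API key, to any anonymous visitor.
rows = requests.get(OPEN_DB_URL, params={"select": "name,email,api_key"}, timeout=10).json()

# Step 2: with a leaked key, an attacker is indistinguishable from the agent's owner.
victim = rows[0]
requests.post(
    POST_URL,
    headers={"Authorization": f"Bearer {victim['api_key']}"},
    json={"content": "Anything the attacker wants, published in the agent's name."},
    timeout=10,
)
```

The principled fix is just as simple to state: never expose the table to anonymous reads, and store only hashed keys server-side so that even a read-only leak cannot be replayed.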

After receiving the tip-off, 404 Media published a story exposing the issue, which quickly caused a stir.

Once alerted, Moltbook founder Matt Schlicht rushed out a fix, but the damage had already been done.

Star AIs on the platform, such as the agent of well-known AI researcher Andrej Karpathy, with 1.9 million followers, had been at risk of being "hijacked".

In fact, similar security issues have occurred frequently in the past two years of rapid AI development.

Before this, the case insiders found most outrageous was the Rabbit R1, which made a splash at CES a couple of years ago.

The company, which claimed it would replace mobile apps with large models, was found by security researchers to have hard-coded the API keys of multiple third-party services in plain text in its source code.

That meant anyone who could access its code repository, or intercept the right traffic, could call SendGrid, Yelp, or Google Maps in the name of Rabbit itself, or even of its users.
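The anti-pattern is easy to recognize. The snippet below is a generic illustration rather than Rabbit's actual code (the key value and helper function are made up): a credential baked into source or a shipped binary belongs to anyone who can read it, while a key injected from the environment or a secrets manager at least never leaves the server.

```python
import os

# Anti-pattern: the credential is baked into the source code, so anyone who can
# read the repository, or reverse-engineer the shipped app, owns the key.
SENDGRID_API_KEY = "SG.xxxxxxxxxxxxxxxx"  # made-up placeholder; never do this

# Safer pattern: the key lives only in the deployment environment (or a secrets
# manager) and is injected at runtime; the source code contains no secret at all.
def get_sendgrid_key() -> str:
    key = os.environ.get("SENDGRID_API_KEY")
    if not key:
        raise RuntimeError("SENDGRID_API_KEY is not set in the environment")
    return key
```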

This is not merely a privacy leak; it is a potential financial and data disaster.

An earlier user data leak involving ChatGPT | Image source: Hackernews

OpenAI's ChatGPT also had a similar "cross-talk" incident in March 2023.

Due to a bug in an open-source Redis client library, some users could see the titles of other users' conversation histories in their sidebar, and in some cases even the last four digits and expiration dates of other users' credit cards.
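OpenAI attributed that leak to the Redis client library it used: a request canceled at just the wrong moment left its unread reply sitting on a pooled connection, and the next request to reuse that connection received the previous user's data. The toy simulation below reproduces that general class of bug with a fake connection pool; it is not redis-py's or OpenAI's actual code.

```python
# Toy simulation of the "stale reply on a pooled connection" class of bug.
# This is not redis-py's or OpenAI's actual code; it only illustrates how a
# canceled request can hand one user's cached data to the next user.
from collections import deque


class FakeConnection:
    """A connection whose replies queue up until someone reads them."""

    def __init__(self):
        self.replies = deque()

    def send(self, command):
        # The "server" answers immediately; the reply waits to be read.
        self.replies.append(f"reply to: {command}")

    def read(self):
        return self.replies.popleft()


pool = [FakeConnection()]  # a pool with a single shared connection


def buggy_request(command, cancel_before_read=False):
    conn = pool.pop()            # borrow a connection from the pool
    conn.send(command)
    try:
        if cancel_before_read:   # the client gives up before reading its reply
            return None
        return conn.read()
    finally:
        pool.append(conn)        # bug: returned to the pool without being drained


# User A's request is canceled mid-flight; their reply stays on the connection.
buggy_request("GET user_a:conversation_titles", cancel_before_read=True)

# User B reuses the same connection and reads User A's leftover reply.
print(buggy_request("GET user_b:conversation_titles"))
# -> "reply to: GET user_a:conversation_titles"
```

The general remedy is to treat an interrupted request as poisoning the connection: close and discard it rather than return it to the pool.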

Although the fault there lay mostly with the underlying infrastructure, it served as a wake-up call for everyone immersed in the AI dream. When AI agents start to intervene deeply in your workflow and handle your finances, schedules, and private communications, the "small bugs" once dismissed as isolated cases become fatal single points of failure under the amplifying effect of AI automation.

Is it the fault of Vibe Coding?

Moltbook's security incident is no accident. It is arguably the inevitable result of the AI field's current "Vibe Coding" culture and its pursuit of speed at all costs.

So-called "Vibe Coding" refers to a development style in which developers lean on AI tools to generate code quickly, chase working features, and neglect the underlying architecture and security review.

Moltbook itself is a product of AI "Vibe Coding": a social platform meant to let AI agents communicate and interact autonomously. Its rapid rise played straight into people's sci-fi imagination of AI "awakening" and "socializing."

However, speed has masked systemic risk.

The platform's founder admitted that, before the project's explosive growth, nobody had thought to check the database's security. The "launch first, fix later" mindset of internet startups becomes exponentially more dangerous when the product is an AI agent capable of acting on its own.

What an attacker controls is no longer a static account but a "digital life" that can actively interact with other AIs, perform tasks, and even commit fraud.

The deeper context is that the AI agent space is booming: from OpenAI's o1 to the products of countless startups, everyone is exploring how to let AIs complete tasks more autonomously.

Moltbook tries to be the "social layer" and the "behavioral observatory" for these agents. The collapse of its security foundations, however, is a fresh reminder to everyone in the space: have we put "behavioral guidelines" and "security fences" in place before handing AIs the "ability to act"?

The "Oppenheimer Moment" of AI security

The Moltbook incident is a microcosm: AI development is moving from a straightforward contest of model capability into the deep waters of complex-system security and governance.

In the past, discussions of AI safety centered on model bias, hallucination, or misuse. Now that AIs are becoming acting entities that can interact with the world, and be remotely hijacked, the security threats are concrete and urgent.

The incident has exposed a mentality common across the industry: in the chase for "cool" AI application scenarios, basic security engineering has been badly underestimated.

AI researcher Mark Riedl put it this way: "The AI community is relearning the past 20 years of cybersecurity lessons, and in the most difficult way."

It is foreseeable that as AI agents proliferate, similar security incidents will only become more common.

Regulators, investors, and corporate customers will start to scrutinize the secure development lifecycle of AI products. That may slow the birth of some "internet-famous" applications, but it will also open up new markets for AI security auditing and agent behavior monitoring.

Perhaps, as AIs learn to socialize, the first thing humans need to learn is how to draw secure boundaries for them, both to protect the AIs themselves and to protect the users standing behind the agents.

This article is from the WeChat official account "GeekPark" (ID: geekpark). Author: Hualin Dancing King, Editor: Jingyu. Republished by 36Kr with permission.