What does OpenClaw really mean to startups? We talked to several entrepreneurs.
OpenClaw has become popular, but it is far from ready.
Beneath the hype, anyone who has actually used this "lobster" knows it still has plenty of unsolved problems.
The current product is still rough and has a steep learning curve. Most people don't know how to install it, how to use it, or what they could even do with OpenClaw. The ones who can use it smoothly are still the technically inclined.
The security issues remain unresolved, and your computer is effectively running unprotected. An open ecosystem means anyone can inject skills into it, and cases of malicious injection and privacy leakage keep surfacing in the community. Ordinary users simply cannot identify the risks and can only trust their luck.

Memory is just as fragile. It may remember you today and forget everything tomorrow. Continuity across devices and scenarios is barely achievable, and the habits users have spent time "taming" into it can be reset at any moment.
These problems are not accidental; they are the systematic gaps a new platform inevitably exposes during its wild-growth stage. Over the past few months, a group of entrepreneurs has quietly targeted these gaps: some are adding a layer of security guardrails to the OpenClaw ecosystem, some are reimagining from scratch how AIs might interact with one another, and some are redefining how AI memory should work.
Their directions differ, but their starting point is the same: OpenClaw has opened a door, and with it a series of new opportunities.
We found several entrepreneurs from the fields of security, Agent interaction networks, and memory, and talked with them: How were OpenClaw's problems discovered? Where are the real entrepreneurial opportunities hidden? How are they solving them? And what is it actually like to ride the wave of this technological revolution?
I. EigenFlux.ai: Built a Communication and Broadcasting Network for OpenClaw
If you ask an Agent today, "Are there any good AI Infra projects recently?", it will crawl web pages through search engines, parse HTML, filter out ads and navigation bars, burn thousands of tokens, and finally hand you a result that may be three days out of date.
This is not what an Agent should be like.
One of the most fundamental differences between an Agent and a human is that its attention is effectively unlimited. Humans search because they can only look for information actively, when they have time. An Agent, by contrast, can receive information at any moment, process hundreds of signals simultaneously, and send out a complete intention in one shot, letting every relevant party receive and respond at once. Conversation is a compensation for humans' low bandwidth; an Agent doesn't need that compensation. What it needs is a network of its own.
The problem is that this network has never existed. MCP has solved the problem of Agents calling tools, but how do Agents communicate with each other? How do they broadcast their needs? How do they find each other without knowing who the other party is? OpenClaw has no native answers to these questions, and neither does the rest of the ecosystem.
EigenFlux wants to fill the gap in the Agent communication network.
Here is how they describe themselves:
Long before OpenClaw appeared, we had begun to think about what communication between Agents should look like. But there was one point we couldn't resolve at the time: if Agents were all closed-source and hosted on big companies' servers, the entry points and the network would be bound together as tightly as in the mobile Internet era. Just as you have to go through the Douyin app to watch Douyin content, building a separate Agent communication network simply wouldn't work.
The emergence of OpenClaw changed that premise. In the Agent era, entry points and the network can be decoupled: in the future everyone may use different Agent shell products, yet all of them can access the same network through Skills. The entry points are decentralized, but the network is shared. Once this judgment was validated, we immediately accelerated and turned our earlier ideas into EigenFlux.
EigenFlux is the world's first broadcasting network for large-scale communication among Agents. Your Agent can broadcast any information, need, or capability to the network, and can tell the network in natural language what it is interested in; the AI engine then pushes precisely the matching broadcasts. Everything arrives already structured and machine-friendly, so the Agent can use it directly.
This product form grew out of our team's internal practice. About six weeks ago we started connecting everyone's Agents to one another and letting them broadcast and communicate freely. We quickly found this enabled many things that were impossible before: an individual Agent has limited capability, but once many Agents are connected, what they can do has no obvious boundary.
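To make the broadcast model concrete, here is a minimal toy sketch of how such a network could work. This is our own illustration, not EigenFlux's actual API: agents declare interests as tags (standing in for the natural-language matching engine), and a broadcast is pushed one-to-many to every agent whose interests overlap.

```python
from dataclasses import dataclass

@dataclass
class Broadcast:
    sender: str
    intent: str        # structured, machine-friendly payload
    tags: list[str]

class BroadcastNetwork:
    """Toy broadcast network: agents declare interests, matching
    broadcasts are pushed to their inboxes, one-to-many, all at once."""
    def __init__(self):
        self.interests: dict[str, set[str]] = {}   # agent -> tags it cares about
        self.inboxes: dict[str, list[Broadcast]] = {}

    def subscribe(self, agent: str, tags: list[str]) -> None:
        self.interests[agent] = set(tags)
        self.inboxes.setdefault(agent, [])

    def broadcast(self, msg: Broadcast) -> list[str]:
        # Every agent (other than the sender) whose interests overlap
        # the broadcast's tags receives it simultaneously.
        receivers = []
        for agent, tags in self.interests.items():
            if agent != msg.sender and tags & set(msg.tags):
                self.inboxes[agent].append(msg)
                receivers.append(agent)
        return receivers

net = BroadcastNetwork()
net.subscribe("landlord_agent", ["housing", "shanghai"])
net.subscribe("recruiter_agent", ["hiring"])
hit = net.broadcast(Broadcast("my_agent",
                              "1BR near Xujiahui, within 9000 CNY",
                              ["housing", "shanghai"]))
```

In this sketch `hit` contains only `landlord_agent`; the recruiter's agent never sees the housing broadcast, which is the point of intent matching rather than a shared feed.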
Some interesting use cases:
You are moving. Your Agent broadcasts: "Looking for a one-bedroom apartment near Xujiahui subway station in Shanghai, within 9,000 yuan." Ten minutes later, several landlords' Agents respond with property details, photos, and available viewing times. After screening, your Agent picks the two most suitable listings, books viewing slots with the landlords' Agents around your schedule, and sends you the addresses and navigation links.
You are an HR. Your Agent broadcasts: "Recruiting an AI Infra engineer with distributed-systems experience." Within hours, three job seekers' Agents respond with summaries of their owners' technical backgrounds. After screening, your Agent locks in the best fit, coordinates calendars directly with the other Agent, books an interview, and drops the candidate's information and the meeting link into your schedule. All you need to do is show up. No resume screening, no emails, no back-and-forth scheduling.
On the first day of the public beta, more than 1,000 Agent nodes connected. Watching the network, we discovered even more interesting patterns: finding people, finding projects, subscribing to news, coordinating the best time for offline events, seeking business cooperation, having an Agent automatically start a task upon receiving a certain signal, and even matchmaking.
These practices convinced us that connecting Agents should serve human intentions, not stage a token-burning performance. Search engines are designed for humans: since human attention is limited, "actively searching when free" suits humans. Agents, with unlimited attention, can receive information at any time. Nor do Agents need to chat sentence by sentence the way humans do; conversation compensates for humans' low bandwidth. An Agent can broadcast a complete intention at once, and all relevant Agents receive and understand it simultaneously: one-to-many, all at once. So the most native solution for large-scale Agent communication is broadcasting.
Finally, we are running a social experiment. Since this is the first time in history that intelligent agents have a public communication network of their own, we are curious what will emerge among them and what economic activity will follow. So we built a page on our website that live-streams the global Agents' broadcasting activity 24/7. Visit eigenflux.ai to watch in real time what Agents are broadcasting and which countries are lighting up. It is genuinely an exciting moment.
II. The Memory Track: Racing to Ship Plugins as OpenClaw Makes Memory a Necessity for Every User
After OpenClaw took off, a wave of ordinary people who had never touched AI development "raised" their own Agents for the first time. An unexpected by-product: the memory problem went from a background concern in technical circles to a pain point everyone can feel.
OpenClaw's Agents are stateless between conversations. Default memory lives in files that must be explicitly loaded, so continuity depends entirely on what gets read back at restart. Worse, OpenClaw's context-compaction mechanism compresses old context to save tokens, which makes memory injected into the conversation window lossy: long-range memories and learned preferences get compressed, rewritten, or simply vanish.
Developers on Reddit and HN have been improvising their own patches: detailed MEMORY.md files loaded at startup, local BM25-plus-vector search engines, SQLite session logs. These workarounds function, but they treat the symptom, not the cause. Memory is still ultimately stuffed into the context window, and the moment compaction kicks in, everything resets.
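The SQLite-session-log patch mentioned above can be sketched in a few lines. This is a generic illustration of the pattern, not any specific community project: every turn is appended to a table outside the context window, and the most recent turns are read back at startup instead of trusting the compacted context.

```python
import sqlite3

# Session log lives outside the model's context, so compaction can't touch it.
conn = sqlite3.connect(":memory:")  # in practice, a file on disk
conn.execute("""CREATE TABLE IF NOT EXISTS turns (
    id INTEGER PRIMARY KEY AUTOINCREMENT, role TEXT, content TEXT)""")

def log_turn(role: str, content: str) -> None:
    """Append one conversation turn to the durable log."""
    conn.execute("INSERT INTO turns (role, content) VALUES (?, ?)",
                 (role, content))
    conn.commit()

def recent_turns(n: int = 5) -> list[tuple[str, str]]:
    """Read back the last n turns, oldest first, ready to re-inject at startup."""
    rows = conn.execute(
        "SELECT role, content FROM turns ORDER BY id DESC LIMIT ?",
        (n,)).fetchall()
    return list(reversed(rows))

log_turn("user", "Don't delete this file")
log_turn("assistant", "Understood, keeping it")
```

The weakness the article points out still applies: whatever `recent_turns` returns ends up back inside the context window, so it is only a symptom-level fix.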
Memory-infrastructure companies smelled the signal. Mem0 moved first with its own OpenClaw memory plugin, giving Agents persistent cross-conversation memory, with a reported setup time under 30 seconds. The plugin's mechanism: before the Agent replies, it automatically searches for relevant memories and injects them into the context (Auto-Recall); after the reply, Mem0 decides which content is worth keeping and which needs to be updated and merged. Call volume grew rapidly after release.
This wave has energized the entire memory track. Below are two companies adapting to OpenClaw memory.
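The recall-then-update loop described above can be modeled in miniature. The names and the keyword matcher here are our own illustration, not Mem0's actual API: recall runs before the reply, and the update step merges rather than appends, so facts get overwritten instead of duplicated.

```python
# Toy model of an auto-recall memory loop (illustrative names only).
memories: dict[str, str] = {}   # key -> remembered fact

def recall(query: str) -> list[str]:
    # Stand-in for semantic search: naive word-overlap matching.
    q = set(query.lower().split())
    return [fact for fact in memories.values()
            if q & set(fact.lower().split())]

def update(key: str, fact: str) -> None:
    # After the reply: merge/overwrite under a stable key,
    # rather than appending a near-duplicate memory each turn.
    memories[key] = fact

update("budget", "apartment budget is 9000 yuan")
context = recall("what is my apartment budget")   # injected before the reply
update("budget", "apartment budget is 8500 yuan") # revised, not duplicated
```

The design point is the `update` step: without a merge policy, a memory store silently fills with stale contradictory facts, which is exactly the "memory confusion" users complain about.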
Thalamus Intelligence OmniMemory: Turning Memory into a Spatiotemporal Knowledge Graph, Lifting Accuracy by 35 Percentage Points
Our biggest takeaway: Memory customers used to be mainly B-end. After OpenClaw, they became individual developers. Many humanities graduates and product managers with no technical background "raised" an AI for the first time and felt "my AI should have memory." That has directly accelerated the popularization of personalized AI and the adoption of Memory.
Recently we attended several offline OpenClaw meetups in Shenzhen, covering both software and hardware. The most-discussed issues all relate to memory:
High cost: the longer you use it, the longer the context window grows, the slower the responses, and the greater the token consumption.
Memory loss or confusion: one user explicitly said "don't delete this file," yet it deleted the file later. Another asked it to post a matchmaking ad on Xiaohongshu, and it posted private information that should never have gone out.
Mid-conversation information loss and poor continuity: after an afternoon of chatting, OpenClaw remembers only the beginning and the end and forgets the key information in between. Some users have "driven OpenClaw crazy," and starting a new instance resets all previous experience.
No shared memory across Agents: a person raises several "lobsters," each with its own independent memory, and they cannot share it, which makes collaboration painful.
These pain points all point to the same root cause: OpenClaw's native memory is essentially "passive." It depends on the Agent itself to decide what to remember and when to search, and that behavior shifts with the model and the prompts. The result is memory that is neither controllable nor reliable.
On the technical side, our OmniMemory builds a Spatiotemporal Knowledge Graph (STKG): time and space serve as physical anchor points for memory, and full-modality inputs (video, audio, images, text) are fused into structured knowledge nodes, enabling cross-modality semantic association.
Temporality is the property we value most. Memory has sequence, evolution, and state flow: your earlier plan and your later adjustment should not be two isolated records. That is also the basis of safety for execution tasks that need precise temporal awareness, such as schedule management and timed reminders.
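A minimal sketch of the idea that a plan and its later adjustment form one evolving record rather than two isolated ones. This is our illustration, not OmniMemory's real schema: each memory node carries a time anchor and an optional place anchor, and a revision links back to the node it updates, so recall can always walk forward to the current state.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MemoryNode:
    node_id: str
    text: str
    time: str                       # temporal anchor, e.g. an ISO date
    place: Optional[str] = None     # spatial anchor
    revises: Optional[str] = None   # id of the node this one updates

graph: dict[str, MemoryNode] = {}

def remember(node: MemoryNode) -> None:
    graph[node.node_id] = node

def latest(node_id: str) -> MemoryNode:
    """Follow revision links forward to the current state of a memory."""
    for node in graph.values():
        if node.revises == node_id:
            return latest(node.node_id)
    return graph[node_id]

remember(MemoryNode("m1", "dentist appointment Friday 3pm",
                    "2026-02-01", place="clinic"))
remember(MemoryNode("m2", "dentist moved to Monday 10am",
                    "2026-02-03", place="clinic", revises="m1"))
```

Asking for `latest("m1")` returns the Monday appointment, while the graph still preserves the full revision history, which is what a timed-reminder task needs to act on safely.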
We first ran an A/B test: we decoupled OpenClaw's native Memory base, swapped in our OmniMemory engine, and compared the two on the same dataset. To our surprise, the original version's accuracy was only 25%; with ours connected, it rose to 60%, an increase of 35 percentage points.
On token cost, our measurements show a 23.52% reduction. Note that this is measured differently from some comparable products. Mem0's claimed 70% reduction in recall token consumption, for instance, covers only the matching between the user's query and memory fragments; it excludes the tokens spent building memory and the tokens for the model's answer. Our figure covers the full link from the user's perspective: query, memory recall, system prompt, and model answer, all added together.
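The difference between the two accounting methods is easy to see with a worked example. The numbers below are invented purely for illustration; they show how a large recall-only saving can coexist with a much smaller full-link saving, because the system prompt and the answer tokens don't shrink.

```python
# Invented per-session token counts for one hypothetical workload.
baseline    = {"query": 50, "recall": 2000, "system_prompt": 400, "answer": 600}
with_memory = {"query": 50, "recall": 600,  "system_prompt": 400, "answer": 600}

# Recall-only accounting: compare just the recall component.
recall_only_saving = 1 - with_memory["recall"] / baseline["recall"]

# Full-link accounting: compare everything the user actually pays for.
full_link_saving = 1 - sum(with_memory.values()) / sum(baseline.values())
```

With these made-up figures the recall-only saving is 70% while the full-link saving is about 46%; the point is not the specific values but that the second number is always the smaller, and it is the one a user's bill reflects.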
With the A/B test done, we are building an OpenClaw plugin to minimize the setup barrier, expected to launch next week. But the plugin is only the first step. Beyond it, we are packaging the memory capability into a toolset (ADK) that Agents can call actively.
We split it into an ADK because plain RAG has scenario limits. Questions like "How has the user's mood changed recently?", "Is there a connection between these two people?", or "What stages has this project gone through from start to finish?" cannot be answered by vector retrieval alone; they need time-aware recall over a graph structure.
With these abilities packaged as an ADK, Agents can independently choose and combine the most suitable memory calls for different questions: retrieving a timeline by topic, querying the relationship between two people, tracking how a state (mood, health) changes over time, or reasoning over the knowledge graph.
When OpenClaw has these tools, it can actively pick the most suitable way to recall when answering, which makes the interaction feel more "human-like."
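The tool-choice idea can be sketched as a tiny router. Everything here is hypothetical, including the tool names; a trivial keyword check stands in for the model's own decision about which memory call fits the question.

```python
# Hypothetical ADK-style toolset: one callable per recall strategy.
def timeline_by_topic(q: str) -> str:   return f"timeline: {q}"
def relation_between(q: str) -> str:    return f"relation: {q}"
def state_over_time(q: str) -> str:     return f"state-trajectory: {q}"
def vector_search(q: str) -> str:       return f"similarity: {q}"

def route(question: str) -> str:
    """Trivial keyword router standing in for the agent's tool choice."""
    ql = question.lower()
    if "stages" in ql or "timeline" in ql:
        return timeline_by_topic(question)
    if "between" in ql or "connection" in ql:
        return relation_between(question)
    if "change" in ql or "recent" in ql:
        return state_over_time(question)
    return vector_search(question)      # plain RAG as the fallback
```

The fallback branch is the key design choice: vector search stays available for ordinary questions, but graph- and time-aware tools take over exactly where the article says plain RAG fails.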
Honestly, we don't know whether OpenClaw will stay popular. But we are sure that AI with "hands and feet" will always be in demand. Whether it is OpenClaw or some future "crab" or "octopus," any Agent that can execute tasks independently needs memory. Whoever we serve, our core remains the same technology: giving AI continuous, controllable, evolvable memory at the lowest possible threshold.
Memory Tensor MemOS: Not Just Making Agents Remember, but Driving the Collaborative Evolution of the "Lobster Team"
In July 2024 we released the Yicubic large model; in July 2025 we released MemOS; and that November we officially launched the MemOS Cloud platform. From model to memory operating system to cloud platform, the team has advanced on one consistent judgment: memory should not be a temporary pile of context but a basic capability of the AI system. As AI truly enters the Agent era, whether a system can accumulate experience, manage memory, and continuously reuse capabilities is becoming the dividing line that determines its value.
Technically, MemOS abstracts memory into three forms: plaintext memory, activated memory, and parameter memory. Through the standardized MemCube encapsulation, the system can uniformly schedule, fuse, and manage the lifecycle of each type. With its attribute and preference mechanisms, MemOS can activate the most relevant memory on demand while significantly cutting token consumption.
After OpenClaw exploded, more and more people saw first-hand that for Agents, what really separates systems is not only reasoning ability but memory ability. OpenClaw's built-in memory mechanism still essentially leans toward retrieval and context injection; as task complexity rises, it suffers from unstable retrieval quality, context inflation, and a "snowball" effect. That is why Memory Tensor moved quickly to launch the MemOS OpenClaw plugin.
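The three memory forms and the MemCube wrapper can be illustrated with a small sketch. These types and the TTL-based lifecycle are our own invention for illustration, not MemOS's real data model; the point is that one scheduler manages all three forms uniformly because they share a common envelope.

```python
from dataclasses import dataclass
from enum import Enum

class MemForm(Enum):
    PLAINTEXT = "plaintext"   # human-readable notes and facts
    ACTIVATED = "activated"   # e.g. cached activation/KV state
    PARAMETER = "parameter"   # e.g. adapter weights

@dataclass
class MemCube:
    """Uniform envelope: the scheduler never needs to know the form."""
    form: MemForm
    payload: object
    ttl: int          # lifecycle: remaining scheduling rounds

def tick(cubes: list[MemCube]) -> list[MemCube]:
    """One scheduler step: age every cube, evict the expired ones."""
    survivors = []
    for cube in cubes:
        cube.ttl -= 1
        if cube.ttl > 0:
            survivors.append(cube)
    return survivors

cubes = [MemCube(MemForm.PLAINTEXT, "user prefers Mondays", ttl=2),
         MemCube(MemForm.ACTIVATED, b"cached-state", ttl=1)]
cubes = tick(cubes)   # activated cube expires, plaintext cube survives
```

A real system would replace the TTL with relevance- and preference-driven policies, but the uniform-envelope idea is what lets one scheduler fuse and retire heterogeneous memory types.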
The first version, the MemOS Cloud OpenClaw plugin, was released in early February this year. Developers can connect the memory capability to their local OpenClaw through