
Just now, OpenClaw has released its most powerful update ever! AI memory can be freely plugged and unplugged. Developers have been waiting for this for half a year.

Xinzhiyuan, 2026-03-09 11:38
OpenClaw released v2026.3.7-beta.1, adding the ContextEngine plugin interface and fixing over 200 bugs.

Reported by Xinzhiyuan

Editor: Dinghui

[Xinzhiyuan Introduction] OpenClaw has launched v2026.3.7-beta.1, its most intensive update in history: 89 commits and over 200 bug fixes. The core highlight is the brand-new ContextEngine plugin interface: context management is finally "plug-and-play", and strategies can be swapped without modifying the core code. This update is worth a careful look for everyone working on AI Agents.

After the "lobster fever", the popularity of OpenClaw has skyrocketed in China.

Just now, OpenClaw officially released the brand-new v2026.3.7-beta.1, and its founder, Peter Steinberger, personally made a high-profile announcement on X.

89 code commits, over 200 bug fixes, a brand-new ContextEngine plugin interface, and first-time support for both GPT-5.4 and Gemini 3.1 Flash: this is a very solid update.

Incidentally, not only has OpenClaw's star count soared past 30,000, but its issue and PR counts are also among the leaders.

OpenClaw can be regarded as a milestone in open-source development and a collective masterpiece by geeks around the world. What other project on Earth can unite so many enthusiastic geeks?

Without further ado, let's first see what's new in this update.

The Most Intensive Update in OpenClaw's History

Highlight One: Dual-Engine Launch of GPT-5.4 + Gemini 3.1

The new version is fully compatible with OpenAI's latest GPT-5.4 and Google's Gemini 3.1 Flash.

In terms of model switching, OpenClaw has also optimized its model downgrade and retry mechanism: when a model is rate-limited or overloaded, the system automatically switches to an alternative model instead of throwing an error and leaving users waiting.

What does this mean? Think of OpenClaw as a "model router": the front end connects to your favorite chat tools, and the back end can flexibly mount large models such as Claude, GPT, Gemini, and DeepSeek. Use whichever works best, and switch to a cheaper one when that makes sense. This architectural flexibility is something single-vendor AI assistants cannot match.
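The fallback-and-retry behavior described above can be sketched in a few lines. This is a hypothetical illustration, not OpenClaw's actual code: the `ModelClient` interface, `RateLimitError` class, and `completeWithFallback` function are all assumed names.

```typescript
// Hypothetical sketch of the "model router" idea: try models in order of
// preference and fall back when one is rate-limited or overloaded.
// All names here are illustrative, not OpenClaw's real API.

class RateLimitError extends Error {}

interface ModelClient {
  name: string;
  complete(prompt: string): Promise<string>;
}

async function completeWithFallback(
  models: ModelClient[],
  prompt: string,
): Promise<{ model: string; text: string }> {
  let lastError: unknown;
  for (const model of models) {
    try {
      // First model that answers wins; callers never see transient overloads.
      return { model: model.name, text: await model.complete(prompt) };
    } catch (err) {
      if (err instanceof RateLimitError) {
        lastError = err; // transient: try the next model in the list
        continue;
      }
      throw err; // real errors still surface immediately
    }
  }
  throw lastError ?? new Error("no models configured");
}
```

The key design choice in this sketch is that only transient overload errors trigger a fallback; genuine failures still surface to the caller immediately.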

Highlight Two: ContextEngine, the Interface Developers Have Been Waiting For

The most technically impressive highlight of this update is the newly launched ContextEngine plugin interface.

Anyone who has built AI applications knows that context management is one of the most headache-inducing problems in agent development.

As the number of dialogue rounds increases, the number of tokens explodes; once the information is compressed, key details are lost.

OpenClaw now exposes a complete set of lifecycle hooks: bootstrap (initialization), ingest (injection), assemble (assembly), compact (compression), afterTurn (post-round processing), and even prepareSubagentSpawn (before a sub-agent spawns) and onSubagentEnded (after a sub-agent ends).

In plain English: developers can now fully customize the context-processing logic without modifying OpenClaw's core code. Want to use RAG? Sure. Want aggressive compression? Go ahead. Want isolated memory spaces for different subtasks? The interfaces are all ready for you.
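Based on the hook names listed above, a ContextEngine plugin could plausibly look like the sketch below. The types and signatures are assumptions for illustration; OpenClaw's real interface may differ.

```typescript
// Sketch of a plugin interface built from the hook names in the release
// notes. Types and signatures are assumptions, not OpenClaw's actual API.

interface Message { role: "user" | "assistant" | "system"; content: string }

interface ContextEnginePlugin {
  bootstrap?(sessionId: string): void;                     // initialization
  ingest?(msg: Message): Message | null;                   // inject/filter input
  assemble?(history: Message[]): Message[];                // build prompt window
  compact?(history: Message[], budget: number): Message[]; // compression
  afterTurn?(history: Message[]): void;                    // post-round processing
  prepareSubagentSpawn?(parentId: string): void;           // before sub-agent spawn
  onSubagentEnded?(subagentId: string): void;              // after sub-agent ends
}

// Example: a naive compaction plugin that keeps the system prompt plus the
// most recent messages within a rough budget (counted in messages, not tokens).
const slidingWindow: ContextEnginePlugin = {
  compact(history, budget) {
    const system = history.filter((m) => m.role === "system");
    const rest = history.filter((m) => m.role !== "system");
    const keep = Math.max(budget - system.length, 0);
    return [...system, ...(keep > 0 ? rest.slice(-keep) : [])];
  },
};
```

Because each hook is optional, a plugin only implements the stages it cares about; everything else falls through to the host's defaults.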

This matters greatly for the entire community ecosystem: it transforms OpenClaw from a tool into a platform.

Highlight Three: In-Depth Integration of Discord + Telegram

Multi - channel support has always been a core selling point of OpenClaw, and this update has made significant upgrades on the two most active community platforms:

On the Discord side, the freeze that left the system unable to recover after a disconnection has been fixed, and channel parsing and bot heartbeat detection have been optimized. On the Telegram side, topic-level agent routing isolation has been added, which means you can run different AI agents in different topics within the same Telegram group without interference.

Meanwhile, persistent channel binding has been added to both platforms. Previously, channel bindings could be lost after restarting OpenClaw; now this state is stored persistently and restored automatically after a restart.
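The persistent-binding idea can be sketched as a small JSON-backed store. The file format and the function names (`bindTopic`, `routeMessage`) are hypothetical, purely to illustrate the restart-survival behavior described above.

```typescript
// Illustrative sketch of persistent topic-to-agent bindings, mirroring the
// Telegram feature described above. The storage format and names are
// assumptions for illustration, not OpenClaw's real persistence layer.
import * as fs from "node:fs";

type Bindings = Record<string, string>; // topicId -> agentId

function loadBindings(path: string): Bindings {
  try {
    return JSON.parse(fs.readFileSync(path, "utf8"));
  } catch {
    return {}; // first run (or unreadable file): start with no bindings
  }
}

function bindTopic(path: string, topicId: string, agentId: string): void {
  const bindings = loadBindings(path);
  bindings[topicId] = agentId;
  fs.writeFileSync(path, JSON.stringify(bindings)); // survives restarts
}

function routeMessage(path: string, topicId: string, fallbackAgent: string): string {
  // Unbound topics fall back to a default agent instead of failing.
  return loadBindings(path)[topicId] ?? fallbackAgent;
}
```

Because the bindings live on disk rather than in memory, a process restart simply re-reads the file and the routing table comes back intact.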

Highlight Four: Over 200 Bug Fixes, Almost a Complete Overhaul

The list of bug fixes this time is astonishingly long. Roughly classified as follows:

At the channel level, it covers issues such as repeated Telegram draft flows, Discord disconnection and freezing, Slack message routing, Feishu Webhook compatibility, WhatsApp self-chat prefix injection, and various edge cases on iOS/macOS.

At the core agent level, issues such as parameter parsing for tool calls (including parameter decoding for xAI), loss of truncation prompts during context compression, and compatibility with OpenAI's streaming output have been fixed.

At the gateway and memory level, problems such as consecutive token disconnections, deduplication in QMD memory retrieval, and SQLite lock conflicts have been solved.

At the security level, it includes security upgrades for dependent libraries (Hono, tar, etc.), prevention of sandbox escape, and whitelist authentication for system command execution.

In addition, the new version introduces Spanish interface support, upgrades the web search function to a more powerful SearchAPI, and optimizes image size and startup speed through multi-stage Docker builds.

What Do 89 Commits Mean?

Let's give a brief introduction for those who are not familiar: What is OpenClaw?

Simply put, it is one of the most influential AI Agent frameworks in the open-source community at present. It can be understood as the underlying platform that allows AI to work for you.

You can deploy it on your own server, connect it to various large language models, and then integrate it with channels such as Slack, Discord, Telegram, and Feishu, allowing AI Agents to handle user messages, execute tasks, manage context, and even coordinate multiple agents to work together for you.

Its open-source nature is its greatest advantage: you pay no vendor fees, keep your data in your own hands, and decide for yourself which models to connect.

In this 3.7 beta version, the 89 code commits are not just numbers; each one is substantial:

The most important one: The launch of the ContextEngine plugin interface.

For developers, context management is the most headache-inducing problem in AI Agent engineering. When the dialogue rounds pile up and token accumulation exceeds the model's window limit, you either truncate, compress, or find another workaround. Once these processing logics are hard-coded into the core, any future adjustment becomes a high-risk operation.

The ContextEngine interface in OpenClaw 3.7 solves this problem.

It gives developers full-lifecycle context-management capabilities and lets you insert custom compression strategies without modifying the underlying core logic. In the official wording, this is called "zero-obstacle access".

In plain English: in the future, swapping a context-processing algorithm will be as simple as swapping a plugin, without worrying that a single change will bring down the whole system.
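The "swap a plugin, not the core" idea boils down to hiding interchangeable strategies behind one interface. The sketch below is a minimal illustration with assumed names (`ContextStrategy`, `buildPrompt`), not OpenClaw's API:

```typescript
// Two interchangeable context strategies behind one function type.
// Names are illustrative; the point is that the caller never changes.
type ContextStrategy = (history: string[], limit: number) => string[];

// Strategy 1: hard truncation, which drops the oldest entries outright.
const truncate: ContextStrategy = (history, limit) => history.slice(-limit);

// Strategy 2: naive compression, collapsing the overflow into one summary line.
const compress: ContextStrategy = (history, limit) => {
  if (history.length <= limit) return history;
  const overflow = history.slice(0, history.length - (limit - 1));
  const summary = `[summary of ${overflow.length} earlier messages]`;
  return [summary, ...history.slice(-(limit - 1))];
};

// The "core" only knows the ContextStrategy interface; changing behavior
// means passing a different strategy, not editing this function.
function buildPrompt(history: string[], limit: number, strategy: ContextStrategy): string {
  return strategy(history, limit).join("\n");
}
```

Replacing `truncate` with `compress` (or a RAG-backed strategy) touches only the call site, which is exactly the failure-isolation property the plugin interface is meant to provide.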

The reaction from the developer community was immediate: under the project's issues, someone commented that they had been waiting for this interface for almost half a year, and the likes shot past a hundred within seconds.

Over 200 Bug Fixes, That's the Real Deal

The density and quality of bug fixes are a litmus test for whether an open - source project can operate in the long term.

In this 3.7 beta version, the official update log lists over 200 specific bug - fix records in the "Fixes" section, covering almost all core modules of the platform.

When the capabilities of large language models are already strong enough, the threshold for actually using them determines who can take the lead. OpenClaw solves this very problem.

The "lobster" craze has not only boosted a particular product's popularity but also drawn attention to the entire self-hosted AI Agent track.

The Founder's Personality

A Slightly Arrogant Open-Source Enthusiast

Peter Steinberger himself is worth a closer look.

He started as an iOS developer and became famous in the European independent developer circle with PSPDFKit (a PDF processing SDK). Later, he transitioned to AI and led a small team to develop OpenClaw, which has become a well - known project in the open - source AI Agent field.

His style on X (formerly Twitter) is distinctive: highly technical, direct, and occasionally sarcastic. When announcing the 3.7 beta release, he wrote: "I can't remember the last time a version had so many commits. But it was worth it." This kind of understated confidence carries its own weight in the open-source community.

More importantly, the team he leads spends almost no time on marketing, relying instead on product quality and community word of mouth to grow. That makes it something of an outlier in today's AI scene, where teams often spend half their effort building the product and the other half on PR.

A good underlying tool doesn't need to wait for recommendations from big companies. Users in real - world scenarios will spread the word for you.

What Can We Expect from the Next Version?

From the update direction of this 3.7 beta version, we can infer several points worthy of attention in the future:

Accelerated expansion of multi-language interfaces.

The addition of the Spanish interface this time indicates that OpenClaw is deliberately expanding into non - English markets. If full support for the Chinese interface is achieved, the attractiveness to Chinese users will increase exponentially.

Continuous expansion of model ecosystem integration.

The first-time adaptations for GPT-5.4 and Gemini 3.1 Flash are a signal that mainstream large-model labs have started to actively cooperate with OpenClaw on adaptation work. Once this ecosystem flywheel starts spinning, the barriers will only get higher.

Continuous improvement of enterprise-level stability.