
Lobster's biggest pain point has been fixed by an official plugin. It never forgets conversations and can connect to the most powerful GPT and Gemini models.

Quantum Bit, 2026-03-09 17:53
The Agent's context strategy can now be swapped out.

Report! Lobster has been updated!

Just now, a new OpenClaw beta (2026.3.7) has been launched, and support for two of the latest models, GPT-5.4 and Gemini Flash 3.1, has been quickly rolled out.

Meanwhile, the following features have also been updated:

  • ACP bindings are retained across restarts.
  • Streamlined multi-stage Docker build.
  • SecretRef for gateway authentication.
  • Pluggable context engine.
  • Support for the HEIF image format.
  • Fixes for Zalo channel issues.

Among them, the pluggable context engine is the highlight of this update. Many netizens have commented:

Compared to running a particular model, context is the key.

You may even notice that the official changelog specifically mentions an example plugin, lossless-claw.

So, what exactly is this context plugin? What's its use?

Plugin-based context management

Overall, this OpenClaw update can be summarized into three aspects: plugin-based context management, upgraded Agent routing capabilities (channels, topics, independent sessions), and deployment and plugin engineering (Docker multi-stage builds, SecretRef, security policies).

Among them, the most notable one is plugin-based context management.

According to the official changelog, a new ContextEngine plugin slot has been added in this update.

This interface provides a complete set of lifecycle hooks, including: bootstrap, ingest, assemble, compact, afterTurn, prepareSubagentSpawn, onSubagentEnded.

This means that plugins can intervene at various stages of context generation, compression, and assembly, as well as sub-Agent lifecycle management, thus implementing completely different context strategies.
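To make the hook list concrete, here is a minimal sketch of what a ContextEngine plugin might look like. Only the hook names come from the changelog; the method signatures, message format, and the sliding-window strategy are assumptions for illustration, not OpenClaw's real API.

```python
# Hypothetical sketch of a ContextEngine plugin. Hook names are from the
# changelog; everything else (signatures, message shape) is assumed.

class SlidingWindowContextEngine:
    """A deliberately simple strategy: keep only the most recent messages."""

    def __init__(self, max_messages: int = 50):
        self.max_messages = max_messages
        self.messages: list[dict] = []

    def bootstrap(self, session_id: str) -> None:
        # Called once when a session starts.
        self.messages.clear()

    def ingest(self, message: dict) -> None:
        # Called for every message that enters the conversation.
        self.messages.append(message)

    def compact(self) -> None:
        # Called when the context grows too large; here we just truncate.
        if len(self.messages) > self.max_messages:
            self.messages = self.messages[-self.max_messages:]

    def assemble(self) -> list[dict]:
        # Called before each model call to build the prompt context.
        self.compact()
        return list(self.messages)

    def after_turn(self) -> None:  # afterTurn
        pass  # e.g. persist state between turns

    def prepare_subagent_spawn(self) -> list[dict]:  # prepareSubagentSpawn
        # A sub-Agent could inherit a copy of the parent context.
        return self.assemble()

    def on_subagent_ended(self, result: dict) -> None:  # onSubagentEnded
        # Fold the sub-Agent's result back into the parent context.
        self.ingest({"role": "subagent", "content": result})
```

A lossless-claw-style plugin would implement the same hooks but summarize instead of truncating in `compact`.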

To sum it up in the words of PR author Josh Lehman:

You don't actually need an Agent memory system. What you need is a context that won't be reset.

In the past, in OpenClaw, the context management logic was hard-coded.

For example, how to compress history when the conversation is too long, how to splice the context, and when to discard old information were all fixed within the system, and plugins could hardly intervene.

However, after this update, the situation has changed.

To put it simply, plugin-based context management means:

  • Different plugins can implement different context management strategies.
  • The core system no longer hard-codes the context compression logic.
  • Plugins can control context compression, context assembly, and the sub-Agent lifecycle.
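As an illustration of "different plugins, different strategies", the sketch below swaps two hypothetical compaction strategies into the same core loop. None of these names come from OpenClaw itself; this only shows the shape of the separation.

```python
# Illustration only (not the real OpenClaw API): the core loop stays fixed
# while the "compact" strategy is pluggable.
from typing import Callable

Messages = list[str]
CompactStrategy = Callable[[Messages, int], Messages]

def truncate(messages: Messages, budget: int) -> Messages:
    # The old hard-coded behaviour: drop everything beyond the budget.
    return messages[-budget:]

def summarize_then_keep_tail(messages: Messages, budget: int) -> Messages:
    # A lossless-claw-style strategy: fold the older head into a single
    # summary line (a real plugin would call an LLM here) and keep the tail.
    if len(messages) <= budget:
        return messages
    head, tail = messages[:-(budget - 1)], messages[-(budget - 1):]
    summary = f"[summary of {len(head)} earlier messages]"
    return [summary] + tail

def run_turn(history: Messages, compact: CompactStrategy,
             budget: int = 4) -> Messages:
    # The core system just calls whatever strategy was plugged in.
    return compact(history, budget)
```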

The most intuitive experience is: when you have a long-term, multi-round conversation with "Lobster", its "memory" will be significantly improved.

Previously, common issues in Lobster conversations were:

  • After the conversation got longer, the system started compressing the context, e.g. no longer writing as instructed and only listing bullet points.
  • The Agent gradually forgot earlier plans.
  • It even forgot which files it had modified.
  • As the task progressed, the model suddenly seemed to get "dumb".

The new context plugin mechanism is precisely designed to solve these problems.

With the opening of customized context strategies, more memory solutions for different scenarios will emerge in the future, making "Lobster" more efficient and cost-effective when performing different tasks.

lossless-claw: A "context-lossless" solution

For example, the officially recommended plugin lossless-claw demonstrates a new approach to context management.

In traditional Agent systems, once the conversation grows too long, the system usually just discards old content.

In lossless-claw, old conversations are not deleted. Instead:

  • Old messages are persisted in a SQLite database, organized by conversation.
  • Summaries of old message blocks are generated using the configured LLM.
  • Summaries are further compressed into higher-level nodes, forming a DAG (directed acyclic graph).
  • In each round of conversation, the summaries plus the latest original messages are assembled into the context.
  • Tools (lcm_grep, lcm_describe, lcm_expand) allow the Agent to search and trace back through history.

That is to say, in multi-round conversations the original messages are fully retained, and the system maintains the links between summaries and the originals; the Agent can expand a summary to view the original text at any time.
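The persist-everything-plus-summary-DAG idea can be sketched in a few lines. The schema, table names, and the stand-in summarizer below are assumptions for illustration, not lossless-claw's actual implementation.

```python
# Toy sketch of the lossless-claw idea: persist all messages, fold old
# blocks into summary nodes, keep edges so summaries remain expandable.
# Schema and function names are assumed, not taken from the real plugin.
import sqlite3

def summarize(texts: list[str]) -> str:
    # Stand-in for the configured LLM; a real plugin would call a model.
    return f"summary({len(texts)} items)"

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    id INTEGER PRIMARY KEY, conversation TEXT, content TEXT)""")
conn.execute("""CREATE TABLE nodes (
    id INTEGER PRIMARY KEY, summary TEXT)""")
conn.execute("""CREATE TABLE edges (  -- DAG: summary node -> original msg
    parent INTEGER, child_msg INTEGER)""")

def ingest(conversation: str, content: str) -> int:
    cur = conn.execute(
        "INSERT INTO messages (conversation, content) VALUES (?, ?)",
        (conversation, content))
    return cur.lastrowid

def compact_block(msg_ids: list[int]) -> int:
    # Fold a block of old messages into one summary node;
    # the originals stay in the database and remain reachable.
    placeholders = ",".join("?" * len(msg_ids))
    rows = conn.execute(
        f"SELECT content FROM messages WHERE id IN ({placeholders})",
        msg_ids).fetchall()
    cur = conn.execute("INSERT INTO nodes (summary) VALUES (?)",
                       (summarize([r[0] for r in rows]),))
    node = cur.lastrowid
    conn.executemany("INSERT INTO edges VALUES (?, ?)",
                     [(node, m) for m in msg_ids])
    return node

def expand(node: int) -> list[str]:
    # lcm_expand-style lookup: recover the original text behind a summary.
    return [r[0] for r in conn.execute(
        """SELECT m.content FROM edges e
           JOIN messages m ON m.id = e.child_msg
           WHERE e.parent = ?""", (node,))]
```

Because the edges are kept, compaction is reversible: the prompt carries the summary, and the Agent can call `expand` whenever it needs the underlying text.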

Theoretically, the context will "never be lost".

(By the way, according to the author, this idea comes from "LCM: Lossless Context Management". Interested readers can look it up for more details.)

In the OOLONG benchmark test, when using the same model, lossless - claw scored 74.8, beating Claude Code's 70.3.

More importantly, the longer the context, the greater the gap. At all tested context lengths, lossless - claw scored higher than Claude Code.

PR author Josh Lehman said that he has been running lossless-claw on OpenClaw for a week:

Saying it performs well is an understatement.

Other updates

In addition to the context plugin, there are two other key points in this Lobster update:

Firstly, the Agent routing system has been strengthened.

For platforms such as Discord, Telegram, Slack, and Mattermost, OpenClaw has introduced a persistent thread binding mechanism. Even if the system restarts, the binding relationship between the Agent and channels or topics can still be maintained.

It also supports routing Agents by topic. For example, in Telegram, each topic can run an independent Agent, so multiple Agents performing different tasks can coexist in the same forum group.

Secondly, there are optimizations at the deployment and engineering level.

For example, the team is preparing a release on the iOS App Store, and a mobile version is also in the works.

Meanwhile, the Docker build has been streamlined. A new bookworm-slim variant has been added. Using a slim image reduces unnecessary dependencies, making the container smaller, faster to start, and more suitable for large-scale deployment in server environments.
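A multi-stage build of the kind the changelog describes typically looks like the following. The base images, paths, and commands here are generic illustrations, not OpenClaw's actual Dockerfile.

```dockerfile
# Stage 1: build with the full toolchain (names/paths are illustrative)
FROM node:22-bookworm AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only the runtime artifacts on a slim base image
FROM node:22-bookworm-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```

The point of the split is that compilers and build caches never reach the final image; only the second stage is shipped.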

Reference links

[1]https://github.com/openclaw/openclaw/releases/tag/v2026.3.7

[2]https://x.com/steipete/status/2030508141419372667

[3]https://x.com/jlehman_

[4]https://github.com/Martian-Engineering/lossless-claw

This article is from the WeChat official account "Quantum Bit" (量子位), which focuses on cutting-edge technology. Republished by 36Kr with authorization.