Do LLMs Dream of AI Agents? No, They Have to Work Overtime Even in Their Sleep.

新智元 · 2025-09-16 15:45
AI organizes memories through "sleeptime compute," imitating the forgetting mechanism of the human brain.

[Introduction] The human brain filters memories during dreams. Now AI, too, has begun to learn to organize, store, and even forget during its "sleep." Bilt has deployed millions of intelligent agents, gradually bringing a question from science fiction, "Do Androids Dream of Electric Sheep?", to life. So, when AI can also choose to forget, will it become more human, or more alien?

Decades ago, Philip K. Dick posed a seemingly absurd yet thought-provoking question:

"Do Androids Dream of Electric Sheep?"

If machines can dream, will what they dream of be the remnants of human memories?

Half a century later, this question is being answered in another way.

Humans organize their memories during sleep, and today, some AIs have also begun to "learn" to do so.

AI Can Also "Dream": Memory Organization Experiments During Sleep

For humans, sleep is not only a time for rest but also a "background organization" process.

The brain automatically archives the day's experiences: remembering the important and forgetting the irrelevant.

Recently, Bilt deployed millions of intelligent agents and arranged "sleep time" for them.

During this period, the agents pause their conversations and activate the "sleeptime compute" mechanism, organizing memories the way the human brain does.

They evaluate past interactions and decide which content goes into the long-term memory bank and which is placed in the quick-access area.

Fast Company pointed out that this is not only an imitation of human dreams but also the beginning of proactive intelligence:

AI no longer responds passively but "rehearses the future" during sleep, showing higher efficiency and lower costs in reasoning tasks.

This seemingly science-fiction experiment makes "AI can dream" no longer just a literary metaphor but a reality that is happening.

The Huge Gap in Memory Mechanisms between the Human Brain and AI

The human brain's memory is like a sponge that constantly absorbs water.

This means humans not only remember events themselves but also weight information by emotion, unconsciously extracting key points and packaging feelings, scenes, and details into long-term experience.

However, the "brain" of AI is far less flexible.

Their "memory" depends on the context window - it can only call the input information.

Even GPT-4 Turbo's upper limit of 128k tokens is only equivalent to a few hundred pages of a book.

In contrast, the capacity of the human brain is estimated to be as high as 2.5PB (2.5 million GB).

Therefore, for an AI to remember what was said last time, that content has to be fed in again.

But the problem is that the length of the context window is limited.

If the amount of information is too large, the AI is prone to "overload": muddled logic, off-topic answers, and hallucinations.

In contrast, humans can not only firmly remember important information but also flexibly call it when needed.

As Charles Packer, the CEO of Letta, said:

The human brain evolves by constantly absorbing new information, but if a language model loops over the same context for too long, it gets contaminated and drifts further and further until it has to be reset.

This means that human "dreams" can make us more awake, while AI's "context" often makes it lost.

Bilt + Letta: "Sleeptime Compute" of Millions of Intelligent Agents

In Bilt's experiment, sleeptime compute is the core.

When millions of intelligent agents run simultaneously, most of them are actually idle.

These intelligent agents enter a "dormant" state in the background, pausing interactions with users and instead conducting a systematic review of past conversations and experiences.

They automatically distinguish two types of information:

The first type, long-term memory: information such as user preferences, historical records, and key events, which is stored firmly.

The second type, quick access: shorter-term, temporary information that can be called up at any time and quickly replaced. (A minimal sketch of this routing follows.)
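The routing decision can be pictured as a small filing step. Below is a minimal sketch in Python; the categories, names, and capacity rule are illustrative assumptions, not Bilt's or Letta's actual code.

```python
# Hypothetical routing of post-conversation observations into two stores:
# a durable long-term memory and a replaceable quick-access area.

LONG_TERM_KINDS = {"preference", "key_event", "historical_record"}

def route(observations, long_term, quick_access, quick_capacity=100):
    """File each observation by kind; quick-access items can be displaced."""
    for obs in observations:
        if obs["kind"] in LONG_TERM_KINDS:
            long_term.append(obs)        # stored firmly
        else:
            quick_access.append(obs)     # temporary and replaceable
            if len(quick_access) > quick_capacity:
                quick_access.pop(0)      # oldest short-term item drops out

long_term, quick = [], []
route(
    [{"kind": "preference", "text": "user prefers gray"},
     {"kind": "small_talk", "text": "user mentioned the weather"}],
    long_term, quick,
)
print(len(long_term), len(quick))  # 1 1
```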

In Letta's demonstration, user Chad's preferences were initially fragmented descriptions:

Chad likes blue, likes red more than blue, likes gray the most, and hates green.

After sleeptime compute, this information was organized into a clear preference table.

This is like compressing scattered conversation records into a long - term "memory archive."

Figure: The sleep agent organizes the user's input and ranks the user's color preferences.
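The consolidation step in that demo can be approximated in a few lines. The following sketch is a guess at the logic, not Letta's implementation: it reduces Chad's fragmented statements to pairwise comparisons and writes a single ranked string into a memory block.

```python
# Consolidate fragmented preference statements into one ranked entry.
# Observation parsing and win-counting are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class MemoryBlock:
    """A rewritable block of long-term memory."""
    label: str
    value: str = ""

def rank_preferences(pairs):
    """Count 'wins' per color from (preferred, less_preferred) pairs."""
    wins = {}
    for preferred, less_preferred in pairs:
        wins[preferred] = wins.get(preferred, 0) + 1
        wins.setdefault(less_preferred, 0)
    return sorted(wins, key=wins.get, reverse=True)

# "likes red more than blue, likes gray the most, hates green", as pairs:
pairs = [("red", "blue"), ("gray", "red"), ("gray", "blue"),
         ("blue", "green"), ("red", "green"), ("gray", "green")]

block = MemoryBlock(label="chad_color_preferences")
block.value = " > ".join(rank_preferences(pairs))
print(block.value)  # gray > red > blue > green
```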

More strikingly, these two memory areas do not exist in isolation.

Letta's technology supports single-point updates to an individual "memory block," after which the behavior of hundreds of thousands of intelligent agents changes accordingly.

This means that the "experiences" among AI individuals can be shared.

As Andrew Fitz, an engineer at Bilt, said:

"We can make a chain - reaction change in the performance of the entire system through a single update."

According to Letta, this mechanism uses a "main agent + sleep agent" architecture.

The former handles real-time interactions, while the latter continuously organizes memories in the background, writing information into shared memory blocks so that the AI has clearer, more stable cognition after "waking up."
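A rough picture of this division of labor, with names invented for illustration: main agents and the sleep agent hold references to the same memory blocks, so a single background write changes the context every main agent builds from then on.

```python
# "Main agent + sleep agent" over shared memory blocks (illustrative only).

class MemoryBlock:
    def __init__(self, label, value=""):
        self.label = label
        self.value = value

class MainAgent:
    """Handles live user interaction; reads shared memory for context."""
    def __init__(self, shared_blocks):
        self.blocks = shared_blocks

    def build_context(self, user_message):
        memory = "\n".join(f"[{b.label}] {b.value}" for b in self.blocks.values())
        return f"{memory}\n\nUser: {user_message}"

class SleepAgent:
    """Runs while main agents are idle; rewrites shared memory blocks."""
    def __init__(self, shared_blocks):
        self.blocks = shared_blocks

    def consolidate(self, label, new_value):
        self.blocks[label].value = new_value  # single-point update

# One shared block, three main agents: one sleep-agent write is visible
# in the context that every main agent builds afterward.
blocks = {"prefs": MemoryBlock("prefs", "raw conversation fragments...")}
fleet = [MainAgent(blocks) for _ in range(3)]
SleepAgent(blocks).consolidate("prefs", "color ranking: gray > red > blue > green")
print(fleet[0].build_context("Which color should the banner be?"))
```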

Fast Company commented on this:

This is not only an imitation of human dreams but also the prototype of "proactive intelligence."

AI no longer just responds passively but optimizes reasoning strategies in advance during sleep.

Experiments show that on mathematical and logical tests, models using sleeptime compute perform better, while reasoning time and cost drop significantly.

If human dreams are private, then AI's "sleep memory" is more like a large-scale synchronized drill.

From Forgetfulness to Memory: AI's Shortcomings and Breakthroughs

More and more users complain that AI is forgetful, answers off-topic, and even fabricates "memories."

Many researchers believe that memory defects are the root cause limiting AI's intelligence and stability.

Without stable and reliable memory, AI cannot form real personalization and long - term value.

That's why "improving memory" has become the direction the whole industry is collectively pushing on.

Harrison Chase, the CEO of LangChain, regards memory as the "core of context engineering."

He believes that the intelligence of AI largely depends on the information that developers choose to put into the context.

LangChain therefore provides different memory storage mechanisms, allowing flexible access to everything from long-term user profiles to recent interaction records.
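As a generic illustration of those two kinds of store (this is not LangChain's actual API; every name below is made up), a long-term profile and a rolling window of recent turns might be merged into the prompt like this:

```python
# Two memory stores merged into one prompt context (illustrative names).

from collections import deque

class UserProfileStore:
    """Long-term, slowly changing facts about the user."""
    def __init__(self):
        self._facts = {}

    def update(self, key, value):
        self._facts[key] = value

    def render(self):
        return "\n".join(f"- {k}: {v}" for k, v in self._facts.items())

class RecentTurnsStore:
    """Short-term window over the last N interaction turns."""
    def __init__(self, max_turns=5):
        self._turns = deque(maxlen=max_turns)

    def append(self, turn):
        self._turns.append(turn)

    def render(self):
        return "\n".join(self._turns)

def build_context(profile, recent, query):
    # What goes into the window is the developer's choice: the heart of
    # what Chase calls "context engineering".
    return (f"User profile:\n{profile.render()}\n\n"
            f"Recent turns:\n{recent.render()}\n\nQuery: {query}")

profile = UserProfileStore()
profile.update("favorite_color", "gray")
recent = RecentTurnsStore()
recent.append("User: remind me about tomorrow")
print(build_context(profile, recent, "What's on my schedule?"))
```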

OpenAI is also trying in this regard.

In February this year, it announced that ChatGPT would gain a memory function, gradually learning user preferences across multi-turn conversations, though the details have not been made public.

By contrast, Letta and LangChain make the process of memory recall completely transparent, so engineers can inspect and manage it.

Clem Delangue, the CEO of the AI hosting platform Hugging Face, also emphasized:

"Not only should the model be open, but the memory system must also be open."

MemGPT explores how to separate short-term from long-term memory so that the AI's context does not get "contaminated."

Letta later expanded this idea and applied it to "sleeptime compute" across large fleets of intelligent agents.
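A sketch of that short-term/long-term split, with assumed names rather than MemGPT's real interface: a small "core" tier that travels with every prompt, and an "archival" tier that absorbs evicted items instead of letting them be lost.

```python
# Tiered memory: bounded in-context core, unbounded archive (illustrative).

class TieredMemory:
    def __init__(self, core_capacity=4):
        self.core = []       # small; included in every prompt
        self.archive = []    # large; searched only on demand
        self.core_capacity = core_capacity

    def remember(self, item):
        self.core.append(item)
        while len(self.core) > self.core_capacity:
            # Evict the oldest core entry to the archive rather than
            # dropping it, keeping the live context small and clean.
            self.archive.append(self.core.pop(0))

    def recall(self, keyword):
        # Stand-in for real retrieval (embeddings etc.): substring match.
        return [m for m in self.archive if keyword.lower() in m.lower()]

mem = TieredMemory(core_capacity=2)
for note in ["user is named Chad", "user likes gray", "meeting moved to 3pm"]:
    mem.remember(note)
print(mem.core)            # ['user likes gray', 'meeting moved to 3pm']
print(mem.recall("chad"))  # ['user is named Chad']
```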

It can be said that "whoever solves the memory problem first will be closer to the next AI era."

Learning to Forget: A Crucial Step in AI's Future

In most people's eyes, the evolution of AI means "remembering more."

But Charles Packer proposed another idea:

AI should not only be able to remember but also learn to forget.

In the human world, forgetting is an evolutionary advantage.

Research shows that "intelligent forgetting" during sleep can help the brain suppress invalid information and focus attention on truly important segments.

Without forgetting, our brains would be like a hard drive full of files and would eventually crash.

For AI, forgetting currently takes a far cruder form: catastrophic forgetting.

When an AI's neural network learns a new task, it often overwrites old knowledge.

This is unlike human "selective forgetting," which is a hierarchical, controllable mechanism.
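The effect is easy to reproduce in a toy setting. In this sketch (a deliberately minimal demonstration, not a claim about any production model), one logistic regression is trained on task A, then on task B; its accuracy on task A falls back toward chance because the same weights were overwritten.

```python
# Toy demonstration of catastrophic forgetting with a shared weight vector.

import numpy as np

rng = np.random.default_rng(0)

def make_task(label_axis, n=500):
    """Task A labels by the sign of x (axis 0), task B by the sign of y."""
    X = rng.normal(size=(n, 2))
    y = (X[:, label_axis] > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, steps=200):
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0).astype(float) == y).mean()

Xa, ya = make_task(0)  # task A
Xb, yb = make_task(1)  # task B

w = np.zeros(2)
w = train(w, Xa, ya)
print("task A accuracy after learning A:", accuracy(w, Xa, ya))  # high, ~0.99

w = train(w, Xb, yb)   # keep training the SAME weights on task B only
print("task A accuracy after learning B:", accuracy(w, Xa, ya))  # drops toward chance
```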

Therefore, future AI needs to develop "artificial forgetting."

When a user tells an AI to "delete that project and never mention it again," a future AI should not merely stop retrieving that memory but retroactively clear all related content.

AI must likewise learn to delete sensitive or outdated information.
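What retroactive clearing could look like, in a hypothetical memory store (the topic-tagging scheme and every name below are invented for illustration): deletion cascades to every entry derived from the forgotten topic, including summaries that merely mention it.

```python
# Hypothetical "artificial forgetting": cascade-delete by topic tag.

class ForgettableMemoryStore:
    def __init__(self):
        # Each memory records the topics it derives from, so deletion can
        # cascade to summaries and side notes as well as primary entries.
        self.entries = []

    def add(self, text, topics):
        self.entries.append({"text": text, "topics": set(topics)})

    def forget_topic(self, topic):
        """Remove every entry touching the topic; return how many fell."""
        before = len(self.entries)
        self.entries = [e for e in self.entries if topic not in e["topics"]]
        return before - len(self.entries)

store = ForgettableMemoryStore()
store.add("Kickoff notes for Project X", {"project_x"})
store.add("Summary: user juggles Project X and a gym schedule", {"project_x", "health"})
store.add("User goes to the gym on Tuesdays", {"health"})

removed = store.forget_topic("project_x")  # user: "delete that project"
print(removed, [e["text"] for e in store.entries])
# 2 ['User goes to the gym on Tuesdays']
```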

In Europe, the "Right to Be Forgotten" has been written into privacy regulations.

This is not only a technical challenge but also an ethical issue.

So, who has the right to decide what AI remembers and forgets?

When forgetting becomes possible, will it bring a safer user experience or new means of manipulation?

Perhaps, what really makes AI more human is not "never forgetting" but "learning to forget."

References:  

https://www.wired.com/story/sleeptime-compute-chatbots-memory/  

https://www.fastcompany.com/91368307/why-sleep-time-compute-is-the-next-big-leap-in-ai

https://www.letta.com/blog/sleep-time-compute

This article is from the WeChat official account "New Intelligence Yuan". Editor: Qing Qing. Republished by 36Kr with permission.