Claude can "dream", and it's still competing even in its dreams.
Claude has started to dream.
Sometimes, people can't figure something out during the day, but suddenly understand it after a good sleep.
Now, Claude has also learned this trick. Anthropic's Claude Managed Agents has launched a new feature called Dreaming:
It lets the AI "sleep" during work breaks to reflect, clean up memories, distill patterns, and even upgrade itself.
To put it simply, it's a sleep-time tidying routine for AI.
Now we finally know what the Dreaming feature spotted in Claude Code's leaked source code actually is (doge).
What exactly is Claude doing in its "dreams"?
When we chat with an AI, it writes content into its memory bank each time. Over time, the memory bank becomes a jumbled mess.
Duplicated, outdated, and useless information piles up together, and even the AI itself can't tell which entry to use.
As a result, the AI becomes slower and less accurate.
The Dreaming feature launched by Claude this time aims to solve this problem.
It is an asynchronous task that runs quietly during conversation breaks and involves collective reflection across agents.
Dreaming will automatically read the memory bank and up to 100 historical conversations, and then start doing three things:
First, merge duplicates and clean up noise.
Merge similar memory entries and delete useless redundant information in the memory bank.
Second, replace old content and update knowledge.
Identify invalid processes, expired rules, and no-longer-applicable preferences, and automatically replace old content with the latest information.
Third, conduct cross-analysis and dig out patterns.
An individual agent can't see much from its own experiences, but by comparing the histories of multiple agents, hidden patterns that a single AI can't discover can be unearthed.
For example, recurring errors, the optimal workflow that multiple agents eventually converge on, and the unified preferences and habits of the entire team.
The design of Dreaming is also relatively safe and controllable. It doesn't modify the original memory data. All the organized and optimized results are written to a brand-new memory bank.
That is to say, if you're not satisfied with the "dream", you can simply delete this new bank without affecting the original data.
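The consolidation process described above can be sketched in a few lines. This is a purely illustrative toy model, not Anthropic's actual implementation or API: the `MemoryEntry` shape and the `dream` function are hypothetical stand-ins for "merge duplicates, keep the freshest version, write to a new bank without touching the old one."

```python
from dataclasses import dataclass

# Hypothetical memory entry; the real Claude memory format is not public.
@dataclass(frozen=True)
class MemoryEntry:
    topic: str
    content: str
    timestamp: int

def dream(old_bank: list[MemoryEntry]) -> list[MemoryEntry]:
    """Consolidate a memory bank: keep only the newest entry per topic.

    The original bank is never modified; results go into a new list,
    mirroring Dreaming's write-to-a-new-memory-bank design.
    """
    newest: dict[str, MemoryEntry] = {}
    for entry in old_bank:
        kept = newest.get(entry.topic)
        if kept is None or entry.timestamp > kept.timestamp:
            newest[entry.topic] = entry  # merge duplicates, prefer fresher info
    return sorted(newest.values(), key=lambda e: e.topic)

old = [
    MemoryEntry("deploy", "use script v1", 1),
    MemoryEntry("deploy", "use script v2", 5),  # supersedes v1
    MemoryEntry("style", "user prefers bullet lists", 3),
]
new_bank = dream(old)
```

Because `dream` only ever builds a new list, discarding an unsatisfactory "dream" is as simple as dropping `new_bank` — the original data stays intact.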
As Anthropic officially puts it:
Memory is about remembering and learning things on the spot during work;
Dreaming is about figuring out what these experiences mean during work breaks.
One is immediate learning, and the other is in-depth reflection. This closely mirrors how the human brain organizes the day's memories, consolidates experiences, and strengthens skills during sleep.
Currently, Dreaming is still in the research preview stage, but some companies have already given it a try.
After integrating Dreaming, the legal technology company Harvey reported that its completion rate for drafting long-form legal documents rose roughly sixfold. The writing tool Spiral uses Dreaming to remember users' personal style preferences, and combined with multi-agent collaboration, its output matches users' tastes more and more closely.
Three features launched together
Of course, this update of Claude Managed Agents isn't just about dreaming.
In addition to Dreaming, there are Outcomes and multi-agent orchestration, both of which have entered public beta.
Outcomes can be regarded as the AI's self - quality inspector.
The idea is simple: you write a scoring rubric, and after the AI finishes its work, an independent Grader Agent scores the result against that rubric in an isolated context window.
This scoring AI isn't affected by the executing AI. If it doesn't meet the standard, it will point out the problem, and the executing AI will automatically make modifications until it is qualified.
Internal test data shows that Outcomes can increase the task success rate by up to 10 percentage points, improve the quality of docx file generation by 8.4%, and improve pptx by 10.1%. It is particularly effective for tasks with high-detail requirements and strongly subjective standards.
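The grade-and-revise loop described above can be sketched as follows. Everything here is a hypothetical stand-in, not the real Outcomes API: `run_executor` and `run_grader` are stubs showing the shape of the pattern (independent grader, feedback, retry until the rubric passes).

```python
# Illustrative sketch of the Outcomes pattern: an independent grader scores
# the executor's output against a rubric; the executor revises until passing.
# All function names here are hypothetical, not a real Anthropic API.

def run_executor(task: str, feedback: str) -> str:
    # Stand-in for the executing agent; applies a fix when given feedback.
    draft = f"draft for {task}"
    if feedback:
        draft += " (revised: added citations)"
    return draft

def run_grader(output: str, rubric: str) -> tuple[bool, str]:
    # Stand-in for the isolated grader agent: checks one rubric criterion.
    if "citations" in output:
        return True, ""
    return False, "missing citations required by rubric"

def outcomes_loop(task: str, rubric: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        output = run_executor(task, feedback)
        passed, feedback = run_grader(output, rubric)
        if passed:
            return output
    return output  # best effort after max_rounds

result = outcomes_loop("legal memo", "must include citations")
```

The key design point the article highlights is isolation: because the grader never shares context with the executor, its score can't be biased by the executor's reasoning.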
Multi-agent orchestration allows AIs to work in groups. A Lead Agent acts as the team leader, splitting complex tasks into several parts and assigning them to different Specialist Agents for parallel processing.
Each Specialist can be a different model, different prompt, or different toolset. The contexts are isolated from each other but share the file system.
The Lead Agent can always find the previously-used agents to continue the conversation, and those agents still remember what they did last time.
Netflix has already put it into practice. Its platform engineering team used it to analyze the logs of hundreds of builds in parallel: multiple agents scanned their respective batches, and only the recurring problem patterns surfaced, with all the one-off noise filtered out.
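The Netflix-style workflow above — a lead fanning batches out to specialists, then keeping only recurring findings — can be sketched like this. The structure is illustrative and assumes nothing about the actual product: `specialist` and `lead_agent` are hypothetical names, and a plain string list stands in for build logs.

```python
# Illustrative lead/specialist orchestration: the lead splits the work,
# specialists scan their batches in parallel with isolated state, and the
# lead keeps only recurring error patterns, filtering one-off noise.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def specialist(batch: list[str]) -> list[str]:
    # Each specialist scans its own batch of build logs for error lines.
    return [line for line in batch if "ERROR" in line]

def lead_agent(logs: list[str], workers: int = 3) -> list[str]:
    # Split logs into batches, fan out, then keep only recurring patterns.
    batches = [logs[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        findings = [err for result in pool.map(specialist, batches)
                    for err in result]
    counts = Counter(findings)
    return [err for err, n in counts.items() if n > 1]  # drop one-time noise

logs = ["ERROR: flaky test", "ok", "ERROR: flaky test",
        "ERROR: disk full", "ok"]
recurring = lead_agent(logs)
```

Here the frequency threshold plays the role the article describes: anything seen only once is treated as noise, and only patterns confirmed across multiple batches reach the lead's final report.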
These three features together actually solve the same problem:
Enable the AI to independently complete complex work without human supervision.
Add to that the cooperation with SpaceX to obtain all the computing resources of the Colossus 1 data center, and the doubled Claude Code call limits for Pro and Max users... it's clear that Anthropic is building a whole set of infrastructure for autonomous AI work.
The founder Dario Amodei also made a prediction at the Code with Claude conference:
The first $1 billion company operated by one person plus AI will be born in 2026.
In the future, it's no longer a dream for one person to build a $1-billion company. Perhaps that $1-billion company should start with Claude having a good dream...
Reference links:
[1] https://x.com/claudeai/status/2052067399088664981
[2] https://claude.com/blog/new-in-claude-managed-agents
This article is from the WeChat official account "QbitAI". Author: Focusing on cutting-edge technology. Republished by 36Kr with authorization.