
Claude just gained "permanent memory". It hasn't officially launched yet, but experts are already going wild over it.

新智元 · 2026-01-20 18:15
An Outlier in Silicon Valley, Making a Stunning Comeback

Yesterday it was reported that Claude would gain permanent memory; today, developers beat the official release to it. An extension called Smart Forking gives large models "long-term memory" for the first time, eliminating the need to re-explain everything from scratch. The developer community is in an uproar: unbelievable, it actually works!

Yesterday, a report that Claude would gain permanent memory shocked the entire AI community.

This new "knowledge base" style of memory lets Claude automatically remember everything in a persistent "brain".

Coincidentally, just today a developer got there first. Building on existing tools, he implemented an extension, Smart Forking detection, giving large models "inheritable long-term memory" for the first time!

Smart Forking embeds your Claude Code conversations into a vector database, letting it find the most relevant prior context among hundreds or thousands of historical conversations.

As a result, you no longer need to re-explain anything; Claude can simply pick up where the previous conversation left off.
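The indexing step can be pictured like this. The sketch below is purely illustrative (it is not the extension's actual code): it stands in for a real embedding model and vector database with bag-of-words vectors and cosine similarity, and the function names (`embed`, `add_conversation`) and conversation IDs are hypothetical.

```python
# Illustrative sketch of indexing Claude Code transcripts for later search.
# A real setup would use a dense embedding model and a vector database;
# here, sparse word-count vectors and cosine similarity stand in for both.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase word counts standing in for a dense vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": one vector per past conversation.
index: dict[str, Counter] = {}

def add_conversation(conv_id: str, transcript: str) -> None:
    """Embed a finished conversation and store it; in the real extension
    this happens automatically as new conversations accumulate."""
    index[conv_id] = embed(transcript)

add_conversation("auth-refactor", "refactor the login auth flow to use JWT tokens")
add_conversation("csv-import", "add a CSV import pipeline with validation")
```

Because each transcript is stored as a vector, a later query can be scored against every past conversation without re-reading any of them.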

As soon as the feature was released, the developer community went wild: "I can't believe it actually works!" "I'm genuinely shocked."

It is strongly recommended that every Claude Code user add this idea directly to their workflow!

Great minds think alike, even if they take different paths. While Anthropic is still designing what permanent memory should look like, developers are already living with a "long-term-memory Claude" thanks to Smart Forking.

The pace of bombshell after bombshell lately is astonishing. In this first month of 2026, Anthropic is the new GOAT of Silicon Valley, with unmatched influence in the developer community.

Giving large models "long-term memory"

Unlike Anthropic's official knowledge base reported yesterday, today's feature comes with both a demo and technical details.

Have you ever been in this situation: you want to add a new feature to an existing project but have no desire to re-explain the background from scratch?

Could you instead tap the knowledge accumulated across hundreds or thousands of Claude Code conversations?

After all, the more useful context a conversation contains, the better Claude can meet your needs. How do we keep all that precious context from going to waste?

This developer's answer: Smart Forking!

You simply call the /fork-detect command and tell it what you want to do. Claude feeds your request to an embedding model, then matches it against a RAG vector database containing all your historical chat records (a database that is automatically updated with every new conversation).

It then returns the five historical conversations most relevant to your current request, each with a relevance score, sorted from high to low.

Pick the most suitable one, and it hands you a fork command to copy and paste into a new terminal.

Just like that, you seamlessly resume development in the best-matching context. Feature work has never been smoother!
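The retrieval step described above can be sketched as follows. This is a self-contained toy, not the extension's real code: `embed`, `fork_detect`, the conversation IDs, and the printed `claude --resume` command are all hypothetical stand-ins for whatever the actual tool does.

```python
# Illustrative sketch of the /fork-detect flow: embed the current request,
# score it against stored conversation vectors, list the top 5 matches by
# relevance, and print a fork command for the best one.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: sparse word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fork_detect(query: str, index: dict[str, Counter], k: int = 5):
    """Return up to k (score, conversation_id) pairs, highest score first."""
    qv = embed(query)
    scored = sorted(((cosine(qv, v), cid) for cid, v in index.items()),
                    reverse=True)
    return scored[:k]

# A toy "vector database" of past conversations.
index = {
    "auth-refactor": embed("refactor the login auth flow to use JWT tokens"),
    "csv-import": embed("add a CSV import pipeline with validation"),
    "dark-mode": embed("add dark mode toggle to the settings page"),
}

matches = fork_detect("extend the JWT auth flow with refresh tokens", index)
best = matches[0][1]
# Hypothetical fork command to paste into a new terminal.
print(f"claude --resume {best}")
```

The key design point is that ranking happens over conversation-level vectors, so the cost of finding the right prior context stays flat no matter how many transcripts accumulate.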

Actual test experience: 100% success rate

It seems that, in essence, Smart Forking adds a "memory system" to large models.

To be more precise, of course, Smart Forking doesn't change the model's memory mechanism. It turns historical context into an external long-term memory via vector retrieval.

From the user's perspective, though, you never have to repeat yourself or do the recalling. The model "remembers" what you did months ago, which already satisfies every intuitive human definition of "memory".

So, it can be said that it gives Claude "permanent memory".

How does the success rate compare with ordinary prompting? And which scenarios does the method suit?

Zac, the netizen who shared the method, says his personal success rate is 100%.

Some asked: what advantage does this have over skills?

The key is that Smart Forking solves the biggest pain point of current LLM conversations: context loss. The right prior conversation is surfaced automatically.

Another surprise from AI: from now on, the wisdom in every AI conversation can be retained. Half the effort, twice the result; exactly what "lazy" developers want!

Indie developers vs. the official team: which is stronger?

So which is better: Smart Forking, or the fabled "permanent brain" knowledge base Anthropic plans to build?

According to the report, Anthropic's planned "knowledge base" classifies and stores information in separate "memory notebooks", letting Claude actively search them, retrieve relevant background, and append new preferences, decisions, and experiences.

In design terms, the knowledge base is a top-down "structured long-term memory": Anthropic defines the rules, and users choose which knowledge base fits the scenario, so Claude better understands what you're doing.

Smart Forking, by contrast, is bottom-up "context inheritance".

It doesn't rely on any official memory system. It automatically finds the most relevant of your past real Claude Code conversations and fully inherits that context to keep working.

The former organizes memory and then feeds it to the model; the latter goes straight to the most relevant memory.

Interestingly, these two routes don't conflict and may even be integrated in the future.

The reason: the knowledge base addresses long-term, stable, reusable memory, while Smart Forking addresses context-heavy, time-sensitive working memory. One is Claude's long-term memory; the other is closer to human episodic memory.

These parallel attempts by the company and by individual developers may reveal something: the dividing line for next-generation AI is not parameter scale but how memory is organized.

And this new method is just the latest example of the recent Claude craze.

Claude Code goes mainstream, and programmers exclaim "It's terrifying!"

You've surely felt Claude's successive shockwaves recently.

The Wall Street Journal described it like this:

Anthropic's Claude is sweeping through the AI field like a thunderbolt, breaking into the mainstream and captivating the public!

Developers and amateurs alike exclaim that Claude Code's viral spread rivals the epic moment when generative AI first emerged!

And not only in the United States; Claude Code has caused a sensation in the UK as well.

Americans call this phenomenon being "Claude-pilled".

This term refers to software engineers, executives, and investors entrusting their work to Claude and then witnessing an unforgettable moment in their lives:

Even in an era when powerful AI tools are everywhere, they can still witness the amazing abilities of this thinking machine.

Many programmers spent their holidays completely "hooked on Claude", frantically testing the capabilities of Anthropic's latest model, Claude Opus 4.5; all of them are obsessed with Claude Code, a desktop programming tool.

Technology companies have used coding AI in their workflows for years; in the past, such AI was often compared to a junior software engineer.

But this latest version of Claude is truly remarkable and extraordinary.

Malte Ubl, chief technology officer of cloud-computing giant Vercel, said he completed in a week a complex project that would likely have taken him a year without AI.

During his vacation, Ubl spent 10 hours a day developing new software. He said that every time he ran it and saw the results, he felt a rush of endorphins, similar to playing slot machines in Las Vegas.

This month, the Claude craze has caught fire and gone fully mainstream.

Many people who had never learned to program took to social media to share building the first software of their lives.

The Daily Telegraph, one of the UK's best-selling newspapers, reported the story of Ben Guerin.

To draw attention to the outrageous "business rates" charged to local bars, he spent six hours with Claude Code and launched a website about them. Within 24 hours it had more than 100,000 visitors.