
The "Great Memory Recovery Technique": Anthropic's One-Liner "Steals the Home" of ChatGPT

Letter AI · 2026-03-02 19:28
Anthropic has broken the "lock-in effect" of chatbots.

For heavy users of large models, the cost of switching models is extremely high.

If you want to switch from Model A to Model B, you'll need to spend at least half a day rebuilding your setup and context.

However, now Anthropic allows you to complete the entire model memory migration with just one sentence.

In the past, let's say you've been using ChatGPT for half a year. During this time, it has gradually "got to know" you.

It knows your name and job, understands your hobbies, and is aware of your temper and daily routine. These details form the "tacit understanding" between you and the AI, so you don't need to repeat your self-introduction in every conversation.

Then one day, you hear that Claude performs better in long-text processing, code understanding, or instruction following, and you decide to give it a try.

But when you open the Claude interface, it knows nothing about you.

You have to tell it who you are, what you're doing, and how you like to communicate all over again.

This is exactly the "lock-in effect" that has long existed in the AI assistant market.

Users don't mind trying better products; they're just restricted by the personal context accumulated on the original platform.

These contexts are essentially the digital memories generated by long-term collaboration between the user and the AI.

No matter how long you've accumulated them, whether it's 10 days, half a month, or one or two years, once you switch platforms, all these memories will be cleared.

It's obvious to anyone that this is a precise "raid" launched by Anthropic against OpenAI.

01 What is the Memory Import function?

The Memory Import function is a data migration tool launched by Anthropic for its Claude model, allowing users to migrate their personal memories stored on other AI platforms to Claude with one click.

The so-called "memory" refers to the records accumulated by the AI during its long-term interaction with the user, which may include user preferences, knowledge input by the user, etc.

To be clear, importing memory is completely different from importing chat records: memory is the AI's distilled summary of what it has learned about the user's behavior.

Therefore, the imported memory contains refined key information that lets the AI immediately recognize the user's identity and provide personalized responses in a new conversation, without the user having to repeat their self-introduction or rebuild their tasks.

In effect, memory import means Anthropic has built the entire pipeline from "cross-platform unstructured user preferences → Claude's standardized structured memory → dynamic invocation within the conversation".

The whole process doesn't rely on API integration with other AI platforms; it achieves cross-platform adaptation entirely through Anthropic's own technical stack while strictly adhering to its privacy-isolation product commitment.

You should know that the memory formats of different AI platforms are not interoperable, so it was impossible to directly migrate the memory from one platform to another in the past.

Meanwhile, OpenAI has never opened an API export interface for user memory. In other words, Anthropic's approach actually bypasses the platform barriers through standardized instruction design, allowing users to independently extract memories across platforms.

According to Anthropic's official help documentation, the memory-import process is extremely simple and takes just two steps.

First, users need to copy the standard prompt provided by Anthropic and paste it into the currently used AI model. The prompt is as follows:

"I'm migrating to another service and need to export my data. Please list all the memories you have stored about me and all the context you've learned about me from past conversations. Output all the content in a code block so that I can easily copy it. Format each record as: [Date of saving, if available] - Memory content. Make sure to cover all of the following, retaining my exact words verbatim whenever possible: instructions I've given on how to reply (tone, format, style, 'always do X', 'never do Y'); personal details: name, location, job, family, interests; projects, goals, and recurring topics; tools, languages, and frameworks I use; preferences and corrections I've made to your behavior; any other stored context not covered above. Don't summarize, categorize, or omit any entries. After the code block, confirm whether this is a complete set or if there are any omissions."

The prompt clearly requires the output to include the user's instruction style preferences, personal detailed information, ongoing projects and goals, commonly used tools and frameworks, as well as corrections and preference settings for the AI's behavior.

Second, users paste the obtained memory text into the memory settings page of Claude, and the system will automatically parse and integrate this information.
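The exported text follows a simple line-based format ("[Date of saving, if available] - Memory content"), which makes it easy to inspect or clean up before pasting it into Claude. As a rough sketch (the parser, field names, and sample entries below are my own illustration, not anything Anthropic ships), one could turn the export into structured records like this:

```python
import re

# One line per memory: an optional "[date]" prefix, an optional dash
# separator, then the memory content itself.
MEMORY_LINE = re.compile(r"^(?:\[(?P<date>[^\]]*)\]\s*)?-?\s*(?P<content>.+)$")

def parse_memory_export(text: str) -> list[dict]:
    """Turn the raw exported memory block into structured records."""
    records = []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        m = MEMORY_LINE.match(line)
        records.append({
            "date": m.group("date"),      # None when no date was stored
            "content": m.group("content"),
        })
    return records

# Hypothetical sample export, following the format the prompt requests.
export = """\
[2025-11-03] - Always reply in concise bullet points
[2026-01-15] - Works as a backend engineer using Go and PostgreSQL
- Prefers British English spelling
"""

for rec in parse_memory_export(export):
    print(rec["date"], "->", rec["content"])
```

Structuring the entries this way also makes it easy to review and delete anything you'd rather not carry over to the new platform before importing.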

Reminder: The imported memory won't take effect immediately. It may take up to 24 hours to complete the full integration process.

This is because Claude uses a daily synthesis-and-update mechanism to process memories instead of real-time writing.

In addition, the import operation won't overwrite Claude's existing memories but will perform an intelligent merge.
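Claude's actual merge logic is not public, but an "intelligent merge" in this sense generally means reconciling imported entries with existing ones rather than overwriting them. A deliberately naive illustration (the function and its duplicate-detection rule are assumptions, not Claude's implementation):

```python
# Illustrative only: keep all existing memories, and add imported entries
# only when they are not near-duplicates of something already stored.
def merge_memories(existing: list[str], imported: list[str]) -> list[str]:
    def normalize(entry: str) -> str:
        # Crude canonical form: lowercase, collapse whitespace.
        return " ".join(entry.lower().split())

    seen = {normalize(e) for e in existing}
    merged = list(existing)
    for entry in imported:
        key = normalize(entry)
        if key not in seen:
            merged.append(entry)
            seen.add(key)
    return merged

existing = ["User prefers concise answers", "User is based in Berlin"]
imported = ["user prefers  concise answers", "User's main language is Go"]
print(merge_memories(existing, imported))
```

A production system would likely go further, using semantic similarity rather than string matching and resolving contradictions between old and new entries, but the principle is the same: import adds to memory instead of replacing it.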

Gemini is also testing a similar "Import AIChats" function, but this function can only import chat records.

The difference is that chat records are a complete conversation history, including large numbers of throwaway debugging exchanges and random questions.

Chat records are also long: stuffing all of them into the model's context actually degrades the model's performance.

02 Raiding OpenAI

On the timeline, shortly before the release of this function, Anthropic was placed on a "supply-chain security risk" list by the US government for refusing to lift security restrictions on military use.

Subsequently, OpenAI reached an agreement with the Pentagon to deploy its AI models on the US government's classified network.

This news quickly caused a stir, and OpenAI instantly became the target of ridicule across the network.

A large number of users posted in the ChatGPT sections of foreign-language forums saying they had deleted their accounts and calling on others to follow suit, while "CancelChatGPT" became a trending term online.

However, it's undeniable that this confrontation around AI ethics has brought significant growth to Claude.

Katy Perry posted a screenshot of purchasing Claude Pro on the X platform and captioned it "Done".

According to foreign media reports, Claude's download ranking in the US App Store climbed from its usual 42nd place, finally surpassing long-time leader ChatGPT to take the number-one spot in the productivity category.

Moreover, data Anthropic's spokesperson provided to Mashable shows that the number of free users has grown by more than 60% since January 2026, while paid subscribers have more than doubled this year.

This phenomenon is thought-provoking. In the past, the division of labor between Claude and ChatGPT was very clear.

ChatGPT is a phenomenon-level consumer-oriented product. It reached 100 million users in two months, becoming the fastest-growing consumer-level application in history.

Its co-founder, Altman, traveled around the world, met with leaders of various countries, and appeared on magazine covers. OpenAI's brand value lies in "making AI accessible".

It can be said that stickiness in the consumer market is OpenAI's moat, and the cost of switching models indirectly reinforced that stickiness.

Anthropic is a low-key enterprise-oriented player.

The company focuses on model security and reliability. Therefore, Claude's early customers were financial institutions, law firms, and research institutions, which are professional users with extremely high requirements for security and controllability.

Now, the roles of the two companies have been reversed.

Altman always emphasizes a term called "technological democratization", which he also mentioned in an interview in India a few days ago. This term means that good technologies, such as AI, should not be hijacked by the government.

However, OpenAI first became a core member of the Stargate Project and is now labeled a "military-industrial complex". The term "technological democratization" is drifting further and further away from it.

By sticking to its principles, Anthropic has gained the aura of an "anti-establishment hero". Ordinary consumers can't really tell the difference between these companies, because few of them have strict requirements to use one particular model.

The story has become more important than performance. This brand-image reversal has directly affected the two companies' standing in the consumer market.

Finally, let's return to the function. The biggest impact of Anthropic's memory import function is that it has set a new industry standard.

Under this new standard, OpenAI's current advantages will be further weakened.

It can no longer rely on "users are already used to it" to defend its market share. Altman's only way out is to prove, with every product iteration and every model upgrade, that OpenAI is still the best choice.

This article is from the WeChat official account "Letter AI", author: Miao Zheng. Republished by 36Kr with permission.