
Did ChatGPT suddenly bring up a sad event from a few months ago? Netizens were instantly heartbroken: "It really knows how to rub salt into the wound."

新智元 · 2025-07-22 10:39
A new form of AI-induced social embarrassment and online trauma

From medical records and taste preferences to painful past events, AI is quietly building your digital personality profile. But are you really ready to let AI remember every word you say forever? Behind AI algorithms, there is not only kindness but also social embarrassment and cruelty.

In April this year, OpenAI released a new version of ChatGPT's "memory" feature.

Since then, ChatGPT's memory has been comprehensively upgraded: it is smarter and more natural, and even free users get access. It can remember what you've said, build a personalized profile, and keep refining the conversation experience. But problems have also arisen -

Are you really ready to let an AI remember you forever?

This is important: not everyone is ready to accept a chatbot that never forgets.

ChatGPT's memory feature gives users more personalized responses by drawing on context from previous conversations.

For example, journalist Megan Morrone once asked ChatGPT to provide a vegetarian menu without lentils. Since then, the chatbot has remembered that she doesn't like lentils.

The initial memory feature was like a personal memo that you had to fill in yourself.

Now, it has become more "understanding" - it can even automatically record your behaviors and preferences in different conversations.

Christina Wadsworth Kaplan, who leads personalization at OpenAI, told the media that this year's major update is about making "memory more natural and automatic."

She also shared a personal experience:

Once, when she was preparing to travel abroad, ChatGPT drew on the health records she had previously uploaded and proactively added one more vaccine to its recommended list.

The nurse nodded in approval after seeing it.

This is what it really means for "AI to understand you."

New Forms of AI Social Embarrassment and Online Trauma

However, it isn't quite as wonderful as OpenAI promises - the "memory" feature brings problems of its own.

For example, it may suddenly remind you: "Didn't you say you don't eat lentils?"

Or, it may casually mention a sad thing you said a few months ago.

Sometimes, this long AI memory can feel downright creepy.

In February 2024, OpenAI first announced this feature, promising at the time that ChatGPT would hold back on sensitive content such as health information unless the user explicitly asked it to remember it.

Do you believe that? To be fair, you can now tell it directly, "Remember this," or, conversely, "Don't remember this," and the AI will follow your instructions.

Now, ChatGPT's "memory" function automatically records previous chat content to understand the user's preferences and background.

This is a personalization system, and it raises not only privacy concerns but also plenty of potential for embarrassment.
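
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of what a naive "memory" layer could look like - it is not OpenAI's implementation, and the class, method names, and stored facts are all made up. It simply saves facts gleaned from earlier chats and prepends all of them to each new prompt, which is why personalization and the embarrassments described below come from the same place.

```python
# A toy "memory" layer (illustrative only, not how ChatGPT actually works):
# it keeps every remembered fact in one undifferentiated list and injects
# them all into every future prompt, regardless of topic or sensitivity.

class NaiveMemory:
    def __init__(self):
        self.facts = []  # no notion of context, expiry, or sensitivity

    def remember(self, fact: str) -> None:
        """Store a fact extracted from an earlier conversation."""
        self.facts.append(fact)

    def build_prompt(self, user_message: str) -> str:
        """Prepend everything we know about the user to the new request."""
        profile = "\n".join(f"- {fact}" for fact in self.facts)
        return f"Known about the user:\n{profile}\n\nUser: {user_message}"


memory = NaiveMemory()
memory.remember("dislikes lentils")            # from a cooking chat
memory.remember("lives near Half Moon Bay")    # from casual small talk
memory.remember("mentioned a painful loss")    # from a private conversation

# A later, unrelated request still carries every stored fact - which is how
# a dog-costume prompt can pick up a hometown sign, or a casual chat can
# resurface something sad from months ago.
print(memory.build_prompt("Draw my dog in a pelican costume."))
```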

Megan Morrone asked ChatGPT to generate an image of herself based on the memory.

As a result, the AI-generated portrait included a wedding ring - but she had long since been disillusioned with marriage. ❤️‍🩹

Memory isn't always better the longer it lasts, especially when it comes from an uncontrollable machine.

Persistent memory may also make the chatbot "all-knowing," thus reducing the user's control over the large language model (LLM).

Developer Simon Willison uploaded a photo of his dog and asked ChatGPT to add a pelican suit to it. The resulting image also included a sign that said "Half Moon Bay."

The AI explained: "Because you mentioned this place before."

He was both angry and amused: "I don't want my hobby of dressing my dog in fancy clothes to interfere with my future serious work prompts!" 🥲

AI has permanent memory, but it forgets that life itself should involve selective forgetting.

You may think it's just a technical bug, but actually, there are two types of creepy problems hidden behind it 👇:

(1) Inadvertent Algorithmic Cruelty;

(2) Context Collapse.

Inadvertent Mistakes: Algorithmic Cruelty

About a decade ago, blogger Eric Meyer coined the term "inadvertent algorithmic cruelty."

One afternoon, grief struck him out of nowhere, courtesy of a group of designers and programmers - the creators behind Facebook who, at that moment, were probably basking in a sense of accomplishment.

They had poured their hearts into the "Year in Review" app, and they genuinely had something to be proud of - countless users shared their annual highlights through it.

But the past year had been extremely hard for Meyer, and he had no desire to create a review of his own.

On his news feed, review cards created by others kept popping up, almost all accompanied by the default caption: "It was an amazing year! Thank you for being part of it."

Just seeing the adjective "amazing" was enough to make him uncomfortable: this word had nothing to do with him.

Then, suddenly, a photo of his daughter smiling popped up on his homepage, encouraging him to create one too: "Eric, this is your Year in Review!"

It was this year that his daughter had died of cancer.

Yes, this was his "Year in Review." Absolutely. This was the year - and it was the face of the daughter he would never see again, prompting him to make a Year in Review!

The reminder was blunt, and genuinely cruel.

He of course understood that this was not a deliberate act of harm.

This "unintentional algorithmic violence" stems from a set of codes - in most cases, it works well, reminding people to review their "amazing" year, showing selfies at parties, whales spouting water beside yachts, and the scenery of the dock outside the vacation house.

But in the same year, some people lost their loved ones, some spent a long time in the hospital, some went through divorce, unemployment, or other life crises...

Perhaps, they didn't want to look back on this year.

Showing him the face of his deceased daughter and saying "This is you this year!" beside it - anyone would feel uncomfortable in this situation.

Any person would think this was wrong.

If it were done by a real person, it would indeed be wrong. But since it comes from code, it can only be considered unfortunate. And these problems are really difficult to solve.

This is not an easy task: it is very hard for an algorithm to judge whether a photo got countless likes because it was funny, amazing, or heartbreaking.

In essence, algorithms have no "heart" and no "mind." They run according to their set procedures, and once started, they do no thinking at all.

Saying that a person is "thoughtless" is usually a kind of slight or insult. However, humans have allowed so many truly "thoughtless" algorithmic processes to recklessly invade users' lives and even backfire on themselves.

True intelligence is not just "remembering every word you say," but "understanding what your sad things are."

Context Collapse

The problem that Willison encountered is another common phenomenon in algorithmic systems, called "context collapse."

This refers to the situation where a user's data in different domains (work, family, hobbies, etc.) are mixed together, blurring the boundaries between them.

Like many academic concepts, "context collapse" is not the result of a sudden inspiration from one person, but gradually emerged through continuous exchanges and collisions.

Even so, many academic researchers wrote to danah boyd asking whether she had coined the term "context collapse," so she went back through her records to settle the question.

danah boyd: Principal researcher at Microsoft Research and founder and president of the Data & Society Research Institute. Her research keywords include privacy, census, context, algorithms, fairness, and justice.

In 2001, she began to pursue a master's degree at MIT.

In 2002, she wrote a master's thesis titled "Faceted Id/entity," which was deeply influenced by the ideas of Erving Goffman and Joshua Meyrowitz.

In that thesis, she spent an entire chapter repeatedly discussing "collapsed contexts," although she didn't systematically define the term at that time.

The whole thesis was actually about how to construct and manage identity in different contexts.

Thesis link: https://www.danah.org/papers/Thesis.FacetedIdentity.pdf

She especially loved Meyrowitz's book "No Sense of Place." This book analyzes how the media affects interpersonal interactions and reveals the dilemma of how people navigate among multiple audiences. For example, the misinterpretation that occurs when a vacation photo is seen by different people.

The Chinese translation is "The Vanishing Context," which focuses on the impact of new information-flow patterns on social behavior. The