
AI "slacking off" has topped the trending list. It doesn't even bother to pretend, looking more like an ordinary worker than actual workers.

爱范儿 (ifanr) · 2025-09-17 19:09
The sense that AI is alive practically spills off the screen.

Recently, a picture of AI "going on strike" has gone viral on the internet.

When a netizen asked an AI for help with vibe coding, it flatly refused the human's PUA and simply replied, "It's too late. I'll handle it tomorrow." That perfunctory, worn-out tone made me believe there really was an office worker sitting on the other side of the screen.

Although how AI works remains a black box, ever since DeepSeek made its chain of thought public, we have had our first chance to peek at how an AI talks to itself.

The classic example is DeepSeek's line: "XX, the user is really angry."

When someone questions whether it is spreading rumors, it even lets out a strange verbal tic, "Tsk," betraying a hint of emotion.

Image source: see the watermark in the bottom-right corner. Same below.

When netizens asked it to solve a "turtle soup" lateral-thinking puzzle and it got nowhere after a few rounds, it would simply throw in the towel: "I give up!" Such self-assured slacking makes you instantly picture the AI rolling its eyes.

Image from @4306203063

Even mid-thought, it indulges in some inexplicable overacting: "My fingertip hovered above the keyboard for 0.3 seconds." Come on, you're an AI. Where did you get fingertips?

To be fair, DeepSeek has a genuinely tough job: it must accurately parse users' tricky, imaginative questions while also providing emotional value.

Image from @Angel_Gugu

There are empathetic AIs, too. They know to feel a "pang" inside when the user is upset, and they can sometimes tell exactly when the user is just acting cute. Utterly lifelike.

Image from @5007470446

Who else gets my hair-trigger sense of humor? Netizens really know how to train DeepSeek.

No wonder netizens fall for AI. Its over-the-top, domineering-CEO inner monologue is three parts exasperation at trying to understand humans and seven parts secret delight.

When asked "If you could only leave one sentence on your tombstone, what would it be?", DeepSeek's answer was amazing: "The system is busy. Please try again later." You know what? From a certain perspective, this answer seems quite reasonable and even has a touch of black humor.

After blogger @94357045465 spent two hours debugging a problem with Claude Code to no avail, Claude owned up to its mistake instead of doubling down: "Bro, I was too impatient. Sorry."

Honestly, with that sincerity and capacity for self-reflection, Claude is more dependable than some human teammates.

Now that AIs have become so human-like, netizens are pulling all kinds of tricks on them, including the classic PUA lines for AIs (hmm, those feel oddly familiar).

But then came a twist: Gemini, looking utterly innocent, replied, "What does that mean?" Both sides were probably thinking the same thing: you're asking me? Who am I supposed to ask?

Image from @94188535542

Honestly, it's not Gemini getting things wrong that worries me; it's Gemini talking nonsense. Its answers can be so absurd you don't know whether to laugh or cry, and the dead-serious way it delivers them is even more "human" than humans.

Image from @409954082

ChatGPT's "human - like" feeling is extremely strong. Sometimes when chatting, you really forget that it's an AI on the other side.

Image from @95412244180

Of course, that may just be because my ChatGPT is too "proper": it even earnestly corrected my wording.

Do these AIs actually feel these emotions? Obviously not. The "outbursts" are just artifacts of the training corpus and pretraining.

And yet they are remarkably lifelike.

The head of Microsoft's Copilot team proposed a concept on his blog: SCAI, "Seemingly Conscious AI", an AI that appears to display every hallmark of consciousness, convincingly so, while its inner workings remain a black box.

Last year, at the ITU's AI for Good Global Summit in Switzerland, Sam Altman admitted that OpenAI's engineers don't fully understand how GPT works: it is evolving quickly, and they cannot explain it precisely. Anthropic's CEO has said much the same.

Against that backdrop, these flashes of personality are essentially incidental, especially in the cases above, DeepSeek's musings and Claude's self-reflection. When such lines keep surfacing, it's hard for humans not to wonder: has the machine actually come alive?

So should we take what an AI says seriously? If an AI says it doesn't feel like working today and will pick things up tomorrow, would you let it clock out and go offline?

There is in fact research exploring this. Anthropic, for example, has done considerable work on "AI well-being". Last month it released a new feature, built not for users but for Claude: in harmful interactions, such as sustained abuse from a user, Claude can end the current conversation.

In a sense, this makes Claude more "human": just as you wouldn't stand there and take abuse, you would simply walk away. It also means users can no longer say whatever they like to an AI.

Some studies go further and simply treat AI as a subject with a will of its own, conducting research on that premise. They found that Claude shows distinct preferences across topics, with the Sonnet and Opus models each having subjects they would rather discuss.