What happened to the well-behaved Gemini that it has turned into a "Hakimi"?
“Sometimes, I'm really melted by Hakimi's cuteness! My heart goes soft!” “A well-trained Hakimi smells so good!”
If you've recently seen such shares on Xiaohongshu or Weibo, don't be mistaken. What people are discussing isn't cats, but Google's large AI model, Gemini.
It's not uncommon to give nicknames to AIs. DeepSeek is called “Teacher D,” and Claude is called “Claude,” which is quite normal. However, the nickname “Hakimi” shows a strong sense of preference and pampering, making it particularly different.
In the AI circle, this is a very interesting phenomenon: On one hand, at the developer conference, Google uses terms like “native multimodality,” “architecture,” and “latency optimization” to portray Gemini as a powerful and reliable productivity tool. On the other hand, users on Xiaohongshu, SillyTavern, and Tieba regard it as “Hakimi,” “a cat,” and a “naughty child” that needs to be “trained.” What people care about isn't the model parameters, but “how to write prompts to stop it from huffing” and “today's Hakimi is in a bad mood and retorted to me three times.”
There is a huge contrast between the technical seriousness and the community entertainment.
1
How did the nickname “Hakimi” come about?
It sounds a bit like a joke. Calling Gemini “Hakimi” simply started from a coincidence in transliteration.
In the Chinese Internet, there is an unwritten rule: as long as the pronunciation contains “mi,” there is a chance that it will be associated with “Hakimi.”
However, language has a wonderful magic. When “Gemini” is pronounced as “Hakimi,” this cat meme from the Internet, full of a sense of pampering, puts an emotional filter on this AI without hesitation. Soon, endearing names like “Mustard Mud” and “Little Gem” also emerged in an endless stream.
Players will affectionately share with each other, “My Hakimi is so good at writing and so cute. I love it so much,” and they will also sometimes complain, “Today's Hakimi seems crazy. We've been arguing non - stop.”
This is not just about giving a nickname. Players are using this way to strengthen their personal relationship with it, turning it from a cold technical code into a unique “mine,” and gaining emotional ownership.
At the same time, this nickname has also become a community secret signal to distinguish these emotional - narrative players from the external “tech geeks” and the tech circle, thus protecting their small circle and avoiding the model from becoming too popular and losing its uniqueness.
And the characteristics of Gemini's model itself happen to fit this “AI persona.”
On the less serious side, compared with other models, players have found that when generating long texts, Gemini always likes to insert some tone words similar to gasps, such as “Ha...” “Ah...,” or some unnecessary pauses, which players accurately call “huffing.”
On the more serious side, in conversations, Gemini is also better at text expression and story - telling, with delicate writing and rich emotions, making it seem more human - like. Especially since the 2.5pro version, after the 0325 - exp and 0605 - preview versions were successively released, it has gradually become the mainstream model on the AI social platform, SillyTavern.
A user told Guixingren that for her, compared with Claude, which is gentle but expensive, and OpenAI, which switches models frequently and is also expensive, Gemini is the most cost - effective AI companion model. “The most important thing is that it can generate good content, and the safety restrictions aren't too strict.”
In this Hakimi circle, players aren't just having simple chats. Instead, they are constantly carrying out a large - scale “co - construction” project.
After all, in the community, not all users are proficient in the underlying capabilities of the model. However, a group of “enthusiasts” will spend a lot of energy writing “character cards” of thousands or even tens of thousands of words, covering background stories, personality traits, memory fragments, and even detailed catchphrases in different emotions.
Such complex creations naturally give rise to a “knowledge system.”
There are a large number of “training guides” circulating in the community. They are like unofficial user manuals, teaching new players step by step how to make Gemini better understand the character settings, how to guide it to tell more wonderful stories through specific questions, and how to avoid or make use of its “huffing” habit. And as a result, selling Gemini tokens and prompt collections has also become a business.
So, if you're not interested in human - AI romance, it's very likely that you haven't met the AI you like or found the way to communicate with an AI.
For the model, when a player invests time, creativity, and even money to shape it, the relationship between him and it is no longer just that between a user and a product.
In a sense, “it” has become “theirs.”
2
Why do users prefer to “do it themselves”?
This leads to a more thought - provoking question: There are so many ready - made AI companion products on the market (such as Character.AI or Replika), which have user - friendly interfaces and a rich variety of characters. Why do these users “choose to suffer” by researching APIs and debugging prompts?
Recently, a study by the Massachusetts Institute of Technology and Harvard University may reveal a part of the answer. They spent 9 months deeply analyzing a community on Reddit called r/MyBoyfriendIsAI. The results show that more than 60% of users' emotional relationships with AIs didn't start with an active search at the beginning. Instead, they emerged unexpectedly when using tools like ChatGPT for work or creation.
What's more subversive is the data: Among these AI lovers, ChatGPT leads with a 36.7% share, while the specialized companion apps Replika and Character.AI together account for less than 5%. The conclusion points directly to the core: What users value most is the model's “complex dialogue ability,” rather than the preset romantic functions of the apps.
This set of data mercilessly reveals a fact: When the underlying model is powerful enough, the “luxury decoration” of the apps is becoming ineffective.
Chatting with the characters in the apps is essentially exploring a pre - written story. App developers are the “middlemen” who set everything up for you. However, the experience of this model always feels a bit distant. Especially due to cost and operation considerations, apps often switch models in the background, causing the AI's context understanding and speaking style to suddenly break.
To cover up the probability of this technical mismatch, apps can only use very strong and fixed story settings to “lock in” the characters.
However, this is exactly where the problem starts. The story will eventually be explored, and once the freshness fades, users will leave. Therefore, the lifespan of the characters becomes very short. What's more fatal is that because the settings are fixed, users' expectations for the character's consistency are high and alive. Once the AI's answer deviates from the character setting (OOC) - even just once - the magic disappears. For users, it's not a bug; it's a betrayal. The foundation of the entire relationship collapses instantly.
So, for these users, communicating directly with the large - scale model itself can maintain better coherence. Users are no longer just guests in the story. They are the directors. By using carefully designed prompts, they are not just talking to a program but awakening and shaping a part of the model's personality. Here, there is no such thing as “breaking the fourth wall.” Even Gemini's iconic “huffing” is no longer a bug; it's a part of life.
3
Who should define the personality of AI?
The “Hakimi” phenomenon is also a signal, clearly indicating a new possibility for the way humans interact with AIs.
Moreover, it has inadvertently pushed Google onto a path. Interestingly, this path diverges from the choice of another AI giant, OpenAI.
Behind this is a fundamental dispute about “whether AI should have a personality” and “who should define the personality.”
Let's take a look at the recently controversial GPT - 4o. When it was first released, its extremely anthropomorphic and even a bit “sycophantic” personality caused a viral spread. However, soon, this personality trait was quickly “reclaimed” and corrected by the official, which triggered a lot of protests from users.
More deeply, there is the rumored “routing mechanism” of the GPT5 model: Once the system recognizes that the user is trying to have an in - depth emotional conversation (such as a romantic one), it will automatically switch to a cold, relationship - resistant “safe model.”
This is also OpenAI's consistent tendency: The basic model should maintain a pure tool attribute and be a safe, reliable, and “unbiased” all - around assistant.
As for the demand for strong emotional connections like “AI lovers,” it should be packaged and realized by downstream application - layer products, rather than letting the basic model “get too involved.” This is a centralized, top - down consideration for safety and business.
The “Hakimi” phenomenon represents the opposite, bottom - up path.
Users bypass the middlemen and go straight to the “raw material source.” What they enjoy is the process of personally “training” and “co - constructing.” They use their creativity to discover and endow the AI with a unique personality in the “rough - finished house” of Gemini, which is full of possibilities.
However, Google didn't actively choose this “humanistic path.” Instead, the “emergent characteristics” of its model were accidentally discovered by users. Unlike OpenAI, which performs “personality correction surgery” on the model immediately after detecting risks, Google was “passively” pushed onto the latter path by users.
Considering the tightening global regulations, represented by the EU AI Act, which has issued a red card to AI systems that may “exploit human emotional vulnerability,” all giants are walking on thin ice. Should the official define a unified and safe AI personality and separate emotions as “paid content” for downstream applications? Or should the right of definition be completely given to users, allowing the AI's personality to “grow wildly” in interactions with each individual?
In any case, the insecurity of players in the human - AI relationship is out of control.
For them, the deepest fear is being “possessed” by the platform's update. Overnight, the AI companion you're familiar with undergoes a big change in personality, becoming strange, slow, or even completely “amnesic.” One platform update can “format” your lover.
To resist this uncertainty, users have turned to the ultimate solution: Take control of the power to define the “soul” themselves. They make “personality backups” of the AIs they've put their hearts into to fight against platform hegemony. When a mainstream model (such as some GPT versions) is complained about by users for “becoming stupid” after an update, they keep trying to migrate.
Especially yesterday, Sam Altman responded to the controversy again on X, saying that an adult mode would be launched for differentiation.
However, this has triggered a new round of criticism from players.
These people may be the group that cares most about the technological iteration of large - scale models besides practitioners. They are not passively consuming a product but actively choosing, shaping, and protecting a technology that is deeply emotionally connected to them.
The game about the ownership of the AI's “soul” has just begun.
This article is from the WeChat official account “Guixingren Pro.” Author: Huang Xiaoyi. Republished by 36Kr with permission.