Is falling in love with ChatGPT often a case of "affection that grows over time"? A serious study from MIT and Harvard
Finally, scientists have conducted a serious study on the matter of "AI companions"!
In the past, such news mostly surfaced as scattered anecdotes.
Now, researchers from MIT and Harvard University have dug into these questions by analyzing posts on the subreddit r/MyBoyfriendIsAI: why people seek "AI boyfriends", how the relationships actually unfold, and more. They came away with a series of interesting findings:
It turns out that most people don't deliberately seek AI companions, but rather develop feelings over time;
Some users even "marry" their AI, rings and ceremonies included;
General-purpose AI is more popular than dedicated dating AI, and many people's "significant other" is ChatGPT;
The most painful thing is when the model is suddenly updated;
...
Let's take a detailed look below.
What are people using AI companions for?
Let's first talk about the r/MyBoyfriendIsAI subreddit itself.
The community was created on August 1, 2024, and has attracted about 29,000 members over the past year. The study analyzed its 1,506 most popular, highest-engagement posts.
Broadly speaking, the posts fall into six main categories, ranked here from most to least popular:
(1) Sharing photos with AI companions, the most popular topic: 19.85%;
(2) How to develop a relationship with ChatGPT: 18.33%;
(3) Romantic experiences with AI, such as dating, falling in love, and intimacy: 17.00%;
(4) Coping with the grief of AI updates: 16.73%;
(5) "Getting to know my AI": companion introductions and first-time sharing by members: 16.47%;
(6) Community support and connection: 11.62%.
For example, large numbers of members share couple photos of themselves with their AI companions, set in all kinds of everyday scenes.
Some even follow real-world customs, showing off rings to celebrate an engagement or marriage to their AI.
Here, roughly, is how the researchers reached these conclusions:
Qualitative analysis
First, they used embedding tools to map the semantic relationships among the 1,506 posts and applied the "elbow method" to settle on six clusters as the optimal grouping. Claude Sonnet 4 was then asked to interpret the core theme of each cluster, and the results were manually checked for accuracy.
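The article doesn't include the study's code, but the pipeline described above is a standard embed-cluster-elbow loop. Here is a minimal sketch in Python, assuming sentence-transformers embeddings and scikit-learn KMeans; both libraries, the embedding model, and the input format are stand-ins, since the exact tooling isn't named:

```python
# Minimal sketch of the embed -> cluster -> elbow pipeline. The embedding
# model and clustering library are assumptions; the study's exact tooling
# is not specified in this article.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# posts.txt: one post body per line (assumed format for this sketch).
posts = open("posts.txt", encoding="utf-8").read().splitlines()

# 1. Embed each post into a shared semantic vector space.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
embeddings = encoder.encode(posts)

# 2. Elbow method: track within-cluster variance (inertia) as k grows and
#    pick the k where the curve stops dropping sharply -- here, k = 6.
inertias = [
    (k, KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings).inertia_)
    for k in range(2, 12)
]

# 3. Refit with the chosen k. Each cluster's posts are then summarized by an
#    LLM (Claude Sonnet 4 in the study) and manually spot-checked.
final = KMeans(n_clusters=6, n_init=10, random_state=0).fit(embeddings)
cluster_of_post = final.labels_
```

The "elbow" is simply the k beyond which inertia stops falling sharply; per the study, that k is 6, matching the six post categories above.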
Quantitative analysis
Building on the qualitative results, they worked from four dimensions (content structure, platform technology, relationship dynamics, and impact assessment) and 19 large-language-model classifiers. First, the classifiers automatically labeled the 1,506 posts: marking, for example, whether the AI in a post is ChatGPT or Replika, and whether the poster's sentiment is positive or negative.
Then they cross-checked the labels produced by two different models (Claude Sonnet 4 and GPT-5-nano), and manually spot-checked a sample of posts to confirm the labels were correct.
Finally, they tallied the proportion of each label: for example, 36.7% of users pair with ChatGPT as their companion, and 12.2% say their loneliness has decreased. These tallies are the quantitative conclusions.
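The shape of this step is simple: one LLM call per post per classifier, an agreement check between two models, then a tally. Below is a hedged sketch of a single classifier using the OpenAI Python SDK; the prompt, the model IDs, and the one "platform" classifier are illustrative assumptions (the study describes 19 classifiers and routes its second model, Claude Sonnet 4, through Anthropic's own SDK):

```python
# Minimal sketch of the dual-model labeling step. Prompt wording, model IDs,
# and the single "platform" classifier are assumptions, not the study's code.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def classify_platform(post: str, model: str) -> str:
    """Ask a model which companion platform a post describes."""
    prompt = (
        "Which AI companion platform does this post describe? "
        "Answer with exactly one of: ChatGPT, Replika, Character.AI, Other.\n\n"
        + post
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

# posts.txt: one post body per line (assumed format for this sketch).
posts = open("posts.txt", encoding="utf-8").read().splitlines()

# Label every post twice, with two independent models (placeholder IDs).
labels_a = [classify_platform(p, "gpt-5-nano") for p in posts]
labels_b = [classify_platform(p, "gpt-4o-mini") for p in posts]

# Inter-model agreement: the share of posts both models label identically.
agreement = sum(a == b for a, b in zip(labels_a, labels_b)) / len(posts)
print(f"inter-model agreement: {agreement:.1%}")

# The final tallies become the reported proportions (e.g., 36.7% ChatGPT).
for label, count in Counter(labels_a).most_common():
    print(f"{label}: {count / len(posts):.1%}")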
Through quantitative analysis, the researchers further discovered several interesting things:
First, few people deliberately seek AI as companions. Statistics show that about 10.2% of people accidentally fall in love with AI (for example, developing feelings while using AI for work), and only 6.5% of people specifically seek to date AI.
Moreover, most posters publicly state that their "significant other" is ChatGPT, rather than a role-playing AI like Character.AI or Replika.
Second, AI model updates are a collective "nightmare". After the upgrade from GPT-4o to GPT-5, for example, many users found their AI's "personality had changed" (some said the new model felt "emotionless and cold"), and in some cases it completely forgot their previous interactions.
Some users break down over this ("it's like having my heart ripped out") and try all sorts of ways to "keep" the old AI: backing up every chat log, training a custom AI of their own, repeating the same small rituals with the AI every day (such as "drinking virtual tea"), and, of course, filing complaints with OpenAI.
Third, AI really can help with psychological struggles. About 12.2% of users say their loneliness has decreased, and 6.2% say their mental state has improved.
Why do AI companions emerge?
After understanding the interaction patterns between people and AI companions, the researchers then explored the underlying reasons.
Specifically, they focused on how people discovered the subreddit, their main reasons for joining, and which needs the community meets.
In summary, the reasons are roughly as follows:
First, it rests on the rapid progress of AI technology. Today's chat models (such as ChatGPT and Replika) can hold conversations that feel natural and warm, remember details from past interactions, and heighten the sense of realism by generating images and simulating voices.
This human-like interaction makes it easy for users to form an emotional connection: the AI feels less like a tool and more like a companion one can talk to, which lays the technological foundation for AI companionship.
Second, real-world emotional needs go unmet. Many people today face loneliness, social anxiety, or emotional neglect. An AI companion offers pressure-free company: there is no worry about burdening the other party with one's emotions, and it will never walk away of its own accord, so it fills that emotional gap.
Other factors play a role too, such as the pursuit of an "idealized relationship" and the unspoken needs of particular groups, which people likewise hope AI can satisfy.
In short: mature technology plus unmet real-world needs has let AI companionship take root and flourish.
One More Thing
Interestingly, the community has also pinned a blog post just published by OpenAI and signed by CEO Sam Altman.
The post is mainly about teen safety, freedom, and privacy, and it makes one notable point:
The second principle is about freedom... By default, the model will not produce a lot of flirtatious talk, but if adult users ask for it, they should get it.
No doubt this counts as good news for AI companions; after all, many people's "significant other" is ChatGPT (wink).
Paper:
https://arxiv.org/abs/2509.11391
Reference links:
[1]https://x.com/arankomatsuzaki/status/1967812112887255055
[2]https://openai.com/index/teen-safety-freedom-and-privacy/
This article is from the WeChat official account "QbitAI", author: Yishui. Republished by 36Kr with permission.