
You can't stop swiping short videos because you "don't know what to do".

New Weekly · 2026-04-05 16:40

In the spring of 2026, a lobster stirred up the entire Internet. From its viral spread on social media and office workers' rush to install and try it, to the successive bans issued by universities and securities firms, the whole arc took only a few weeks. This AI application, hailed as the "new favorite", has captured more traffic than anything else on the AI track since the start of the year.

Some people are addicted to its "understanding of you", while others are vigilant about what it is stealing. These two emotions constitute the two sides of the current relationship between humans and algorithms.

Hu Yong is a professor at the School of Journalism and Communication at Peking University and one of the earliest scholars in China to study the Internet. In his eyes, from the earlier metaverse to today's large models, and on to the recent "lobster phenomenon", landmark events in Internet technology tend to erupt in clusters between November and March of the following year: "Society fills with big words like 'revolution', 'disruption', and 'destruction', unsettling everyone."

This time, the lobster's popularity, and the controversy that followed, happened to fall in March, a season when people ought to have been calming down. What anxieties lie beneath the hype?

At the 10th Shenzhen-Hong Kong Bi-City Biennale of Urbanism\Architecture (Shenzhen), Hu Yong exhibited an installation that resembles "a person trapped in the earth". The figure looks like Rodin's "The Thinker", but it is actually built from stacked chips. "It is pretending to think, while not thinking at all."

Hu Yong's first crossover installation work, "The Mirror of Human Entrapment". (Photo / TAL+ Bai Yu)

Hu Yong named the work "The Mirror of Human Entrapment". He believes that, in essence, human inventions such as the metaverse and large models are all ways of talking to ourselves. We are addicted to the mirror world, just as Hamlet says in Shakespeare's play: "O God, I could be bounded in a nutshell and count myself a king of infinite space, were it not that I have bad dreams."

Behind the installation lie his research and daily observation. In interviews with Douyin users, Hu Yong found that many people watch short videos not because they "want to do something", but because they "don't want to do something" or "don't know what to do". He calls this viewing situation "active passivity": people could use short-video apps to search for information or to socialize, but instead they actively hand the right to choose content over to the algorithm.

From fortune-telling and metaphysics to everyday general knowledge, large language models can provide "instant" answers. Hu Yong found that when users' personal preferences are fed to the algorithm and turned into "public-domain resources", intertwined with the platform's commercial interests, robots come to "understand" people better and better, while a privacy crisis follows close behind.

It is hard to imagine that Hu Yong, whose work now locks a hollowed-out "person" inside the earth, was a "technological optimist" 30 years ago. In 1996, he translated "Being Digital", by the American technologist Nicholas Negroponte, into Chinese. The book predicted a rosy future of technological equality and a connected global village, and once sat on the desks of China's Internet entrepreneurs.

Hu Yong was once very optimistic about technology; today, he no longer is. (Photo / Provided by the interviewee)

But in 2016, in the preface to the 20th-anniversary edition of "Being Digital", Negroponte wrote: "Nationalism is on the rise, regulations are escalating, and the gap between the rich and the poor is widening... Globalization has become localization. Although the intellectual, economic, and electronic backbone facilities have achieved rapid growth, ubiquitous digitalization has not brought about world harmony."

Against this backdrop, in Hu Yong's view, the field of communication studies is gradually shifting from studying isolated media content or technological effects to examining the entanglement of people, technology, and emotion in a new social context. In class, he talks more philosophy with his students and reflects on the ethics of technology. In life, after caring for a relative with Alzheimer's disease, he has also turned to questions of how people age and how to face death.

Hu Yong believes these two strands of his work are connected: both prompt us to ask what it means to be human in an age of risk, when technology cannot be wholly good.

The following is a dialogue between "New Weekly" and Hu Yong.

After the intervention of algorithms, the boundary between public and private is further blurred

"New Weekly": In "The Clamor of Voices: Personal Expression and Public Discussion in the Internet Age", published in 2008, you discussed the shifting relationship between the public and private spheres in the Web 2.0 era: the public had become an illusion, and the private boundary was constantly being blurred. In today's era, when major platforms compete on AI algorithms, has this trend been amplified further? What new changes have emerged?

Hu Yong: In 2008, the industry had no algorithmic recommendation of the kind we see now. Eighteen years later, with the support of algorithms, the boundary between the public and private spheres has been blurred far more severely, and the distinction between the two has been further weakened. First, so-called integrated algorithmic recommendation starts by collecting data on users' private preferences. Your private interests, your interactions with friends, your browsing history, even your likes and shares, all become material for feeding the algorithm. As a result, individuals' private-domain behaviors are projected into a public or semi-public content stream.

Second, in the Web 2.0 era, social media was in its infancy. As it grew and strengthened, it incorporated algorithmic recommendation, and, more importantly, commerce moved in. Social and commercial activity are now superimposed; the most typical examples are short video and live-streaming e-commerce. Everything you do serves not only your personal social life but also platform traffic and advertising sales. Private preferences no longer belong only to you; they have become part of the platform's value production. In other words, people's attention has become tradable.
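The mechanism Hu Yong describes, interactions quietly becoming training signal for the feed, can be sketched in a few lines of Python. This is a deliberately toy content-based recommender; the videos, tags, and weights below are invented for illustration, not taken from any real platform:

```python
from collections import Counter

# Toy catalog: each video carries a set of content tags (all invented).
catalog = {
    "v1": {"cooking", "shorts"},
    "v2": {"gaming", "shorts"},
    "v3": {"cooking", "asmr"},
    "v4": {"news"},
}

def update_profile(profile, item_tags, weight):
    # Every interaction feeds the algorithm: a like (weight 2)
    # teaches it more than a passive view (weight 1).
    for tag in item_tags:
        profile[tag] += weight

def rank_feed(profile, catalog):
    # Score each video by how strongly its tags overlap
    # with the inferred preference profile, then sort the feed.
    score = lambda tags: sum(profile[t] for t in tags)
    return sorted(catalog, key=lambda vid: score(catalog[vid]), reverse=True)

profile = Counter()
update_profile(profile, catalog["v1"], weight=2)  # user liked v1
update_profile(profile, catalog["v3"], weight=1)  # user watched v3
feed = rank_feed(profile, catalog)
# Cooking content now dominates the top of the feed, and unrelated
# content ("news") sinks to the bottom, without the user ever asking.
```

Even in this minimal sketch, the private act of liking one video reshapes what everyone-facing surface the user sees next, which is the sense in which private preference becomes the platform's raw material.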

(Photo / pexels)

The frightening thing is that, as technology embeds itself ever deeper, we are helpless, and sometimes we even find it convenient and grow used to it. If you really stop and think for a moment, you will find that all your information has effectively been made public. The privacy violations by large models that have come to light so far, for example, are only the beginning.

"New Weekly": Large models collect personal data, and algorithms cater to individual preferences, making people ever more "comfortable". Will this, in turn, deepen antagonism between groups? For example, is there an algorithmic hand behind the intensifying gender antagonism online in recent years?

Hu Yong: "The Clamor of Voices" carries the subtitle "Personal Expression and Public Discussion in the Internet Age". Technology inevitably brings change at both the individual and the group level, and the circles formed by different groups will clash intensely. Within a circle, mechanisms such as the "echo chamber" and the "filter bubble" reinforce internal homogeneity. Sociologists call this the "in-group" and the "out-group": algorithmic recommendation polarizes the in-group and hardens hostility toward the out-group.
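The feedback loop behind the echo chamber can be illustrated with a small simulation. This is a toy model, not a claim about any real platform: opinions live on a [-1, 1] axis, the "feed" mostly shows each user content aligned with their current leaning, and each exposure nudges the opinion further toward what was shown. All parameters (match bias, pull strength, the Gaussian content distribution) are invented for illustration:

```python
import random

random.seed(0)  # deterministic toy run

def simulate(users, rounds=200, match_bias=0.9, pull=0.05):
    """Toy filter-bubble loop: recommendation follows opinion,
    and opinion then drifts toward what was recommended."""
    opinions = list(users)
    for _ in range(rounds):
        for i, op in enumerate(opinions):
            # With probability match_bias, show content from the user's
            # own side of the axis; otherwise show the opposite side.
            same_side = random.random() < match_bias
            strength = abs(random.gauss(0.8, 0.1))  # fairly strong content
            content = strength if (op >= 0) == same_side else -strength
            # Exposure pulls the opinion a small step toward the content.
            opinions[i] = max(-1.0, min(1.0, op + pull * (content - op)))
    return opinions

start = [-0.1, -0.05, 0.05, 0.1]   # four mildly-opinionated users
end = simulate(start)
# Mild initial leanings drift toward the extremes: the feed keeps
# confirming each user's side, so moderate positions do not survive.
```

The model is crude, but it captures the two-sided dynamic Hu Yong names: homogeneity inside the circle grows precisely because exposure to the out-group shrinks.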

(Photo / pexels)

Take platforms like Xiaohongshu, with their large base of female users. Many women use specific tags to create a free, burden-free space for communication, whether to avoid the so-called "male gaze" or to build a certain barrier. An in-group is thus established. Its advantage is a shared worldview, a shared order of life, and a sense of intimacy; the accompanying cost is deepened prejudice against the out-group. In some cases, you may even be labeled a "misandrist". These are two sides of the same coin.

It is fair to say that inside the information cocoons algorithms create, "birds of a feather flock together" more than ever. You may even find that different groups now occupy different platforms: where male users dominate, topics like "high bride price" and "feminazi" are widely discussed; where female users predominate, topics like "phoenix man" and "scumbag" draw just as much discussion.

Farewell to the age of innocence: some people are "fighting against algorithms"

"New Weekly": From a communication-studies perspective, why do users become addicted to algorithm-recommended content? Can people "fight against algorithms"?

Hu Yong: In our earlier research on Douyin users, we found many people watching short videos in a state of "immersive gaze". On one hand, this is the result of technical design: each video is very short, often with eye-catching highlights, and when one finishes it loops endlessly unless the user manually swipes to the next. On the other hand, the immersive effect also stems from usage habits. Our interviewees watched Douyin not because they "wanted to do something", but because they "didn't want to do something" or "didn't know what to do", a viewing situation of "active passivity". In that situation, the ubiquitous audio-visual stimulation makes people feel they are encountering something new, so they keep swiping.

(Photo / pexels)

Therefore, the core of "fighting against algorithms" is to break out of this situation and turn algorithmic recommendation back into a tool, something that merely serves your primary needs. I am not advocating rejecting technology; I want to transform how individuals access information and their ability to control algorithms. I call this the "anti-embedding of technology". Even if we cannot "defeat" the platform, because its influence over us is overwhelming, that does not mean people are completely helpless before it.

"New Weekly": On "fighting against algorithms": when I see content in my mother's short-video feed that preys on the elderly, I click "dislike" for her. Some people deliberately choose other categories of content, or feed in assorted keywords to scramble the algorithm's "mind". Beyond these methods, do you have other suggestions?

Hu Yong: "Fighting against algorithms" shows up first in behavior. The examples you mention all weaken the algorithm's dominance through concrete action. From my own experience, let me add a few other angles. I get information through diverse channels: besides paid subscriptions and active reading, I have always been a loyal user of search engines. Many people think large models have made search engines replaceable, but I believe the quality of information a search engine provides is comparatively higher.

(Photo / pexels)

A core algorithm of search engines is PageRank, which ranks pages by how many other pages link to them. If a result ranks high, it is both authoritative in that sense and relevant to the keywords I typed, and it comes with a clear information source I can trace. I also have a fixed habit when searching: I always go as far as the fifth page of results, so that I get a more rounded understanding of the topic.
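The link-voting idea behind PageRank can be sketched in a few lines of Python. This is a minimal power-iteration version over an invented three-page web, included purely to make the "pages vote for pages" intuition concrete:

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=100):
    """Power-iteration PageRank.

    adjacency[i][j] = 1 means page i links to page j.
    Returns a probability vector: a higher score means more of the
    web's 'link votes' flow into that page.
    """
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    # Row-normalize: each page splits its vote evenly among its out-links.
    # Pages with no out-links fall back to a uniform 1/n row.
    out = A.sum(axis=1, keepdims=True)
    M = np.divide(A, out, out=np.full_like(A, 1.0 / n), where=out != 0)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = (1 - damping) / n + damping * (M.T @ rank)
        if np.abs(new - rank).sum() < tol:  # converged
            break
        rank = new
    return rank

# Pages 0 and 1 both link to page 2; page 2 links back to page 0.
links = [[0, 1, 1],
         [0, 0, 1],
         [1, 0, 0]]
scores = pagerank(links)
# Page 2 collects the most inbound votes, so it ranks highest.
```

The traceability Hu Yong values falls out of this design: a page's score is entirely determined by who links to it, so a high-ranking result carries a visible trail of endorsements rather than an opaque model judgment.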

Large models are different. They seem to give you plenty of material and viewpoints, but they may fabricate information, the so-called "AI hallucinations". As for platforms like Xiaohongshu, I do use them to search for food, travel, and entertainment, but for academic purposes I cannot see the information source or verify the qualifications of the people posting. In other words, the information cost is very high, because huge numbers of people are doing everything they can to capture your attention.

Beyond choosing my channels, I firmly believe in going out, taking part in offline activities, and meeting different kinds of people in various ways, what people call "physical presence" and the pursuit of a "sense of real people". Such activities can, to a degree, remove the algorithm from your cognitive life and open new paths of experience. You must keep a "non-computational space" in your life to reclaim your cognitive sovereignty; this is the essence of "fighting against algorithms" at the cognitive level. In the language of Brecht's theater theory, it is establishing an "estrangement effect" between people and the recommended results.

Busy urban people use weekends to get close to nature, soak up the sun, and relax. (Photo / Zhong)

Of course, beyond the behavioral and cognitive levels there may be a higher one: people with strong algorithmic literacy use professional tools to make their data harder to trace, to block certain things, or to restore things from the past.

"New Weekly": Humanity has entered the "post - truth" era. How should we re - establish the principles of information judgment and the imagination of digitalization?

Hu Yong: In the "post-truth" era, artificial intelligence can churn out false information around the clock, and it is very hard to tell apart. Rather than trying to determine the truth or falsehood of each piece of information, it is better to change the principle. There is