
The video "Rabbit Trampoline" has been played over 500 million times. The internet's most popular AI video owes its success to humans' love of being "deceived".

ifanr (爱范儿), 2025-08-04 15:37
The rabbits aren't really on the trampoline, but the people really are in the short video.

A video of "rabbits on a trampoline" that looks like it was captured by night-vision surveillance has gone viral on TikTok, garnering 500 million views across the internet.

The video seems to have been captured by a home security camera. The lighting is dim and the picture is blurry, but it happens to catch several rabbits taking turns jumping, as if putting on a nighttime show.

The title of the video reads: "Just checked the home surveillance. I think we've got some special guests in our backyard! @Ring".

The blurry surveillance footage, and the rabbits seemingly partying in it, quickly caught people's attention: cute, and just realistic enough.

A screenshot of the video data, which received 25 million likes on TikTok

@Greg, a celebrity with millions of followers on the social media platform X, also commented, "I never realized I needed a bunch of trampolining rabbits until today."

However, this cuteness is fake. The rabbits in the video don't actually exist. Someone discovered that it was AI-generated.

Between the 5th and 6th seconds, the rabbit in the top-left corner suddenly "disappears". Looking back, the details do seem a bit odd.

But unlike most "AI blooper" videos, hardly anyone recognized it as fake at first glance. Even young people with plenty of video-watching experience exclaimed, "Oh no, I've actually been deceived."

But this is not a scam. It's more like a small-scale social media disaster: not "we were deceived", but "we were actually willing to be deceived".

Just blurry enough, and it "deceives" just right

The reason this AI video managed to "deceive" the public is not mainly that AI video-generation technology has reached perfection, but that the video "deceives just right".

It precisely exploits our ingrained impressions of surveillance footage, and hits every traffic-attracting trigger that makes us lower our guard.

The blurry night-vision quality and static background happen to cover up the weaknesses of AI

We are used to night surveillance footage being blurry, dark, and full of noise. This preconception perfectly covers up the technical flaws of AI video, such as problems with motion continuity, shadow details, and background dynamics that tend to give away AI generation.

So when it appears as a "night surveillance video", the low-resolution, blurry quality actually becomes a distraction, covering up the lack of realism.

In addition, although some AI video-generation models are quite good at handling foreground subjects, their rendering of the background often looks very surreal.

And the background of this video is static, which spares the AI yet another technical difficulty.

The caption with '@Ring' enhances the credibility of the source

The video's publisher cleverly tagged the home security camera brand "Ring" in the title, which immediately made the source of the video seem well-founded and more believable.

Ring is a smart home security company

This small detail creates the illusion that "this video was captured by someone's doorbell camera", making people automatically classify it as a "life record" rather than "creative content".

'Animals causing trouble at night' is a meme that internet users accept by default

Countless viral videos have trained us to believe that this kind of scene is real: cats stealing instant noodles at night, raccoons breaking into swimming pools, coyotes playing on trampolines. Animals always seem to "break the rules" when humans aren't around, so rabbits on a trampoline sounds quite plausible.

Most importantly: It's so cute! Who would question such a heart-warming scene? When the content is sweet and light enough, it's easy for us to "choose to believe".

Although the sudden disappearance of the rabbit in the top-left corner mid-video reveals its AI-generated nature, for most short-video viewers who scroll quickly, this momentary flaw is easily overlooked.

While the rabbit video was causing a stir, Elon Musk also shared the amazing progress of AI video technology.

Ten days ago, it took 60 seconds to render a 6-second video; then it dropped to 45 seconds, then 30 seconds, and now it has been shortened to 15 seconds.

We may get it under 12 seconds this week.

He also said that real - time video rendering technology is expected to be achieved within 3 to 6 months.

A screenshot of Elon Musk's post on X

This means that the blooper scenes like the "disappearing rabbit" that we can still see today may be almost impossible to detect in a few months.

When AI videos are technically flawless, discussing "how to distinguish between real and fake" loses its meaning.

This also forces us to shift our focus from the technology itself to more core issues.

What deceives us and whips up the frenzy isn't actually AI

After the truth of the video was revealed, many users expressed a feeling of "shattered beliefs".

A TikTok user said, "This is the first AI video I believed was real. I'm doomed when I'm old." Another user said, "Now I think I'll be that kind of old person who gets deceived in the future."

This emotional shift from confidence to panic has become a new online hot topic.

However, simply blaming the problem on "AI developing too fast" or "us being too easily deceived" may overlook deeper reasons. The core of this incident may lie not in AI technology itself, but in how social media platforms operate.

Scrolling through the video's comments, we found that people's reactions almost follow the same psychological script.

A screenshot of some comments on TikTok

First, "Oh my god, this is so cute!"

Then, "Wait, something seems off?"

Third, "Was I deceived? Oh no, am I going to be that kind of old person who gets deceived?"

Finally, it still comes back to, "But... I can't even blame it."

We are establishing a new kind of "interaction logic" with AI videos.

We don't fully believe it; we assume by default that it might be fake. Yet we're still willing to stop and watch, give it a like, and forward it to friends to guess, like a game.

The recommendation system of short-video platforms

And the platform's algorithm understands this psychological structure very well.

In this process, "whether the AI video is real or fake" is no longer the point. It's more like a ticket to participation: Did you get it? Can you tell the difference? Were you fooled?

In the two years since AI exploded onto the scene, we've marveled, with some panic, at how realistic AI-generated videos and images have become, worried that we'll be more easily taken in by false information.

However, the viral spread of the "rabbits on a trampoline" video is not entirely due to the "deceptiveness" of AI technology, but due to the human audience's deep-seated need to "be deceived".

Not all of these netizens were passively deceived. Many of them actively and tacitly participated in a collective game called "pretending to believe".

The protagonist of this frenzy is not AI, but humans themselves.

It was the fleeting disappearing-rabbit bug in the video that upgraded the whole event into an internet-wide game of "spot the difference". If the video were perfectly seamless, it might just have been quickly buried by the next video.

The movie "The Prestige"

It's like how the audience knows that the magician is "deceiving" them, but what they enjoy is precisely the cognitive challenge of "knowing it's fake but not being able to spot the flaw".

The "blooper" of the AI rabbit is the moment when the magic is exposed, which makes everyone join the discussion and thus triggers the spread.

Flaws create controversy, and controversy drives participation. The authenticity of the video is no longer important. The chaos and discussion it triggers are the guarantee of traffic.

This self-deprecating "I've actually been deceived" quickly shortens the psychological distance between strangers and forms a sense of community identity: "we're all easy-to-deceive fools". The social value generated by "being deceived together" is far greater than the authenticity of the video content itself.

Ideally, we should learn to consciously enjoy the fun this "fake content" brings while maintaining a clear-headed awareness, but this may not be easy for most people.

The potential danger lies not only in AI's realism, but in the moment this "collective deception" is used maliciously, for example to create rumors or scams. What we need to build is a sense of an information's "intention", rather than just a judgment of its "authenticity".

We can ask ourselves more often: What kind of feeling does this content want me to have? What does it ultimately want me to do?

This article is from the WeChat official account "APPSO". The author is APPSO, which discovers tomorrow's products. It is published by 36Kr with permission.