
Why has the "AI desk pet" with only 300,000 pixels sold like hotcakes?

ifanr · 2026-05-12 18:02
Does AI have to be another operating system? Why can't it be a new way of "existence"?

Back in 2024, the two most talked-about product launches in the tech circle were those of Humane and Rabbit. One created an AI badge that could be worn on the chest; the other made a small AI cube that could be carried in the pocket. The two products sparked a frenzy of excitement and imagination: AI hardware's moment in the spotlight seemed to have arrived.

However, the situation took a turn for the worse very quickly. The Humane AI Pin is priced at $699, plus a monthly subscription fee of $24. It has no screen and projects information onto the palm using a laser. A WIRED review used the word "catastrophic".

The Rabbit R1 is much cheaper, costing $199. It has a 2.88-inch small screen and features a "large action model" to help you operate apps on your phone. But reviewers found that they still had to take out their phones to finish tasks when using the R1.

Both products made the same mistake: they tried to replace the mobile phone. But the phone is not just a device; it is the container of your entire digital life. Your WeChat chat records, your banking apps, and the apps you use to order takeout, hail a ride, and scan to pay are all in it.

The app ecosystem built over more than a decade, combined with user preferences and password-related data, forms an extremely complete and personalized operating system. The cost of migrating to another mobile device is prohibitively high. Humane and Rabbit ask you to carry an additional device, but if this device can't hail a taxi or make a payment, you'll still have to take out your phone. Once you take out your phone, the other device becomes a burden.

Humane was later surrounded by acquisition rumors, and the popularity of the Rabbit R1 quickly faded. The first wave of AI hardware in 2024 almost ended in failure. People won't pay for a product that replicates what the phone can do with a smaller screen and slower response speed.

Don't Replace the Phone, but Stay Beside It

However, the story of AI peripherals is far from over. From 2025 to 2026, a batch of new products emerged. This new wave made a crucial directional adjustment: instead of trying to replace the device in your pocket, these products aimed to become a brand-new presence on the desktop.

LOOI is the most ingenious product in this shift. Its Kickstarter campaign in early 2025 drew 3,578 backers and raised over $510,000. LOOI doesn't have its own screen. Its design concept is simple and straightforward: your phone magnetically attaches to it.

In this way, the phone's screen becomes its face, the phone's camera becomes its eyes, and the phone's ChatGPT becomes its brain. LOOI only provides the "body", which is a mechanical base that can nod, shake its head, twist, and make bionic expressions.

It's not meant to be carried in the pocket but placed on the table. It doesn't aim to replace the phone or pursue comprehensive functions but focuses on immediate interaction. It costs $189.

Although it may sound like a high-end toy or a robot shell for the phone, LOOI did one thing right: it didn't try to be a complete AI device but recognized that the phone is the computing center, and its own role is to "give AI a physical presence".

The AI in the phone can already listen, speak, and see. LOOI adds the ability to "move". Human beings are more instinctively sensitive to physical movements than to text on the screen. A simple nod or a tilt of the head is enough to create the feeling of "interaction".

Razer took a different approach in terms of form. At CES 2025, Razer presented the prototype of Project AVA, which was positioned as an e-sports AI coach. At CES 2026, one year later, AVA evolved into a general AI desktop companion. By GDC in March 2026, AVA gained agentic capabilities, not only responding to your commands but also actively planning multi-step tasks.

AVA's most eye-catching feature is its hardware form: a 5.5-inch 3D holographic projection display. You can see a three-dimensional virtual character standing on your table without wearing VR glasses. It is equipped with dual far-field microphones, an HD camera, and an ambient light sensor, and it can track your eyes, read your expressions, and even view the content on your computer screen through the PC Vision mode.

You can choose different virtual avatars: the efficient AVA, the gaming-oriented KIRA, the strategic ZANE, and even avatars co-branded with e-sports player Faker and characters from the anime Sword Art Online (SAO).

AVA is currently accepting pre - orders with a refundable deposit of $20. It is expected to be shipped in the second half of 2026. The price has not been announced yet, but considering Razer's positioning, it is estimated to be expensive.

If LOOI's concept is to "give the phone a body", Razer's concept is to "give AI an image". The purpose of the holographic projection is not to display information but to make you feel that there is really a character standing on your table. The anime-themed avatars help here: users more readily accept a character they already know.

According to the official description, users tend to describe the feeling as "it's in the room" rather than "it's behind the screen". The 3D depth simulated by the curved screen, combined with real-time eye tracking that adjusts the character's perspective, makes AVA's virtual avatar seem to follow your gaze. The latest technology is thus put in service of the oldest need: having something beside you.

A Mashable CES report positioned it as "The AI soulmate for the lonely remote worker", a framing that leans on emotional value.

A Different Problem-Solving Approach

Last month, the "ultimate" product of this route emerged: StackChan.

Compared with the above three products, StackChan's specification sheet is rather "shabby": a 0.3MP camera, a 2-inch screen, a 550mAh battery, and an ESP32-S3 chip. There is no holographic projection, no curved OLED, and no eye tracking. It starts at $59 and weighs 187 grams, small enough to fit in your palm.

In 2026, it's quite audacious for a "robot" product to be equipped with a 0.3MP camera.

However, StackChan did something that the other products didn't. It opened itself up completely: the firmware is open-source, the hardware interface is open, and the development tools are open-source. In theory, you can write Arduino code to make it do anything, connect it to any AI model, and develop in any language.

The official factory firmware already includes AI dialogue, facial animations, ESP - NOW remote control, video calls via the mobile app, and online application downloads, but these are just the starting point. The Kickstarter page of StackChan states:

In an era filled with closed, concept-driven "AI robot" products, StackChan stands out with its open-source core.

StackChan also has a different history. Initially, it was not a company's product plan but a personal open-source project by Japanese developer Shinya Ishikawa.

The community has been involved for several years. Some people made DIY kits, some added AI capabilities, and some designed different shells. M5Stack finally productized it but retained its open-source, co-creation genes. On Kickstarter, 4,142 backers raised about HK$3.6 million; the initial goal was only HK$78,000, exceeded more than 45 times over.

This figure shows that people don't just want to buy an AI desktop robot; they want to buy an AI desktop robot that they can modify themselves.

StackChan's camera has a resolution of only 0.3MP, but this is by design. It's not that high definition is unattainable; it's that it isn't necessary. Low resolution means the local ML model can process video at a usable frame rate.
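A rough back-of-envelope sketch makes the trade-off concrete. Assuming a standard VGA sensor (640×480, the usual 0.3MP resolution; the article doesn't specify the exact sensor), each step up in resolution multiplies the per-frame pixel count, and with it the work a small on-device model must do:

```python
# Back-of-envelope: per-frame pixel counts at common camera resolutions.
# Assumes a 640x480 VGA sensor as the 0.3 MP baseline (not confirmed by the article).

def pixels(width: int, height: int) -> int:
    """Total pixels per frame."""
    return width * height

def relative_cost(width: int, height: int, base=(640, 480)) -> float:
    """Per-frame pixel count relative to the 0.3 MP VGA baseline."""
    return pixels(width, height) / pixels(*base)

vga = pixels(640, 480)            # 307,200 pixels -- the "300,000 pixels" in the title
hd_cost = relative_cost(1280, 720)    # 720p carries 3x the pixels per frame
fhd_cost = relative_cost(1920, 1080)  # 1080p carries 6.75x the pixels per frame

print(vga, hd_cost, fhd_cost)
```

On a microcontroller-class chip like the ESP32-S3, that multiplier translates almost directly into frame rate, which is why a deliberately low-resolution sensor keeps local vision responsive.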

In addition, its three Grove interfaces and LEGO-compatible holes mean that you can attach sensors, connect peripherals, and build modular structures. Maker communities in Japan and around the world have been making all kinds of modifications: some make it track faces and turn its head, some use it as a smart home control center, and some fit it with their own 3D-printed shells. A user on Reddit said that what impressed him was not StackChan itself but the "co-creation model".

3D-printed shells made by community members

Comparing StackChan with Razer AVA's holographic projection and Lepro Ami's 8-inch curved screen reveals an interesting contrast: AVA and Ami sell technology and immersion, while StackChan sells possibility. Buying AVA gets you a well-designed AI partner; buying StackChan gets you a canvas with infinite potential.

A Growing Market

Google said something memorable in an interview with The Verge: "The future of AI hardware