It's 2026. Is this AI desktop pet with only 300,000 pixels still in high demand?
Back in 2024, the two most talked-about product launches in tech circles came from Humane and Rabbit. One made an AI badge worn on the chest; the other made a small AI cube that fits in a pocket. The two products briefly sparked a craze and a wave of imagination: AI hardware's highlight moment seemed to be arriving.
However, the situation quickly took a turn for the worse. The Humane AI Pin cost $699 plus a $24 monthly subscription. It had no screen, projecting information onto your palm with a laser instead. A WIRED review used the word "catastrophic" to describe it.
The Rabbit R1 was much cheaper at $199, with a 2.88-inch screen and a "large action model" meant to operate your phone's apps for you. Yet reviewers found themselves pulling out their phones mid-task to finish what the R1 had started.
Both products made the same mistake: they tried to replace the phone. However, the phone is not just a device; it is a container for your entire digital life. Your WeChat chat history, bank apps, and the entrances to food delivery, ride-hailing, and mobile payment services are all in it.
The app ecosystem built over the past decade, combined with your preferences and saved passwords, forms an extremely complete and personalized operating system, and the cost of migrating to another device is prohibitively high. Humane and Rabbit asked you to carry an additional device, but if that device cannot hail a cab or make a payment, you eventually take out your phone anyway. And once you do, the other device becomes a burden.
Humane was later surrounded by acquisition rumors, and the Rabbit R1's popularity quickly faded. The first wave of AI hardware in 2024 ended in near-total failure. Simply replicating what the phone already does, on a smaller screen and with slower responses, attracts no one.
Not replacing the phone, but staying beside it
However, the story of AI peripherals is far from over. From 2025 to 2026, a new batch of products emerged. In this new wave, these products made a crucial directional adjustment: instead of trying to replace the device in your pocket, they aimed to become a new presence on your desktop.
LOOI is the most ingenious product in this shift. Its Kickstarter campaign in early 2025 attracted 3,578 backers and raised over $510,000. LOOI has no screen of its own. Its design concept is simple and direct: your phone magnetically attaches to it.
In this way, the phone's screen becomes its face, the phone's camera becomes its eyes, and the phone's ChatGPT becomes its brain. LOOI only provides the "body," which is a mechanical base that can nod, shake its head, twist, and make bionic expressions.
It is not meant to be carried in your pocket but placed on the table. It doesn't aim to replace the phone or offer comprehensive functions but focuses on providing an immediate sense of interaction. It costs $189.
Although it may sound like an advanced toy or a robot shell for your phone, LOOI got one thing right: it didn't try to be a complete AI device but acknowledged that the phone is the computing center and it only focuses on "giving AI a physical presence."
The AI in your phone can already listen, speak, and see. LOOI adds the ability to "move." Humans are more instinctively sensitive to physical movements than to text on a screen. A simple nod or tilt of the head is enough to create the feeling of "interaction."
Razer took a different approach in terms of form. At CES 2025, Razer presented the prototype of Project AVA, positioning it as an e-sports AI coach. At CES 2026 a year later, AVA evolved into a general AI desktop companion. By GDC in March 2026, AVA added agentic capabilities, not only responding to your commands but also proactively planning multi-step tasks.
What makes AVA stand out is its hardware form: a 5.5-inch 3D holographic projection display that allows you to see a three-dimensional virtual character standing on your table without VR glasses. It is equipped with dual far-field microphones, an HD camera, and an ambient light sensor, which can track your eyes, read your expressions, and even see the content on your computer screen through the PC Vision mode.
You can choose different virtual avatars: the efficient AVA, the game-oriented KIRA, the strategic ZANE, and even avatars in collaboration with e-sports player Faker and characters from the SAO anime.
AVA is currently accepting pre-orders with a refundable deposit of $20. It is expected to be shipped in the second half of 2026, and the price has not been announced yet. However, considering Razer's positioning, it is likely to be expensive.
If LOOI's concept is to "give the phone a body," Razer's concept is to "give AI an image." The purpose of the holographic projection is not to display information but to make you feel that there is a real character standing on your table. It also benefits from the popularity of anime characters, as users are more likely to accept an anime character they are already familiar with.
In the official framing, users describe the feeling as "it's in the room" rather than "it's behind the screen." The 3D depth simulated by the curved screen, plus real-time eye tracking that adjusts the character's perspective, makes Ami's virtual avatar appear to follow your gaze. The newest technology ends up serving the oldest need: having something beside you.
A Mashable report on CES described it as "The AI soulmate for the lonely remote worker": in other words, a product that leads with emotional value.
A different problem-solving approach
Last month, the "culmination" of this approach emerged: StackChan.
Compared with the three products above, StackChan's specifications are decidedly modest: a 0.3MP camera, a 2-inch screen, a 550mAh battery, and an ESP32-S3 chip. There is no holographic projection, no curved OLED, no eye tracking. It starts at $59 and weighs 187 grams, small enough to fit in your palm.
It's hard to believe that in 2026, a "robot" product is equipped with a 0.3MP camera.
However, StackChan did something the other products didn't: it opened itself up completely. The firmware, hardware interfaces, and development tools are all open-source. In theory, you can write Arduino code to make it do anything, connect it to any AI model, and develop in any language.
The official factory firmware already includes AI dialogue, expression animations, ESP-NOW remote control, video calls via the mobile app, and online app downloads, but these are just the starting point. StackChan's Kickstarter page says:
In an era filled with closed, concept-driven “AI robot” products, StackChan stands out with its open-source core.
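To make the idea of an open core concrete, here is a purely illustrative Python sketch of the kind of custom behavior an open robot invites: mapping an AI assistant's reply to a facial expression. Every name here is invented for this example; StackChan's actual firmware is Arduino/ESP32-based and defines its own interfaces.

```python
import re

# Hypothetical keyword sets for crude sentiment detection.
HAPPY_WORDS = {"great", "thanks", "love", "nice"}
SAD_WORDS = {"sorry", "error", "failed"}

def pick_expression(reply: str) -> str:
    """Choose a face-animation command (names invented) from an AI reply.

    A real mod would forward the chosen command to the robot's
    display loop; here we just return the command string.
    """
    words = set(re.findall(r"[a-z']+", reply.lower()))
    if words & HAPPY_WORDS:
        return "EXPR_SMILE"
    if words & SAD_WORDS:
        return "EXPR_FROWN"
    return "EXPR_NEUTRAL"

print(pick_expression("Thanks, that worked!"))   # EXPR_SMILE
print(pick_expression("Sorry, the request failed"))  # EXPR_FROWN
```

The point is not the ten lines themselves but that an open firmware lets any owner wire this kind of logic, in any language, into the robot's behavior.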
StackChan also has a different history. It began not as a company's product plan but as a personal open-source project by Japanese developer Shinya Ishikawa.
The community has been involved for several years: some people created DIY kits, some added AI capabilities, some designed new shells. M5Stack finally productized it while retaining its open-source, co-creation genes. On Kickstarter, 4,142 backers raised about HK$3.6 million against an initial goal of only HK$78,000, exceeding it roughly 45 times over.
This figure shows that people don't just want to buy an AI desktop robot; they want to buy an AI desktop robot that they can modify themselves.
StackChan's camera resolution is only 0.3MP, but this is by design. It's not that higher resolution is out of reach; it's that it isn't necessary. Low resolution means the local ML model can process video at a usable frame rate.
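The arithmetic behind that trade-off is easy to check: a 0.3MP sensor at 640×480 yields the headline's roughly 300,000 pixels, a third of 720p and a twenty-seventh of 4K. The per-second pixel budget below is an illustrative assumption, not an ESP32-S3 benchmark.

```python
def pixels(w: int, h: int) -> int:
    """Total pixel count of a frame."""
    return w * h

vga = pixels(640, 480)      # ~0.3 MP, as on StackChan's camera
hd = pixels(1280, 720)      # 720p
uhd = pixels(3840, 2160)    # 4K

print(vga)            # 307200 -> the "300,000 pixels" of the headline
print(hd // vga)      # 3  -> 720p carries 3x the pixels per frame
print(uhd // vga)     # 27 -> 4K carries 27x the pixels per frame

# Assumed budget: if the chip can touch 5 million pixels per second
# (an illustrative figure, not a measured spec), the frame rate is:
BUDGET = 5_000_000
print(BUDGET // vga)  # 16 frames/s at VGA
print(BUDGET // hd)   # 5 frames/s at 720p
```

Under any fixed compute budget the same ratio holds: shrinking the frame buys frame rate, which for an expressive desktop robot matters more than detail.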
In addition, its three Grove interfaces and LEGO-compatible holes mean you can connect sensors and peripherals and build modular structures. Maker communities in Japan and worldwide have been modifying it in every direction: some make it track faces and turn its head, some use it as a smart-home control hub, some fit it with their own 3D-printed shells. A Reddit user said that what impressed him was not StackChan itself but the "co-creation model."
Shells 3D-printed by community members
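One of the community mods above, turning the head to follow a face, boils down to a small control loop. Here is a minimal proportional-control sketch; the frame width, servo range, and gain are illustrative assumptions, not values from StackChan's firmware.

```python
def pan_angle(face_x: float, frame_w: int = 640,
              center_angle: float = 90.0, gain: float = 0.1) -> float:
    """Map a detected face's x position (pixels) to a servo pan angle.

    Proportional control: the further the face sits from the center of
    the frame, the larger the correction. The result is clamped to a
    typical 0-180 degree hobby-servo range.
    """
    error = face_x - frame_w / 2           # pixels off-center
    angle = center_angle + gain * error    # simple P-controller step
    return max(0.0, min(180.0, angle))

print(pan_angle(320))  # 90.0  -> face centered, head stays put
print(pan_angle(640))  # 122.0 -> face at right edge, head turns right
```

On the device this would run against the 640×480 camera feed, with the returned angle fed to the head servo each frame; the low resolution discussed above is exactly what makes that loop feasible on an ESP32-class chip.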
Setting StackChan beside Razer AVA's holographic projection and Lepro Ami's 8-inch curved screen produces an interesting contrast: AVA and Ami sell technology and immersion, while StackChan sells possibility. Buying AVA gets you a well-designed AI partner; buying StackChan gets you a canvas with infinite possibilities.
A growing category
Google said something memorable in an interview with The Verge: "The future of AI hardware isn't one