An accessory that enables iPhones to use AI has attracted 13 million views, but I think it's entirely unnecessary.
If you were to compile a list of today's most bewildering tech products, AI hardware would definitely make the cut.
From the $699 Humane Ai Pin to the $200 Rabbit R1, these AI startups are all peddling the same beautiful lie: you need specialized hardware to experience true AI.
Today, there's a new addition to the list: the AI Key.
This is an external hardware device positioned as an "AI assistant." The manufacturer claims it's plug-and-play: once connected to the iPhone's USB port, it lets you operate apps and functions on your phone just by speaking. It covers almost all common tasks, from messaging and maps to taking photos and using social apps.
Like a phone-based agent, it isn't complicated to use.

Just state your request, confirm the app it will use, and the device automatically simulates the tapping, swiping, and text input. Users can either watch it execute or leave it to run on its own, stopping or modifying the process at any time.
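The control loop described above (request, confirm the target app, then simulate UI actions with the option to abort) can be sketched roughly as follows. Every name here is hypothetical; the AI Key's actual firmware and protocol are not public, so this is only an illustration of the interaction pattern, with the planner stubbed out where a real device would call a language model:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # "tap", "swipe", or "type"
    target: str    # UI element to hit, or text payload to enter

# Hypothetical planner: on the real device this step would be an LLM
# call that maps a spoken request to an app plus a list of UI actions.
def plan(request: str) -> tuple[str, list[Action]]:
    if "message" in request.lower():
        return "Messages", [
            Action("tap", "compose"),
            Action("type", "On my way!"),
            Action("tap", "send"),
        ]
    raise ValueError("request not understood")

def run_agent(request: str, confirm) -> list[str]:
    """Plan, ask the user to confirm the target app, then 'execute'
    each simulated UI action, returning a log of what was done."""
    app, actions = plan(request)
    if not confirm(app):   # the user can abort before anything runs
        return []
    return [f"{a.kind}:{a.target}@{app}" for a in actions]

# Auto-confirm for the sake of the example.
log = run_agent("Send a message to Alex", confirm=lambda app: True)
```

The key design point the sketch captures is that confirmation happens before any simulated input fires, which is also how the AI Key is described: the user approves the app choice, then watches or ignores the execution.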
It's worth noting that although Apple provides system-level automation tools like Shortcuts, third-party apps can't deeply invoke or control other apps' behavior. This explains why external hardware products have appeared on the market to fill the gap.
With a compact design, it comes in three color options: Midnight Black, Eggnog White, and Davy's Pink. It's priced at $89, and the manufacturer promises global delivery before Christmas. Founder Adam Cohen Hillel said on X that the first batch of units nearly sold out in just 7 hours.
The popularity of the AI Key isn't surprising, but it raises questions. Do you really need to plug in an extra peripheral to turn your phone into an "AI phone"? And a deeper question follows: do we really need to build specialized hardware for the sake of AI?
The Cut-Throat Competition in AI Hardware, but the Smartphone Remains King
Although Apple's Senior Vice President Eddy Cue has repeatedly suggested that the iPhone might be phased out within the next decade, it's undeniable that the best AI hardware form factor today is still the smartphone.
The iPhone 16 Pro in your pocket is equipped with a chip built on a second-generation 3-nanometer process and can access the most powerful AI models in human history: ChatGPT, Claude, Gemini, and DeepSeek. Even without the support of cloud-based large models, phone manufacturers are investing heavily in on-device models.
Even Apple, which has been criticized for falling behind in the AI race, recently released FastVLM and MobileCLIP2 on Hugging Face.
These models are 85 times faster than previous versions and 3.4 times smaller, making real-time visual language model (VLM) applications possible. They can even run entirely locally in the browser, generating live video captions.
Not to mention the recently released Google Pixel 10 series, which is practically a free phone bundled with AI. It runs the Gemini Nano model locally and ships with features like Camera Coach and Auto Best Take, which analyze the scene, lighting, and subjects' movements in real time, automatically optimize photos, and even offer shooting suggestions.
The reason is simple: the hardware quality speaks for itself.
If we look back at the first wave of emerging AI hardware, the Humane Ai Pin and Rabbit R1 both arrived as would-be disruptors, trying to reshape the future of personal computing around the concept of screenless or minimal-screen devices.
The former, the Ai Pin, was created by former Apple executives, with the mission of "making technology your servant, not your master." The latter, the R1, with its bold orange design and its "Large Action Model" concept, promised to complete complex in-app tasks for users through AI.
However, after the official delivery of the Humane Ai Pin, it was found to have serious overheating and battery life issues and was eventually sold to HP.
The Rabbit R1 also had a high-profile start but a disappointing end. In the early days of delivery, the R1's USB-C port was very picky, working reliably with only certain charging cables. Moreover, its battery life was extremely short, with a capacity of only 1,000 mAh. This reflects not only quality problems but also the startup's inexperience in supply chain management and quality control.
A comment on the Reddit forum even mentioned that the R1 team "accidentally ordered the wrong memory components." This small incident vividly reveals the chaos and vulnerability that hardware startups face in supply chain management.
When startups have to compromise on cost control and supply-chain procurement, a vicious cycle easily sets in: low-end hardware can't deliver a smooth experience, negative reviews damage the product's reputation, and sales suffer. That makes it hard to lower costs through large-scale production, ultimately leading to financial trouble or even bankruptcy.
This isn't to say that products like the R1 and Ai Pin have no value. Rather, their significance may lie more in exploring AI interaction models.
The concepts they advocate, such as active agents, ambient voice commands, and unified task interfaces, are powerful. However, these concepts won't thrive in a standalone device; they'll be absorbed by dominant platforms and become native capabilities of systems like iOS and Android.
The scroll wheel on the Rabbit R1, the projection on the Humane Ai Pin, the "key" form of the AI Key: these seemingly strange design choices are actually testing different interaction hypotheses, eliminating wrong answers for the industry.
I'm not against innovation, nor do I think all AI hardware startups are meaningless. But we need to face a reality honestly: in today's era when phones are already so powerful, any product that tries to "enhance" a phone's AI capabilities through external accessories faces huge challenges in user experience.
The real opportunities may lie elsewhere: either do something a phone can never do, or wait for a brand-new computing platform to emerge, rather than plugging a dongle into an iPhone and telling users: look, this is the future.
Function or Attribute? Our Fundamental Misunderstanding of AI
Hidden behind the AI hardware debate is a question of technology philosophy: do you regard AI as a "function" or an "attribute"?
A function is discrete and divisible, requiring a specialized carrier. An attribute is permeable and ubiquitous, changing the way the entire system operates.
When AI is seen as a function, the idea is to single out AI and package it as a selling point.
For example: chatbots, translators, the LAM on the Rabbit R1, or the projection on the Humane Ai Pin. The logic is: first there's AI technology, then find a piece of hardware to put it in. When users want AI, they have to deliberately open it and interact with it.
The problem is that most of these "functions" already exist on phones, often in better form. So these new hardware products end up as "middlemen" with no real competitive advantage.
Another approach is to integrate AI into the existing ecosystem and make it an inherent "attribute" of the system.
Apple's Apple Intelligence is an example: priority notifications, email summaries, photo cleanup, and enhanced Siri are all AI-powered improvements within the existing experience. Google's decision to deploy Gemini Nano to local devices follows a similar logic.
Users may not even notice the presence of AI, but their efficiency and experience are comprehensively improved.
When cars were first invented, they were naturally called "horseless carriages." People's imagination was limited to replacing horses, and their focus was on whether it could run as fast as a horse and whether it would startle the cattle by the roadside.
No one could foresee that this "iron monster" would give rise to highway networks, modern logistics, suburban culture, and completely change the shape of cities and people's living radii. Today, our imagination of AI hardware may also be trapped in a narrow framework.
So, when the "new species" of artificial intelligence appears, our first reaction is almost a conditioned reflex: it too needs a "specialized device." An "AI box," an "AI terminal," or at the very least, an "AI PC."
This idea is like a primitive person seeing fire for the first time. Instead of thinking about how to use the energy of fire for cooking, heating, or smelting, they're thinking about making a "fire stick" to hold the flame.
As early as 1998, Eli Zelkha and his team proposed the concept of "Ambient Intelligence," referring to an intelligent environmental system that can "perceive human presence and respond." These environments achieve seamless interaction with users through embedded devices (such as sensors, actuators, and AI modules), trying to integrate technology into life rather than being a burden.
Companies that truly understand AI won't try to deliberately create "AI devices" but will make all devices AI - enabled.
A mature technology doesn't need to be constantly mentioned by name. Just as we usually don't deliberately say "electric lamps" or "internet computers" today because electricity and network connections are already the underlying capabilities of these devices and are taken for granted.
Similarly, when AI is truly popularized, it will transform from a repeatedly emphasized "selling point" to the infrastructure of all smart devices.
By then, your car, refrigerator, glasses, and even clothes will have different forms of intelligence. They'll be connected to each other, operate collaboratively, and jointly form your personal "Ambient Intelligence" system.
And the day when we stop talking about AI hardware will be the beginning of AI being everywhere.
This article is from the WeChat official account "APPSO", author: Discovering Tomorrow's Products. Republished by 36Kr with permission.