OpenAI's first hardware product is reportedly named "Dime". The bad news: the cost is too high, and only a "crippled version" will launch in September.
OpenAI's hardware is really coming, but it may arrive as a "stripped-down version".
According to the latest leak from Smart Pikachu, OpenAI's first consumer-facing AI earphone will be named "Dime" (as in the 10-cent coin), presumably a nod to how small and delicate it is.
Meanwhile, the patent application for the related hardware was officially published by China's National Intellectual Property Administration (CNIPA) yesterday, which means we will soon get to see what the device looks like.
However, OpenAI has had to compromise on its product strategy. Constrained by the HBM shortage, 2nm chips are simply too expensive, and the original plan for a full-featured, "phone-like" form factor with its own compute unit has been postponed.
The current plan is to launch a simplified, audio-only earphone in 2026, with a better-configured advanced version to follow once component costs come down.
"Dime" is the previously exposed OpenAI audio device project, codenamed "Sweetpea". This is by no means an ordinary earphone. According to previous supply - chain information, the hardware design of this device has been described as "unique and unprecedented":
- It aims to replace AirPods. Instead of using bone conduction, it uses materials closer to those of mobile phones.
- The main processor targets a 2nm smartphone - level chip (the Exynos solution is the most popular).
- The main body is made of metal and shaped like a pebble. There are two removable capsule - shaped earpieces inside. It has a unique wearing style, being placed behind the ear instead of the traditional in - ear style.
- A custom chip is being developed with the goal of enabling the device to directly execute Siri commands on the iPhone through voice, breaking through the ecological barriers.
Internally, the device is reportedly the Jony Ive team's top priority, and OpenAI has high hopes for it. The plan calls for a release around September, with a first-year shipment target as high as 40 to 50 million units.
Moreover, Foxconn has been told to prepare production capacity for five OpenAI devices by the fourth quarter of 2028.
Why does OpenAI, a company focused on large models and AI software, insist on building such costly hardware?
OpenAI CEO Sam Altman once put it bluntly at a lunch in New York: stop fixating on Google; OpenAI's real rival is Apple. In his view, the future battlefield of AI is not the cloud but the device.
Altman believes today's smartphones can't truly deliver an AI companion experience: the screens are too small, the interaction methods too limited, and the privacy protections too rigid. Whoever builds the "AI-native device" first will hold the upper hand in the next decade.
Altman described his hardware vision this way: "Smartphones are like Times Square, bombarding you with information and shattering your attention. What OpenAI wants to build is a 'lakeside cottage': a place where you can shut the door and block out the noise when you need to focus."
Under this vision, besides the highly anticipated "Sweetpea" earphone, a mysterious smart pen is also under serious consideration.
While the tech giants rush to build AI into glasses and watches, OpenAI, whose ChatGPT market share has been eroded over the past year, is opening a new battle over entry points.
Did OpenAI spend $6.5 billion just to make a pen?
Although the supply-chain leaks are clear enough, the idea of an "AI pen" still sounds hard to believe, until you connect the clues Altman and Ive have dropped in the past. Do that, and this seemingly odd conjecture turns out to have some basis.
Last May, OpenAI spent $6.5 billion to acquire io, the hardware company founded by Jony Ive, and was later forced to drop the brand name over a trademark dispute (a lawsuit from the audio company iyO).
Evans Hankey, io's co-founder and chief product officer, stated plainly in a court filing: "io currently has no plan to launch a custom earphone." Another co-founder, Tang Tan, drew an even clearer line: the io product prototype is neither an in-ear device nor a wearable.
Image source: Tang Tan's court filing.
Taken together, these two statements essentially rule out the possibility that OpenAI's first AI hardware will enter the mature markets of glasses, watches, or earphones.
Altman himself has dropped plenty of hints about the device: it is small enough to fit in a pocket or sit on a desk, and it can sense its surroundings.
Most importantly, it is not meant to replace your phone or computer, but to cover the moments when pulling out a phone is inconvenient or when you need deep focus.
He has returned to the same analogy before: smartphones are Times Square, while OpenAI wants to build the "lakeside cottage" you can retreat into when you need to focus.
From this perspective, a pen is indeed a smart choice. Compared with an always-online AI pendant like Friend, a pen carries a lower cognitive barrier: it doesn't look out of place on a desk, and it intrudes on privacy far less than a wearable.
Friend AI pendant
On design, Ive has said he prefers products that are extremely complex and intelligent on the inside yet so appealing on the outside that people want to pick them up and use them casually. He has even joked that the ultimate test of a successful design is whether it "makes people want to lick and bite it."
Altman later confirmed as much: the prototype's appearance really did make him want to "lick it." He also described its look precisely: minimalist, elegant, with a touch of playfulness and humor.
Ive has also revealed that the AI hardware leans toward premium materials such as ceramics, with the core pursuit being an "almost childlike simplicity." One can infer that its interaction will be radically simplified, probably retaining only a few physical buttons.
Beyond the close fit in product concept and form, Jony Ive's and Sam Altman's personal fondness for pens lends the conjecture extra credibility.
Yes, Jony Ive is a veteran pen collector. His collection includes vintage Montegrappa fountain pens and Hermès fountain pens designed by Marc Newson.
Early in his career, he earned his first big payday with the sporty TX2 ballpoint pen, and he was later deeply involved in designing the Apple Pencil, accumulating plenty of experience with pen-shaped products.
TX2 designed by Jony Ive
Sam Altman is cut from the same cloth, if not more so. On the "How I Write" podcast in September 2024, Altman revealed that he is a "super note-taker" who goes through a notebook every two or three weeks on average. He also specifically recommended two pens: the Uni-Ball Micro 0.5 and the Muji 0.36/0.37 models, which he says work best with dark blue ink.
In his own words, "This kind of notebook paired with one of these pens is the ideal writing combination."
As early as April 2018, he wrote in his blog about the benefits of using pen and paper to record ideas: "I prefer lists written on paper. It's easy to add or delete tasks. I can also check them at any time during meetings without being impolite."
It seems quite reasonable for two people with a pen obsession to come together and create an AI pen.
The audio model is advancing rapidly. OpenAI is preparing a major move in AI hardware
Two pen lovers making a pen are obviously not going to churn out a batch of ordinary pens.
According to a report from The Information, OpenAI is accelerating iteration on its audio AI models, with the core goal of laying a solid technical foundation for this personal AI device.
People familiar with the matter revealed that voice interaction will be the core scenario for the device.
Over the past two months, OpenAI has brought together multiple teams, including engineering, product, and research, to optimize the audio model. The new-generation model architecture is showing early results: it generates more natural, human-like voice responses, and the accuracy and depth of its content have improved significantly.
More importantly, the model will support real-time, two-way conversations with users and handle interruptions smoothly. It is expected to be officially released in the first quarter of this year.
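The report doesn't describe how this works under the hood, but the interruption handling it mentions (often called "barge-in") follows a well-understood pattern: the assistant streams its spoken reply while continuing to listen, and cuts playback the instant the user starts talking again. Below is a minimal, self-contained Python sketch of that control flow only; speech recognition and text-to-speech are simulated with typed input and printed words, and none of the names here refer to any real OpenAI API.

```python
import asyncio

async def listen(utterances: asyncio.Queue) -> None:
    """Stand-in for speech recognition: treat each typed line as a user utterance."""
    while True:
        text = await asyncio.to_thread(input, "\nyou> ")
        await utterances.put(text)

async def speak(reply: str, stop: asyncio.Event) -> None:
    """Stand-in for streaming text-to-speech: print word by word, abort if `stop` is set."""
    for word in reply.split():
        if stop.is_set():
            print(" [interrupted]")
            return
        print(word, end=" ", flush=True)
        await asyncio.sleep(0.3)  # simulate audio playback time
    print()

async def conversation_loop() -> None:
    utterances: asyncio.Queue = asyncio.Queue()
    listener = asyncio.create_task(listen(utterances))  # keep a reference so the task lives on
    pending = await utterances.get()
    while True:
        stop = asyncio.Event()
        reply = asyncio.create_task(speak(f"Here is a long answer about '{pending}' ...", stop))
        barge_in = asyncio.create_task(utterances.get())
        done, _ = await asyncio.wait({reply, barge_in}, return_when=asyncio.FIRST_COMPLETED)
        if barge_in in done:
            stop.set()                   # the user spoke over the reply: stop talking now
            await reply
            pending = barge_in.result()  # answer the interrupting utterance next
        else:
            barge_in.cancel()            # reply finished normally; wait for the next utterance
            pending = await utterances.get()

if __name__ == "__main__":
    asyncio.run(conversation_loop())
```

The point of the sketch is the scheduling, not the models: whichever finishes first, the reply or a new utterance, decides whether playback is cut short, which is the behavior the report attributes to the new audio model.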
According to the report, a plan shown internally last summer positions OpenAI's first AI hardware as a "smart partner": not just a software interface, but a device that actively collaborates with users, offers suggestions, and helps them achieve their goals.
With user authorization, it can also capture audio and video to perceive the user and the surrounding environment, further improving the accuracy of interactions. OpenAI has already assembled a cross-disciplinary team spanning supply chain, industrial design, and model research, a clear sign of its ambitions in the hardware market.
The core team for the audio AI project has also been set: project lead Kundan Kumar joined OpenAI from Character.AI last summer; Ben Newhouse, director of product research, is leading the adaptation of the text-based technology stack to audio; and Jackie Shannon, product manager for multimodal ChatGPT, is responsible for polishing the interaction experience. Each has a distinct role, and together they form the backbone of the project.
There is, however, a core obstacle in OpenAI's way: most ChatGPT users haven't developed the habit of voice interaction, both because the current voice-model experience is poor and because awareness of these features is low. Hence the report's blunt conclusion that OpenAI's top priority should be teaching users to "talk to AI with their voices."
Once this device launches with environmental sensing and always-on listening, it is bound to shake up the current AI hardware landscape: AI recording hardware may be facing its strongest competitor yet.
Most existing AI recording hardware offers little beyond voice-to-text transcription and summarization. If OpenAI's device launches, recording and summarization will be just one of its many native skills rather than its entire purpose.
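For reference, the transcribe-then-summarize pipeline that defines most of these gadgets can already be reproduced in a few lines with OpenAI's public Python SDK. The sketch below is purely illustrative: the file name is a placeholder, and the model choices (whisper-1 for transcription, gpt-4o-mini for summarization) are just commonly available options, not anything tied to the rumored device.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Step 1: speech-to-text. "meeting.wav" is a placeholder recording.
with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: summarize the raw transcript and pull out action items.
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarize this meeting and list the action items."},
        {"role": "user", "content": transcript.text},
    ],
)

print(summary.choices[0].message.content)
```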
Just as smartphones made MP3 players obsolete, once a multi-scenario device absorbs your product's functions, the space left for vertical, single-function products gets squeezed to nothing.
Meanwhile, following the common "hardware + subscription" playbook, OpenAI will probably bundle the software services directly into the ChatGPT subscription. With its large user base and extremely low marginal cost, it could capture the market quickly.
It's worth mentioning that, based on OpenAI's technology stack and the pen form factor, Max Child, founder of the San Francisco startup Volley, offered an imaginative conjecture last year:
The top of the AI pen might house a micro-projector that beams images onto the desk, addressing the core pain point of screenless interaction, while the pen clip might hold a microphone or even a camera, able not only to parse text but also to perceive the wider environment.
That would mean that whenever a user writes on any piece of paper, the AI could not only digitize the handwriting but also interpret it in real time: write a math formula and it gives you the answer; write meeting minutes and it automatically generates to-do lists and syncs them to your phone.
It might even become an intelligence hub: controlling nearby digital interfaces, or serving as an advanced input device for tablets, injecting ChatGPT's capabilities directly into whatever flows from the pen tip.
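Child's conjecture is speculative, but the interpretation step he describes is easy to prototype with today's multimodal models. Assuming the pen's camera could photograph the page, a rough software sketch might look like the following, which sends a snapshot of handwriting to a vision-capable model through OpenAI's public chat completions API; the file name, prompt, and model choice are hypothetical placeholders and say nothing about how the actual device would work.

```python
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical snapshot of a handwritten page (e.g. taken by the pen clip's camera).
with open("notebook_page.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": (
                    "Read this handwritten page. If it contains a math formula, "
                    "solve it; if it contains meeting notes, return a to-do list."
                ),
            },
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
            },
        ],
    }],
)

print(response.choices[0].message.content)
```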