After a 12-year hiatus, Amazon re-enters the mobile phone battlefield. Is it going to "reinvent" the mobile phone with AI?
According to Reuters, after a 12-year hiatus, Amazon is developing a brand-new smartphone under the internal code name "Transformer". Yes, Amazon's code name points directly to the Transformer architecture at the core of today's large AI models. Given Amazon's current vision, you could even name it in advance:
Alexa Phone.
Like the Gemini phone and the Doubao phone, Amazon's new smartphone puts AI at its core. According to insiders, the device will be built around the newly upgraded Alexa, Amazon's AI assistant.
Image source: Amazon
But this is not Amazon's first attempt at a smartphone. In 2014, Amazon launched the Fire Phone, which focused on shopping, content distribution, and Amazon's own services. However, it lagged clearly behind the iPhone and Android in app ecosystem, system experience, and user habits. The result was straightforward: the product quickly exited the market and became a frequently cited failure case.
Now, Amazon is back. More importantly, this change is not just Amazon's "obsession".
In the past two years, attempts at the "next-generation personal device" have multiplied notably, and the players come from completely different backgrounds: the early experiment of the Rabbit R1, the input/output rethinking of the iKKO Mind One (a square phone) and the Clicks Communicator (a keyboard phone), the "agentification" brought by the Doubao phone assistant, and the next-generation personal intelligent device experience offered by the Leqi AI glasses.
Image source: Doubao
These changes may seem different, but they all occurred after the wave of large AI models. It's not hard to understand. When users can complete operations that originally required switching between multiple apps with a single sentence, and when devices start to have the ability to understand intentions and execute tasks actively, the original usage path starting with "opening an app" is no longer the only option.
The entry point shifts from apps to intentions, and the interaction shifts from clicks to conversations. The change may seem subtle, but it is enough to shake the usage habits smartphones have established over the past decade.
The ultimate form of an AI phone won't be defined by the iPhone
Looking back at the failure of the Fire Phone today, it's not just a product problem but also a matter of the times.
In that era when apps and the touchscreen were the absolute center, every user operation followed the path of "open an app, enter the interface, complete a tap". Amazon tried to bypass the app ecosystem and embed e-commerce and services directly into the system. However, it could neither build a new app ecosystem of its own nor route around the existing path, so it became an "outsider" cut off from the mainstream.
The situation is different today. When we look at the new players in the past two years together, we'll find that they approach the problem in different ways, but almost all of them revolve around the same change:
After AI can understand intentions and perform tasks on behalf of users, does the phone still need to maintain its original form and interaction logic?
Some players start from the hardware form. iKKO and Clicks, for example, didn't try to rebuild the system outright; instead, they took the phone's form factor apart and reassembled it.
Left: iKKO Mind One, Right: Clicks Communicator. Image source: iKKO, Clicks
The former created a device similar to a "square phone", weakening the traditional touchscreen experience and making AI the core entry point. The latter did the opposite, strengthening the input ability with a physical keyboard. They seem to be going in completely opposite directions, but in essence, they both assume that the current hardware form of the phone is not the optimal solution for the AI era.
Some players start directly at the operating-system level. The Doubao phone assistant launched by ByteDance, the Gemini automation features Google debuted on the Samsung Galaxy S26, and the new device Amazon is developing are all essentially doing the same thing: making AI the primary entry point of the system rather than a passively invoked function.
Image source: Samsung
The radical aspect of this approach is that it's not satisfied with optimizing apps but tries to bypass them, enabling users' needs to be directly converted into execution results.
There is another category that goes even further and doesn't even take the "phone" as a premise. The Rabbit R1 is the most typical representative. It tries to use an independent device to handle users' daily operations, using AI to call various services and replace the app logic in the phone.
However, the large AI models of that era clearly couldn't support this path: execution success rates, response speed, and cost were simply not good enough to deliver the experience.
In contrast, another path looks more restrained and reliable. AI glasses, represented by the Leqi AI glasses, don't try to replace the phone right away. Instead, they turn AI into an "always-available" entry point, using voice and vision to retrieve information and perform simple tasks. What they reduce is not functionality but how often users pick up the phone, gradually working toward the "next-generation personal intelligent device" after the smartphone.
Experiencing Leqi AI glasses at AWE 2026. Image source: Lei Technology
But no matter which approach is chosen, the underlying premise is the same: when AI can directly handle the "needs" themselves, the past usage method centered around apps is no longer the only answer.
From App to Agent: Rewriting the Entry and Interaction in the AI Era
In the past decade, the success of smartphones has largely relied on a stable logic: apps are the entry points, interfaces are the paths, and users complete operations through a series of clicks and swipes on the GUI (Graphical User Interface). Whether it's the iPhone or Android, they are essentially continuously optimizing this system to make it smoother and more efficient.
But after the emergence of AI, this premise began to waver, and the most direct change is in the "entry point".
Recall that on traditional smartphones, users need to first determine "which app this task belongs to" (order takeout on Meituan, take a taxi on Didi), and then enter the corresponding interface to complete the operation. This process itself is a cost and the foundation of the app system.
Image source: Lei Technology
Take the "one-sentence ride-hailing" feature newly launched in the Qianwen app: when AI can accurately understand a demand and execute it directly, this intermediate step starts to be compressed or even bypassed. Users no longer need to know where the entry point is; they just need to express their intent.
This is why, whether Amazon is trying to rebuild the phone around Alexa or ByteDance is promoting AI to the system layer through Doubao, they are essentially shifting the "entry point" from apps to AI itself.
In approach, products like the Rabbit R1 are more radical. They externalize this logic into a standalone device, letting AI call services and operate software, consolidating capabilities once scattered across apps into a single entry point. Its problem was that the underlying capabilities weren't up to par and couldn't sustain the actual experience. With today's large AI models and agent technology, the approach stands a better chance.
In contrast, AI glasses are promoting the same change in another dimension. They don't try to reconstruct the entire system but instead make "asking questions - getting results" a more natural interaction method through voice and vision. In this process, users don't even need to clearly realize that "I'm using a certain app".
The change in the entry point also brings about a change in interaction.
In the app system, the GUI is the absolute center: every operation is presented through an interface and completed by tapping. This is why screen size, refresh rate, and touch responsiveness have been continuously improved over the past decade. But in the logic of AI agents, interaction starts to shift from "seeing the interface" to "expressing needs". Conversation, voice, and even visual perception are gradually taking over parts of the traditional UI's role.
Image source: Google
This is why some new devices can de-emphasize the screen or even stop treating it as the core. It's not that the screen is unimportant, but that completing a task no longer depends entirely on an interface. "Peculiar" phones like the iKKO Mind One Pro and the Clicks Communicator, for example, have weakened the screen's role to different degrees: the former opts for greater portability, while the latter strengthens prompt input with a physical keyboard.
But however the paths diverge, the underlying change is clear: when the entry point shifts from "apps" to "intentions" and the interaction shifts from "interface" to "conversation", the system smartphones built around apps and the GUI over the past decade is being gradually dismantled.
Conclusion
Is it AI that has given Amazon the confidence to make a phone again?
It seems so on the surface. In a market dominated by the iPhone and Android for more than a decade, making a phone again is almost a hopeless task. Yet in the past two years, players with completely different backgrounds, such as Amazon, ByteDance, Rabbit, Rokid, iKKO, and Clicks, have all re-entered this field in their own ways.
That is not, in itself, a "rational" thing to do.
But the more fundamental change is that AI has altered the path by which users complete a task. When a single spoken sentence can fulfill all kinds of needs, the step of "opening an app to operate" is no longer necessary.
This is the real reason these companies dare to enter the "super-competitive" smartphone market. They may appear to be building completely different products, but they are all essentially answering one question: how will future human-machine interaction work when it runs through AI rather than apps and the GUI?
From this perspective, Amazon's return to phone-making is not an isolated event but part of this wave of change. What it wants is not to recreate the Fire Phone but to place AI at the entry point.
As for whether it will succeed, it's too early to say. But one thing is certain: when the entry point starts to shift from apps to intentions and interaction gradually moves away from the GUI, the old operating model of the smartphone is no longer as stable as it once was.
This article is from the WeChat official account "Lei Technology". Author: Lei Technology. Republished by 36Kr with permission.