Google unveiled four new products in one go. Everyone underestimated this press conference: its AI ambitions are showing.
Early on the morning of December 9th, a Google currently riding high held an Android Show: XR. For the first time, it clearly laid out the device roadmap for Android XR and officially showed the prototype of the AI glasses it is developing with Samsung.
However, if one only focuses on these aspects, this Android Show: XR is likely to be underestimated.
There was no mention of price, no piling up of specs, and no attempt to make a "disruptive device" the centerpiece of the event. The whole show resembled a restrained, almost calm roadmap walkthrough. Google spent most of the time emphasizing how Gemini is integrated into Android XR at the system level, talking about development frameworks, APIs, and which device forms Android XR will cover, while avoiding putting the spotlight on any single piece of hardware.
In the official narrative, Android XR is no longer just "an Android version supporting headsets." Instead, it is a brand-new computing platform centered around Gemini.
Screenless Google AI glasses. Footage sped up. Image source: Google
A Low-Key Launch Event, a High-Profile New XR Blueprint
Google emphasized that Gemini will exist as the default intelligence layer in Android XR, spanning vision, voice, environmental perception, and interaction understanding. Android XR's role is to serve as a stable, scalable, cross-device experience carrier for this multimodal AI.
Centered around this, Google also laid out four product roadmaps at once:
XR headsets, AI glasses, wired XR glasses, and future true wireless XR glasses.
Different forms share not only the same set of system capabilities but also the same development stack. Jetpack, Compose, ARCore, and Play Services are still the familiar tools for developers, just re-projected onto spatial computing and wearable devices. This is also the most pragmatic and most easily overlooked aspect of Android XR:
It doesn't ask developers to "convert to XR development." Instead, it hopes XR can become a natural extension of the Android ecosystem.
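To see what that means in practice, here is a minimal sketch, assuming the Jetpack Compose for XR developer-preview APIs in androidx.xr.compose (Subspace, SpatialPanel, SubspaceModifier; exact package paths and modifier names may differ between preview versions). An ordinary Compose screen is reused unchanged and simply hosted on a floating spatial panel.

```kotlin
// Minimal sketch of "familiar tools, new surface". Assumes the Jetpack
// Compose for XR developer-preview artifacts (androidx.xr.compose);
// package paths and modifier names may differ between versions.
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.xr.compose.spatial.Subspace
import androidx.xr.compose.subspace.SpatialPanel
import androidx.xr.compose.subspace.layout.SubspaceModifier
import androidx.xr.compose.subspace.layout.height
import androidx.xr.compose.subspace.layout.width

// The app's existing 2D screen: nothing XR-specific in it.
@Composable
fun NotesScreen() {
    Text("Today's notes")
}

// The only XR-specific code: put that same screen on a floating panel
// in the user's space when running on an Android XR device.
@Composable
fun SpatialNotes() {
    Subspace {
        SpatialPanel(
            modifier = SubspaceModifier.width(1024.dp).height(640.dp)
        ) {
            NotesScreen()
        }
    }
}
```

Everything except the panel wrapper is the same code a phone app ships today, which is precisely the pitch Google is making to developers.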
At the strategic level, Google also demonstrated a familiar "old-school restraint." Android XR didn't attempt to copy Apple's top-down, hardware-centric approach. Instead, it continued a cooperation model similar to the Nexus era:
The system is defined by Google, and hardware exploration is entrusted to different manufacturers. Product forms can evolve in parallel.
The swan song of the Nexus phone line (built in cooperation with Huawei). Image source: Google
Whether it's the Galaxy XR (Project Moohan) in cooperation with Samsung, the AI glasses planned for launch next year, or the wired XR glasses (Project Aura) in cooperation with XREAL, they are just "carriers" of Android XR, not its boundaries.
Under this "conservatism," Google's "ambition" becomes even clearer.
Four Devices, Four Roadmaps, with AI + OS as the Common Core
At the Android Show: XR, the first thing Google did was not to showcase a "device representing the future." Instead, it presented all the hardware forms that Android XR can support at once.
XR headsets, AI glasses, wired XR glasses, and the yet-to-appear wireless XR glasses. These four roadmaps form the complete device spectrum of Android XR.
What's important is not whether they mature simultaneously, but that Google clearly made a judgment for the first time: Android XR doesn't serve a single type of device. Instead, it aims to cover all head-mounted computing devices.
Among the products already launched, the XR headset is currently the most complete form. Represented by the Samsung Galaxy XR (Project Moohan), it mainly plays the role of a "platform anchor": with full spatial display, multi-window capabilities, and a mature tracking and interaction system, it gives Android XR a relatively complete reference.
Image source: Google
This roadmap doesn't aim to explore the ultimate XR form. It's more like Google's basic plan to ensure that Android XR has a foothold in the XR field. What Google really values is the AI glasses.
Google is jointly developing two types of AI glasses with Samsung, Gentle Monster (a well-known Korean fashion eyewear brand), and Warby Parker (a well-known American online eyewear brand). One type has a display, and the other doesn't. What they share is an effort to minimize the "system presence":
There is no complex interface, no immersive display, and almost no mention of compute specs. The emphasis instead is on being worn on the body, staying unobtrusive, and understanding instantly.
Image source: Google
The third roadmap is the wired XR glasses Project Aura in cooperation with XREAL, which is expected to be officially launched next year.
This is the split-design viewing AR glasses form we are already fairly familiar with: the glasses themselves handle display and perception, while compute and battery live in an external unit. At the Android Show it was defined outright as "wired XR glasses." For Android XR, this roadmap proves that the system, interaction, and application model can also be established in non-headset forms.
Image source: Google
As for the wireless XR glasses, there is currently no further product or project information. This roadmap is explicitly placed in the future, rather than being teased with concept devices or demo videos that pre-consume people's imagination. To some extent, this "blank space" is itself a statement: until thermals, power consumption, weight, and cost can all be satisfied at once, Google is in no hurry to let Android XR be tied to any particular generation of hardware.
What really connects these four roadmaps is not the display solution or the ecosystem, but Android XR centered around Gemini.
AI Defines XR and is "Consuming" Devices and the OS
In actual hands-on experiences, whether with the Galaxy XR, Project Aura, or the AI glasses prototype, most outlets with early access didn't dwell on "how smart Gemini is." What they noticed was a change in its position within the system. Gemini no longer exists only as an "application" or an "assistant entry point." It moves down to a lower system layer and takes on the job of understanding context, space, and task state.
The most direct manifestation of this change is the interaction model. On the Galaxy XR, users don't need to specify an object or repeat an operation path. Pointing, looking, and a simple description of intent are enough for the system to complete window management, content filtering, or state switching. This kind of experience answers the question of AI's practical value in real-world workflows far better than one-off Q&A does.
Gemini's real-time gaming suggestions. Footage sped up. Image source: Google
On the AI glasses, Gemini is effectively the core of the interaction. There is no learning curve and no mode switching; it cares only about "what the user sees" and "what the user wants to know." Compared with XR headsets or wired/wireless XR glasses, the change in experience here comes not from XR technology but from AI being placed in a natural, restrained position.
Abstract away this layer of experience and Google's overall strategy for Android XR is not hard to read. It is a very typical Nexus-era approach: the platform is defined by Google, the core experience is guaranteed by the system, and the specific hardware forms are explored in parallel by partners. The key is that Google keeps hold of the core, AI plus the operating-system ecosystem, without pouring too much energy into hardware.
Image source: Google
This matters a great deal for Google, which has just reclaimed the "AI crown" and pushed OpenAI into declaring a "code red."
In fact, the core battlefield for large AI models is expanding rapidly. Nicholas Thompson, CEO of The Atlantic, recently revealed on TikTok that OpenAI CEO Sam Altman said at a lunch meeting that the core competition in AI is not just about models but also about devices and the operating system, and that even Apple counts as a competitor for OpenAI.
It's not just Google and OpenAI that think this way. The emergence of Doubao Mobile Assistant and the "Doubao Phone" also shows that ByteDance shares the same view.
Image source: ByteDance
Looking back at Google, this strategy may seem a bit "conservative": no blockbuster device, no rush to settle on a single form. But the advantages are just as obvious. Android XR won't be written off prematurely because of one failed hardware bet. At a stage where interaction is not yet settled and AI is reshaping the entry points, Google chooses to stabilize the system first and let the hardware advance step by step. The pace may not be thrilling, but it shows plenty of patience.
Android XR Development May Become a New Paradigm
But are developers ready?
At the Android Show: XR, Google concentrated almost all of the key signals on the development side. The Android XR SDK keeps being updated. Mature tools such as Jetpack, Compose, ARCore, and Play Services are explicitly folded into the XR system. Even toolchains like Glimmer and Projected, which serve transparent displays and multi-form devices, are not tailored to a single piece of hardware but run across different glasses and headsets.
It's not just Google that is taking action. Meta recently launched a toolkit for developers to connect wearable devices (mainly AI glasses).
Image source: Meta
Meta's emphasis is on folding the glasses' capabilities into existing mobile applications. Developers don't need to write separate apps for Ray-Ban or Oakley glasses. Instead, within their iOS and Android applications, they can access the glasses' camera, microphone, and sensors through the SDK, turning the glasses into an extension of the mobile app's perception and interaction.
This is a relatively pragmatic choice: first solve the scale and scenarios, and then discuss platformization.
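To picture the shape of that choice, here is a purely hypothetical Kotlin sketch of the pattern; the GlassesDevice and GlassesCamera types below are invented for illustration and are not Meta's actual SDK surface. The phone app stays the host, and the glasses contribute a frame from the wearer's viewpoint as just another input.

```kotlin
// Hypothetical sketch of the "glasses as an extension of the phone app"
// pattern. GlassesDevice / GlassesCamera are invented names, not Meta's
// real SDK; only the integration shape is the point.

// What such a toolkit conceptually exposes to a mobile app.
interface GlassesCamera {
    suspend fun captureFrame(): ByteArray   // one JPEG frame from the glasses camera
}

interface GlassesDevice {
    val camera: GlassesCamera
    suspend fun connect(): Boolean          // pair with the glasses over Bluetooth/Wi-Fi
}

// An existing feature of the phone app, unaware of where the image came from.
suspend fun identifyProduct(imageJpeg: ByteArray): String =
    "placeholder result"                    // e.g. the app's own vision backend

// The only glasses-specific code: grab a frame from the wearer's viewpoint,
// then reuse the app's existing pipeline unchanged.
suspend fun identifyWhatUserIsLookingAt(glasses: GlassesDevice): String {
    check(glasses.connect()) { "glasses not reachable" }
    val frame = glasses.camera.captureFrame()
    return identifyProduct(frame)
}
```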
Chinese manufacturer Rokid has taken a longer view here. Well before smart glasses became a hot topic, Rokid had already built a preliminary application ecosystem around AR glasses. Since the release of the Rokid Glasses SDK this year, developers can build native, standalone applications for the glasses themselves (based on CXR-S) as well as control and companion applications on the phone (based on CXR-M). The latter is essentially the same idea as Meta's.
Image source: Rokid
The difference is that Rokid pushes development on the glasses side and the phone side at the same time. This is a model that treats the glasses as an independent computing terminal. The ecosystem story is "thicker," but it also puts Rokid's ability to actually build that ecosystem to the test.
Android XR sits squarely between these two approaches, and also above them. It doesn't force developers to choose between "native glasses applications" and "phone-side extended capabilities." Instead, it tries to unify the two paths through abstraction at the operating-system level.
Put simply, one and the same application can be an extension of Android capabilities onto the glasses, or a complete experience running in XR space. The device can be AI glasses, wired XR glasses, or even future wireless XR devices, as long as they all run Android XR.
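As a rough illustration of that unification, here is a sketch assuming the LocalSpatialCapabilities API from the Jetpack Compose for XR developer preview (the exact property name may differ): the same composable checks at runtime whether spatial UI is available and either places itself in space or stays a plain 2D screen.

```kotlin
// One application, two postures. Assumes Jetpack Compose for XR
// developer-preview APIs (androidx.xr.compose); names may differ.
import androidx.compose.runtime.Composable
import androidx.xr.compose.platform.LocalSpatialCapabilities
import androidx.xr.compose.spatial.Subspace
import androidx.xr.compose.subspace.SpatialPanel

@Composable
fun AdaptiveApp(content: @Composable () -> Unit) {
    if (LocalSpatialCapabilities.current.isSpatialUiEnabled) {
        // On an Android XR headset or glasses: host the content in space.
        Subspace {
            SpatialPanel { content() }
        }
    } else {
        // On a phone, tablet, or any non-spatial context: render as usual.
        content()
    }
}
```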
Google AI glasses with a screen. Image source: Google
More importantly, Gemini sits in the default position of this ecosystem. For developers, AI is no longer just an optional capability but a system-level intelligence layer that is always present and can be called. Development doesn't have to start from "building an XR application." Instead, developers can start from more practical problems and think about how to make applications understand the user