The Android XR Launch Event at a Glance: Google's "In-House" Products to Hit the Market Next Year
Early on the morning of December 9th, Google held a dedicated XR Edition launch event, systematically presenting its latest progress in the XR field for the first time.
Google positions Android XR as "the industry's first unified platform for extended reality devices."
The core of this platform lies in its unified Android foundation, which can directly extend the experience advantages of the smartphone era to the XR field.
Image: Google plans to build Android XR into the first unified XR platform
To bring the Android experience to the XR platform, Google has partnered with Samsung and Qualcomm. Samsung is responsible for the technical architecture and products, while Qualcomm is in charge of the chips.
In the Android era, Google's in-house Nexus series followed a similar pattern.
According to Google's definition, the design philosophy of Android XR is to view extended reality as a continuous spectrum of devices rather than a single product form. In other words, the product forms are diverse.
As Google introduced, Android XR hardware includes not only powerful head-mounted devices that deliver a deeply immersive experience, suited to high-intensity interactions such as entertainment and gaming, but also lightweight, fashionable AI smart glasses that can be worn all day.
Between the two sit hybrid-form devices like XREAL's Project Aura that combine features of both.
Google's "In - House" Android XR Products
To guarantee the Android XR experience, Google and Samsung jointly lead the technical architecture.
In terms of design, Google has taken a fashion-forward route for this first generation of "in-house" products, inviting Warby Parker, a leader in the eyewear industry, and Gentle Monster, a pioneering design brand, to participate in product definition and design. Two models are planned for official launch in 2026.
One is an audio version of the AI smart glasses focused on basic interaction. It is equipped with an advanced micro-speaker array, a high-definition camera, and ambient microphones. Users can converse with Gemini in natural speech to play music, make calls, and capture photos on the spot.
The other adds a key technology on top of this: a screen. Its miniature, low-power transparent display can discreetly show information such as navigation arrows, message notifications, and real-time translation subtitles at an appropriate position in the user's field of vision.
Since these two products are called smart glasses, their design philosophy is clearly defined as "first and foremost, they are glasses."
Accordingly, the design partners' approach is to make the complex sensors and computing modules invisible so that the glasses blend into everyday outfits.
The product experience is completely built around Gemini AI, aiming to provide "context-aware" seamless assistance.
For example, through visual search, the glasses can identify objects the user sees and provide relevant information immediately, such as identifying product ingredients in a supermarket.
The real-time translation function supports natural face-to-face conversations, breaking down language barriers.
Context-aware reminders that combine geographic location with visual recognition can proactively recommend scenic spots during a trip or guide users to the exact boarding point in a complex transportation hub.
In addition, the glasses incorporate Google's latest multimodal AI capabilities. Users can take a photo with a single voice command and use a lightweight image model similar to Nano Banana for real-time creative editing, such as adding playful virtual elements to group photos, giving smart interaction a sense of fun and personality.
New Category of "Wired XR Glasses"
Since the product forms are diverse, smart glasses alone aren't enough, so Google has also partnered with XREAL to launch a new device category between head-mounted displays and AI glasses: wired XR glasses.
The first product, Project Aura, is built around a split architecture (glasses plus a computing unit) and is expected to launch officially in 2026.
Users wear a lightweight display terminal, similar in form to ordinary glasses and based on an optical see-through solution, connected via a cable to an independent, portable computing core called the "Puck."
The Puck houses the main computing unit and battery, and its surface doubles as a touchpad, striking a balance between computing power and wearing comfort.
Project Aura's product experience is not new and can be summarized as high-precision overlay of virtual window content onto real-world imagery. Interaction is intuitive: users manipulate content in space directly with their hands.
As Google demonstrated, Project Aura can be used as an independent XR device with support from Google's application ecosystem, or as an external screen for mobile devices. Of course, it also integrates Gemini's AI capabilities.
Samsung Ties Itself to Google with a Head-Mounted Display
The cooperation between Samsung and Google on Android XR not only includes helping to build Google's "in-house" products but also involves Samsung's own head-mounted display, the Galaxy XR.
Samsung actually launched this product back in October, so this announcement mainly covers upgrades to three functions.
First, the PC Connect function completely breaks down the barrier between the head-mounted display and personal computers.
Users can now seamlessly "pull" the entire desktop of a Windows PC, or any single application window, onto the head-mounted display's virtual screen wirelessly and with low latency.
They can then work in professional software such as Photoshop or CAD in a fully immersive environment free of physical boundaries. It also opens up a new form of entertainment: streaming high-performance PC games to the head-mounted display.
Second, to address the social awkwardness of users "disappearing" from video calls while wearing a head-mounted display, Google has launched the Likeness function. This technology lets users create a highly realistic digital avatar of themselves. The avatar is not a static model: using the head-mounted display's sensors, it mirrors the user's subtle facial expressions, lip movements, and natural head rotations in real time.
In video conferences such as Google Meet, participants will see a lifelike and expression-synchronized "you," greatly enhancing the realism and presence of virtual social interactions and making remote collaboration and communication more natural and friendly.
Third, to extend the head-mounted display's usage scenarios to mobile environments, a new Travel Mode has been added. It uses advanced algorithms to intelligently compensate for the interference caused by external motion, significantly stabilizing the virtual image in the user's field of vision and reducing the dizziness caused by image jitter.
Beyond these three immediately available functions, Google also announced a core technology planned for launch next year: system-level automatic spatialization. Working at the system level, it can automatically and in real time convert any ordinary 2D application interface into an immersive 3D experience with stereoscopic depth and a sense of space.
Because this requires no separate adaptation by developers, it is expected to instantly and greatly enrich the XR ecosystem's content library and "spatialize" the entire digital world, pointing to a future in which immersive interaction moves from specific applications to a universal experience.
Updates to the Developer Toolkit
For the developer community, Google has simultaneously released the Android XR SDK Developer Preview 3 (DP3).
The core of this update is that full development support for AI smart glasses is officially opened for the first time, marking the Android XR platform's formal extension from immersive head-mounted displays to all-day wearable smart devices.
To help developers efficiently build future-oriented eyewear applications, Google has launched two dedicated development libraries:
Jetpack Glimmer is a modern UI toolkit designed specifically for transparent displays. Built on Compose, it provides carefully optimized interface components (such as cards and lists) that fit the interaction logic of glasses, enabling developers to create clear, elegant visual experiences that interfere minimally with the real world; a rough sketch of the kind of UI it targets follows.
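Google has not published Glimmer's full component catalogue in this announcement, so the snippet below is a plain Compose sketch, not actual Glimmer API, of the kind of glanceable card such a toolkit targets; the RideStatusCard name and styling values are illustrative assumptions.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp
import androidx.compose.ui.unit.sp

// Plain-Compose sketch of a glanceable card for a transparent glasses display.
// On optical see-through hardware, dark pixels read as transparent, so the
// layout leaves the background unset and relies on bright, high-contrast text.
@Composable
fun RideStatusCard(etaMinutes: Int, plate: String) {
    Column(modifier = Modifier.padding(16.dp)) {
        Text(text = "Your ride", color = Color.White, fontSize = 20.sp)
        Text(text = "Arriving in $etaMinutes min", color = Color(0xFFB3E5FC), fontSize = 16.sp)
        Text(text = "Plate $plate", color = Color(0xFFB3E5FC), fontSize = 16.sp)
    }
}
```

The real Glimmer components presumably add glasses-specific behavior (high-contrast theming, timed dismissal, focus handling) on top of this kind of minimal layout.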
Jetpack Projected is a foundational API library aimed at the initial challenges of ecosystem expansion. It lets developers quickly "project" the core functions and business logic of existing Android mobile applications onto the glasses platform through a simple interface, including management of audio and camera access, thereby achieving a smooth transition from the phone to the glasses and significantly simplifying development; the pattern it targets is sketched below.
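Projected's concrete entry points were likewise not detailed at the event; the following plain-Kotlin sketch only illustrates the reuse pattern it is meant to encourage, with all class names hypothetical: keep business logic in a shared class and expose thin, surface-specific presentations for the phone and for a projected glasses glance.

```kotlin
// Hypothetical names throughout; this is the reuse pattern, not the Projected API.

// Shared business logic that both surfaces reuse unchanged.
class TripRepository {
    fun currentEtaMinutes(): Int = 7        // placeholder data source
    fun vehiclePlate(): String = "ABC-123"  // placeholder data source
}

// Phone surface: the full mobile UI would live here (details omitted).
class PhoneTripScreen(private val repo: TripRepository) {
    fun render(): String =
        "Trip details - ETA ${repo.currentEtaMinutes()} min, plate ${repo.vehiclePlate()}"
}

// Projected glasses surface: a compact, glanceable summary of the same state.
class GlassesTripGlance(private val repo: TripRepository) {
    fun render(): String =
        "${repo.currentEtaMinutes()} min / ${repo.vehiclePlate()}"
}

fun main() {
    val repo = TripRepository()
    println(PhoneTripScreen(repo).render())   // Trip details - ETA 7 min, plate ABC-123
    println(GlassesTripGlance(repo).render()) // 7 min / ABC-123
}
```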
In terms of augmented reality capabilities, the SDK integrates ARCore for Jetpack XR and adds a key Geospatial API. This lets developers use a global-scale visual positioning service to build applications grounded in precise geographic location and real-world scene recognition, such as real-time AR walking navigation.
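For orientation, here is a minimal sketch using the existing ARCore Geospatial API (com.google.ar.core); the Jetpack XR bindings announced in DP3 may expose the same capability through different classes. It enables geospatial mode and drops an anchor at a fixed latitude/longitude (placeholder coordinates) so that AR content, say a navigation arrow, stays pinned to a real-world location.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Earth
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Places a world-anchored marker via ARCore's Geospatial API.
// Assumes ARCore is installed and the Session was created with camera
// and location permissions already granted.
fun placeNavigationAnchor(session: Session): Anchor? {
    // Turn on the Visual Positioning Service-backed geospatial mode.
    session.configure(session.config.apply {
        geospatialMode = Config.GeospatialMode.ENABLED
    })

    val earth: Earth = session.earth ?: return null
    if (earth.trackingState != TrackingState.TRACKING) return null

    // Placeholder pickup-point coordinates; altitude is meters above the
    // WGS84 ellipsoid, orientation is an identity quaternion (qx, qy, qz, qw).
    return earth.createAnchor(37.4220, -122.0841, 5.0, 0f, 0f, 0f, 1f)
}
```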
To build an open and vibrant ecosystem, Android XR follows the industry-wide open OpenXR standard.
In games and high-immersion content creation, Unreal Engine can now target Android XR through its native OpenXR support. Google also announced that an official dedicated plugin, which will integrate deeper platform features such as hand tracking, is expected in early 2026 to further unleash the engine's creative potential.
Meanwhile, the open-source Godot engine has provided full support for Android XR through an official plugin in its v4.2.2 stable version, offering a powerful and flexible option for independent developers and small teams.
A vivid developer case comes from Uber, which demonstrated how to use the new tools to reshape the travel experience for AI smart glasses with displays.
When passengers arrive at the airport, the glasses' transparent display shows the trip status and estimated arrival time, so users can see the information without looking down at a phone. With integrated AR path guidance, virtual arrows are overlaid clearly on the real-world terminal environment to guide users to the exact pickup point.
Finally, the vehicle information (such as the license plate number) will be directly displayed in the field of vision. Users can even conveniently call the driver through the glasses. The whole process achieves a seamless integration of information and the physical world, allowing users to keep their heads up and hands free.
This article is from Tencent Technology, author: Jin Lu. Republished by 36Kr with permission.