
AI has come down from the pedestal and entered the gravity well of hardware. Only by doing "boring" things can you make money | Observations from CES 2026

Friends of 36Kr · 2026-01-12 07:32
For consumers, the era of "magic" is over; but for the industry, the era of "business" is just beginning.

Standing at CES 2026 and looking back on January 2024, I vividly remember experiencing CES at the peak of the AI frenzy. That year, Samsung spared no expense to buy the largest billboard in the central hall of the LVCC, and the slogan "AI for ALL" was everywhere.

Every major company was shouting at the top of their lungs about their grand visions and investments in AI.

However, at CES in 2026, when I walked into the main exhibition halls of those home appliance and automotive giants again, although AI still existed, it had retreated from the center of the stage to the inconspicuous small print on the product introduction signs. It was at that moment that a strong sense of déjà vu hit me.

LG was still promoting its AI refrigerators, demonstrating the same old logic of "recommending recipes based on ingredients." The AI microwave ovens still sported huge screens, stuffed with functions that are neither useful nor quite worth scrapping. The manufacturers' attitude towards AI was clearly no longer sincere. At LG's TV booth, the staff couldn't even explain the logic behind the so-called AI-upscaled picture quality.

Panasonic's exhibition hall made me feel like I had traveled back in time. Looking at the health "magic mirror" that has you pull facial expressions in front of it while AI assesses your skin, I even felt a momentary sense of disorientation, because this was clearly one of the hottest concepts of 2024. Ironically, that year's competing products were still on display at the Venetian Expo this year, with similar functions, yet still touted as "CES 2026 Products of the Year."

This sense of déjà vu reached its peak at the AFEELA booth. This concept car, jointly developed by Honda and Sony and attracting countless eyes in 2024, looked exactly the same as it did two years ago. Standing in front of the car, I was in a daze, not knowing what year it was.

Of course, there were changes, but they were rather disheartening. For example, Samsung's once-ambitious AI home hub robot, Ballie, was nowhere to be seen this year. Meanwhile, its old rival LG upgraded that year's cute Q9 into CLOiD, a trendy humanoid robot with hands.

Although I knew very well that these two robots were just concept vessels for home-appliance giants to materialize their AI visions, their changes and disappearance still left me a bit disappointed.

This confirms that the all-out AI frenzy has indeed subsided.

This is not just my personal impression but the common reality facing the entire industry in 2026: we are in the "Trough of Disillusionment" on the Gartner Hype Cycle. The crazy hype of 2023-2024 has ebbed, leaving on the beach a large number of hardware remains that failed to deliver on their promises, along with users' deep fatigue with so-called "AI magic."

Those large enterprises that had been stumbling along for a few years finally realized that, with the current product forms, it is still too early to deliver truly meaningful AI to consumers.

Another important area claiming to be AI-powered is embodied intelligence. The appearance of the Chinese robot legion at CES is almost an unavoidable topic for the media. But in my opinion, this is just a replay of last August's World Artificial Intelligence Conference (WAIC), except that the stage has moved to Las Vegas and the scale is smaller.

I once pondered what the connection was between CES, a consumer electronics show, and these robots that, without remote control, can only repeat a few set routines. It wasn't until I saw Vita Power's Vbot robotic dog pulling a camping trailer around the venue that I realized these robots might finally have found a use.

On my way back, when I was struggling with heavy luggage at the airport, the thought that flashed through my mind was: if the price is right, I really want to buy a Vbot to be my fully automatic laborer.

However, as a consumer, I still can't understand what else these robots can do for me.

Of course, except for the robotic vacuums. Roborock's G-ROVER, a hit at this year's show, grows legs to climb over steps when it encounters them. It makes me wonder whether your first humanoid robot is, in fact, already cleaning your floor.

This is the overall picture of the main exhibition area at CES this year: AI is seeping into old products in a feeble and largely useless way.

Only when I went to the booths of startups at the Venetian Expo area was the word "AI" once again placed in the spotlight. For these startups, this may be their only weapon to break into the mature and competitive market.

But the trend has changed. In 2024, we saw a bunch of AI hardware such as Rabbit R1 and AI Pin, which claimed to "revolutionize the mobile phone." They were ambitious but ultimately failed, becoming one of the numerous failed challengers in the history of technology.

This year, there were no new products with such grand narratives at CES. Entrepreneurs no longer talk about revolution but quietly integrate AI into specific niche scenarios, trying to carve out a piece of the market from the gaps left by the giants.

Since the large companies are showing signs of fatigue and retreat, let's turn our attention more to these innovative enterprises and see what kind of real future they are trying to build after the bubble has burst.

01

Silent Integration

In the Venetian Expo area, if there has to be a category with an almost 100% AI-content rate, it must be health monitoring and sleep monitoring. However, their integration of AI is not for interaction but for more thorough monitoring.

Thanks to this year's qualitative leap in large models' multi-modal understanding, the complex, scattered information that could not be processed holistically in the past can finally be handled in one pass. Stack on more mature sensors, and the collected data becomes richer and more dimensional.

For example, many sleep-technology companies at CES 2026 abandoned wearables altogether. Take Sleepal's AI Lamp: it combines a millimeter-wave radar for capturing body movements, a thermal sensor for monitoring heat distribution, and an acoustic sensor for auscultation, building a high-fidelity "environmental digital twin."

In this twin world, AI's perceptual ability is greatly enhanced. It can "see" your heartbeat and detect every turn. This is what lets AI stay silent with confidence: it has mastered all the information needed for decision-making before the user even speaks.
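The fusion described above can be sketched in a few lines. This is a hedged illustration, not Sleepal's actual pipeline: the field names, units, and thresholds are all invented, and each sensor simply contributes one facet of a shared state.

```python
# Hypothetical sketch of multi-modal fusion into an "environmental digital twin".
# All field names and thresholds are illustrative assumptions.

def fuse(radar, thermal, acoustic):
    """Merge per-sensor readings into one unified observation."""
    return {
        # Millimeter-wave radar resolves chest micro-motion (Hz -> breaths/min).
        "breathing_rate": radar["chest_motion_hz"] * 60,
        # Thermal sensor sees body heat concentrated in the bed zone.
        "in_bed": thermal["bed_zone_temp_c"] > 30,
        # Acoustic channel flags snoring above a loudness threshold.
        "snoring": acoustic["snore_db"] > 40,
    }
```

The point of the design is that no single sensor is trusted alone: the radar, thermal, and acoustic channels cross-confirm each other before the AI acts on the state.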

Another silent strength of AI lies in the processing of this massive amount of data.

In the past, the analysis pipelines for such complex, medical-grade information were moats that giants like Apple built at great cost. Now, large language models (LLMs) have flattened that barrier, and openings for startups have emerged. This is why new AI products are springing up like mushrooms in sleep health, a field that does not demand strict medical-grade accuracy.

At first, I thought this was quite simple, but after in-depth conversations with several brands, I found the bar is actually not low. Since LLMs have no persistent memory, you need to refine the scattered data into records that support long-term analysis and then run them through agents that follow medical logic for centralized processing. This often involves a whole, complex chain of agents.
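The data-refinement step these brands described might look roughly like this. It is a minimal sketch under stated assumptions: the summary fields, the `NightlySummary` record, and the prompt wording are all hypothetical, and the actual LLM call at the end of the chain is omitted.

```python
from dataclasses import dataclass
from statistics import mean

# Since LLMs keep no state between calls, raw sensor streams are first
# condensed into compact nightly records that can later be replayed into
# a model's context for longitudinal analysis.

@dataclass
class NightlySummary:
    date: str
    avg_heart_rate: float
    turn_count: int
    snore_minutes: int

def summarize_night(date, heart_rate_samples, movement_events, snore_minutes):
    """Refine scattered raw data into one record fit for long-term analysis."""
    return NightlySummary(
        date=date,
        avg_heart_rate=round(mean(heart_rate_samples), 1),
        turn_count=sum(1 for e in movement_events if e == "turn"),
        snore_minutes=snore_minutes,
    )

def build_agent_prompt(history):
    """First stage of a simplified agent chain: pack the summaries into a
    prompt for a downstream 'sleep physician' agent (LLM call not shown)."""
    lines = [
        f"{n.date}: HR {n.avg_heart_rate}, turns {n.turn_count}, "
        f"snoring {n.snore_minutes} min"
        for n in history
    ]
    return "Analyze this sleep history for long-term trends:\n" + "\n".join(lines)
```

In a real product, several such agents would hand results to each other (triage, trend detection, recommendation), which is presumably what the exhibitors meant by a "chain of agents."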

(Lunawake's long - term AI sleep monitoring and assistance products)

In any case, this is much faster than hand-coding a fixed set of logic. Moreover, this non-contact, multi-modal data capture finally resolves the paradox of "wearing a device to monitor sleep that itself keeps you from falling asleep."

Of course, since this all - knowing eye is open, why not take action directly?

With the help of the Internet of Things (IoT), intelligent sleep appliances have become an established category. Building on that all-knowing "environmental digital twin," SleepBot's AI, for example, analyzes your needs and then mobilizes the devices in your home: raising the oxygen level to help you sleep more deeply, or inflating one side of the pillow to physically stop snoring. A complete closed loop from monitoring to intervention is thus formed.
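The decision side of that closed loop can be sketched as a simple policy that maps the sensed state to IoT commands. This is an illustrative assumption, not SleepBot's actual API: the device names, command strings, and thresholds are invented.

```python
# Hypothetical monitoring-to-intervention policy. Device names, commands,
# and thresholds are illustrative, not any vendor's real API.

def plan_interventions(state):
    """Map the sensed 'digital twin' state to a list of IoT commands."""
    commands = []
    if state.get("snoring"):
        # Inflate one side of the pillow to shift head position.
        commands.append(("pillow", "inflate_left"))
    if state.get("blood_oxygen", 100) < 94:
        # Raise ambient oxygen when saturation dips.
        commands.append(("oxygen_generator", "on"))
    if state.get("room_temp_c", 22) > 26:
        # Cool the room toward a sleep-friendly temperature.
        commands.append(("ac", "cool"))
    return commands
```

The appeal of this architecture is that monitoring and intervention share one state object, so each night's outcome can feed straight back into the next night's policy.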

All of this happens silently. The only interaction may exist in features like the "AI sleep coach." You can ask the AI "how do I improve my sleep quality," but frankly it is more pleasant to let it do the work directly than to sit through its lectures.

"Silent improvement" is the best way to use AI in this category.

In fact, "more sensors + AI analysis" has almost become the mainstream underlying logic for AI to intervene in traditional industries at present. At the exhibition site, I saw a series of innovative companies working based on these two aspects, such as those for water pollution analysis and pedestrian flow analysis.

The pet market has taken this logic to the extreme.

I saw all kinds of AI products that monitor pets' eating, drinking, body temperature, sleep, and even which dog they fought with. I call it extreme because if you used this on a person, filling your home with cameras and sensors to track their every move and dispense advice, they would probably smash the devices.

But in our subconscious, pets are supposed to be "controlled." So, it doesn't matter.

There's even a pet-hardware company that invented a cat-face recognition feeder. It can accurately identify which cat shouldn't eat and lock the food bowl when that cat approaches. If a person were managed like this, they'd smash the machine.
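The feeder's access-control logic reduces to a tiny rule once recognition is in place. This sketch stubs out the recognition model entirely; the cat names and the `frame` shape are invented for illustration, and a real device would infer the identity from camera pixels.

```python
# Illustrative sketch of a cat-face-recognition feeder's lock logic.
# The recognition step is a stub; names and data shapes are assumptions.

RESTRICTED_CATS = {"mochi"}  # e.g. a cat the vet has put on a diet

def recognize_cat(frame):
    """Placeholder for an on-device cat-face recognition model."""
    return frame.get("label")  # a real system infers this from the image

def bowl_state(frame):
    """Lock the bowl when a restricted cat approaches, else leave it open."""
    return "locked" if recognize_cat(frame) in RESTRICTED_CATS else "open"
```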

Another group we subconsciously think should be controlled is children.

When I talked to the person in charge of a domestic AI educational companion robot company, he told me that if the robot hears a child talking about difficulties at school, it will pass this information on to the parents.

The intention sounds good, but it will only teach children, from a young age, that a friend can stab you in the back.

At the end of the conversation, the person in charge also revealed that the product sells well because many parents hope that children will stop using their phones all day and learn more about these future technologies.

At that moment, I suddenly realized that in this context, there may be no real space for true companionship in children's worlds.

02

Using a Semi-Native Approach to Break into Blue Ocean Markets

Looking at the AI hardware still standing at CES this year, apart from AI glasses, which remain a "bet on the future" that needs several more years to balance price and performance, the only products that have truly broken out are AI voice recorders and AI personal recorders (such as Looki).

They are not new species that emerged out of nowhere but rather AI - enabled expansions of existing products.

By leveraging the relatively mature single - point capabilities of AI to empower a mature hardware category, they bring about a qualitative change in user experience.

They have survived because they effectively address the unmet pain points of old products. The core purpose of recording is subsequent organization and summarization, and AI voice recorders meet this need perfectly. The same goes for personal recorders: the purpose of shooting is recollection and sharing, and AI's multi-modal capabilities can accurately locate highlight moments and automatically edit videos, which is undoubtedly a real need.

Compared with radical AI Native products like AI Pin, which are born to carry AI capabilities and hope to completely