"Lobster" is booming, and opportunity is rippling through the hardware industry. Here are the 10 latest hardware projects.
Text | Zhou Xinyu, Zhong Chudi
Editor | Zhou Xinyu
If you visit Shenzhen's Huaqiangbei after the Spring Festival, you're likely to witness the following scene:
At the second-hand device wholesale stalls in the SEG Electronics World Building, three or four customers with different accents ask about "Lobster"-compatible hardware within the span of a dozen minutes. An all-in-one machine with "Lobster" pre-deployed, elegantly nicknamed a "shrimp tank", has seen its price soar from just over 1,000 yuan to 2,000 yuan recently.
Huaqiangbei is a window onto entrepreneurial trends in the technology industry. AI glasses, voice recorders, dolls... here, everything can be made "Lobster"-compatible. A technology industry insider told us, "Its appearance in Huaqiangbei means it has reached the consumer-grade level."
△ A gathering of thousands of people for Lobster hardware in Shenzhen in early March. Image source: inno100.
Lobster, formally known as OpenClaw, is currently the world's hottest open-source Agent framework and the trigger for this wave of hardware fever. With the high-profile entry of major companies like Tencent and the introduction of support policies in various regions, this local Agent that can work for users 24/7 has quickly become well known to the general public.
When an immature technology rapidly moves from the hands of geeks to the mass market, the pain points of implementation caused by its "immaturity" will be infinitely magnified.
For example, Lobster has a high deployment threshold, cannot call cloud models offline, and is prone to losing control. Its cost also scares many people away: during task execution, Lobster's calls to large-model APIs consume a staggering number of Tokens, and many users burn through hundreds of dollars in a single day.
Hardware has become the most direct fix for these pain points.
By packaging Lobster and Skills (skill documents) together with a locally deployed model on external hardware, entrepreneurs can spare users the complex deployment process; and because model calls stay local, no additional Tokens are consumed.
One of the most typical examples is the AI Infra company Tiiny AI, founded in early 2025. In March 2026, its product Tiiny AI Pocket Lab, a box only the size of an iPhone 17 Pro Max that encapsulates a large model (supporting up to 120B parameters at int4), exceeded $1 million in sales within 5 hours of launching on the North American crowdfunding platform Kickstarter. The crowdfunding total has since reached $2.5 million.
△ Tiiny AI's crowdfunding page.
"Previously, the user group was basically geeks. In the past few months, many Lobster users have flocked in," Eco Lee, vice-president and commercialization director of Tiiny AI, told us. Many users are drawn by the "runs the model locally" pitch: pay once for the hardware, and Lobster can call the model offline without limit.
The FOMO (fear of missing out) sentiment quickly swept through the primary market. Eco said that Tiiny AI has lately been receiving invitations from twenty or thirty investment institutions a week on average.
He Ming (a pseudonym), an investor at a US-dollar fund, contacted more than a dozen hardware startups in a single week in early March and attended five or six offline Lobster hardware meetups herself. "Our investment strategy for the first half of the year is to invest in Agent Boxes," she said.
In her view, the industry's FOMO about hardware comes with a shift in perception: AI applications have no moat, and software has even less of one than hardware.
Coding tools like Claude Code and Lovable have significantly lowered the threshold for software development. To enter the hardware arena, however, entrepreneurs need experience in joint software-hardware development, plus supply-chain resources.
As Guo Yi, head of the intelligent hardware ecosystem at Mianbi Intelligence, put it, an AgentBox is not simply a hardware shell around software; it is about delivering an intelligent experience close to the cloud's in an environment with "tighter resources and more constraints".
On March 19, 2026, Mianbi Intelligence released its Lobster intelligent hardware product, EdgeClaw Box. Guo Yi told us that merely getting edge-side models to run locally requires deep adaptation between the hardware and the various edge-side models. "It's not just about 'getting it to run', but about 'running fast, running well, and running stably'."
Regarding the combination of Lobster and hardware, we found that the startup projects in the market can be roughly divided into three categories:
Plug-and-play Lobster boxes: deploy the Agent, Skills, and models on hardware to give devices such as computers and mobile phones "plug-and-play" access, solving problems like difficult deployment and high Token consumption;
Multimodal sensing: deploy Lobster on hardware equipped with sensors such as cameras and microphones, enriching the ways Lobster acquires user instructions and context;
Multi-device management platforms: use Lobster as the control platform for multiple hardware devices; Lobster interprets and analyzes user instructions and drives different devices to meet user needs.
However, every wave carries some froth. One investor received dozens of business plans (BPs) pitched as "Lobster all-in-one machines" but didn't invest in any of them.
He found that most of these projects are "money-making schemes, not real entrepreneurship": grab a quick profit during the boom, then exit fast. He sketched the typical profile of such "entrepreneurs": they did AR glasses around 2020, switched to AI glasses in 2024, and are now back with "Lobster glasses".
Another phenomenon leaves investors both pleased and worried: the speed at which large companies have entered this wave is unexpected.
"I've never seen large companies respond to new technologies so quickly," He Ming sighed. Since March 2026, large companies such as Tencent, ByteDance, DingTalk, and Baidu have all launched one-click deployment for Lobster and more cost-effective model subscription plans.
Mobile phone manufacturers are eager to get in as well. On March 19, Xiaomi announced that its first phone with built-in Lobster, the Xiaomi miclaw, has entered closed-beta testing.
The giants' entry has an upside: large companies can educate the market quickly. But hardware entrepreneurs now have to compete with large companies, and with the phone itself, for the entry point.
A Wen, co-founder of the Hangzhou AI company "Shi Zhi Gui Ji", recently launched EinClaw (currently a prototype), a walkie-talkie that lets users talk to Lobster directly by voice. In the comments on his Xiaohongshu account, one reply all but flooded the page: "Don't you have a mobile phone?"
What exactly are the barriers to intelligent hardware? In interviews, the answers from several entrepreneurs and investors were: Product definition, application ecosystem, and IP.
"Just like Manus and OpenClaw, their strength lies not in barriers but in the definition of the Agent product paradigm," a hardware entrepreneur said. "All subsequent similar products can only strive to benchmark and be the 'OpenClaw in a certain field'."
The competitiveness of hardware lies in software, another point entrepreneurs mentioned frequently. Tiiny AI's core members drew an analogy to NVIDIA's development platform CUDA: "Hardware devices need to keep the development ecosystem open so that developers can come in, develop more functions, and improve the product."
The example A Wen gave was the AI voice recorder Plaud, which has sold more than one million units. "Plaud's advantage is not the card-shaped recorder hardware itself, but the large number of high-quality, scenario-rich templates built in, which are hard for other manufacturers to copy."
Facing the battle with phone makers for the entry point, Li Dahai, co-founder and CEO of Mianbi Intelligence, believes there's no need to fight a meaningless battle for the "general assistant" role, which is already a red-ocean market:
"Hardware entrepreneurs are better off betting on things that mobile phones are naturally not good at, such as heavier workloads, clearer hardware isolation and permission boundaries, and professional processes and toolchains for developers and industries."
Establishing an emotional connection with users through IP is another strategy for hardware entrepreneurs.
AI emotion recognition, especially recognizing human emotion in voice interactions, is A Wen's current research direction. In his view, as more sensors are connected, Agents will interact with users more in the physical world, laying the groundwork for emotional companionship.
In fact, He Ming admitted to us that investing in hardware today still involves a "gambler's mentality": she can't be sure whether Lobster's popularity is a passing fad or a genuine explosion of demand.
The fiasco of the AI hardware wave in 2024 still gives her the jitters. Most of the glasses, bracelets, pins, and necklaces that performed well on Kickstarter didn't survive the following spring.
Still, many entrepreneurs are optimistic. In Eco's view, the continued evolution of edge-side large-model capabilities and the leap in edge-side chip computing power will push edge intelligence to an explosion point within the next 1-2 years, making it the mainstream interaction entry point of the AGI era.
Li Dahai told us that Lobster, as a product form, may cool down, but the user needs it has amplified are constant.
What matters is that, through Lobster, entrepreneurs see users' more fundamental problems: how to keep data off the cloud and run Agents offline. "These constraints are structural and won't disappear with the popularity of any one product form," Li Dahai said.
Through interviews with multiple industry insiders and the collation of public information, we've compiled a list of 10 latest "Lobster + hardware" projects:
I. Plug-and-play Lobster boxes
Tiiny AI Pocket Lab
One-sentence introduction: Tiiny AI Pocket Lab is a plug-and-play AgentBox that runs high-performance large language models efficiently and locally on consumer-grade chips.
Product overview: In appearance, Tiiny AI is a box the size of an iPhone 17 Pro Max, weighing only 300 g; it fits in a pocket.
Through USB, Tiiny AI connects to computers, tablets, and mobile phones of different performance levels, running large models and Agents locally without taking up extra device memory or incurring cloud Token consumption.
To protect data security, the system defaults to storing user data, credentials, and workflows locally.
Notably, Tiiny AI doesn't use the GPUs common in high-end AI computers, such as NVIDIA's or AMD's. Instead, it uses a heterogeneous SoC (system-on-a-chip) + dNPU (dedicated AI acceleration unit) architecture to run large language models with up to 120B parameters locally.
△ After connecting a MacBook Neo (equipped with an A18 Pro mobile phone chip) to Tiiny AI, technology blogger Alex Ziskind successfully ran GPT-OSS-120B. Image source: YouTube@Alex Ziskind
The technology that lets a 120B-class model run efficiently on consumer-grade chips is PowerInfer, the inference acceleration engine proposed by the founding team.
PowerInfer is, in essence, a high-speed inference engine for deploying large language models locally, letting them run at speed on consumer-grade PCs.
In 2024, the team open-sourced a working example of PowerInfer on GitHub: on a single NVIDIA RTX 4090 GPU, it could run a 175B-parameter model at 11 times the speed of conventional solutions.
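How does such a large model fit on consumer silicon? PowerInfer's publicly documented core observation is that neuron activations in large language models are highly skewed: a small "hot" subset fires on most inputs and is worth pinning to the fast accelerator, while the rarely firing "cold" majority can stay in slower memory. The toy sketch below illustrates only that hot/cold partitioning idea with made-up activation counts; it is not the engine's actual code:

```python
def split_hot_cold(activation_counts, hot_fraction=0.2):
    """Partition neuron indices into 'hot' (frequently activated, kept on
    the fast accelerator) and 'cold' (rarely activated, served from slower
    memory). Purely illustrative of the idea, not the real engine."""
    # Rank neurons by how often they fired during profiling.
    order = sorted(range(len(activation_counts)),
                   key=lambda i: activation_counts[i], reverse=True)
    n_hot = max(1, int(len(order) * hot_fraction))
    return set(order[:n_hot]), set(order[n_hot:])

# Toy profile: skewed, power-law-like activation frequencies for 10 neurons.
counts = [950, 900, 40, 30, 870, 20, 10, 25, 15, 5]
hot, cold = split_hot_cold(counts, hot_fraction=0.3)
print(sorted(hot))  # → [0, 1, 4]
```

With a skewed profile like this, a small hot set handles most of the compute, which is why a modest accelerator plus ordinary memory can serve a model far larger than the accelerator alone could hold.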
Why choose a 120B-sized model? The team told "Intelligent Emergence" that 100B parameters is the critical point where local AI crosses from "toy-level" to "productivity-level". "At this watershed, we chose for the first time to bring GPT-4o-level complex reasoning out of the cloud and into our pockets."
They found that models around 100B in size already have reasoning logic, and their tool - calling ability can meet 70% - 80% of people's usage scenarios.
Currently, Tiiny AI OS supports more than 50 mainstream open-source models and more than 100 Agent tools, such as OpenClaw.
Business model: The product was crowdfunded on Kickstarter at a price of $1,399. The crowdfunding amount exceeded $1 million within 5 hours of its launch. The product is expected to be delivered in August 2026.
In the future, Tiiny AI plans to open up its software ecosystem to support users in importing more open-source local models to meet personalized needs.
Team overview: Currently, the team has about 30 members. The founding team members come from Shanghai Jiao Tong University, the Massachusetts Institute of Technology, Stanford University, the Hong Kong University of Science and Technology, as well as Intel and Meta. The high-speed inference engine PowerInfer proposed by the team has received more than 9k stars on GitHub.
The other members are from companies such as Apple and Xiaomi.
Financing situation: in progress.
EdgeClaw Box - Mianbi Intelligence
One - sentence introduction: A box that emphasizes data security and can run Lobster even without an internet connection.
△ EdgeClaw Box. Image source: Official supply
Product overview: EdgeClaw Box comes with Mianbi's self - developed Agent framework EdgeClaw, common Skills, and a local small model.
Data security protection is EdgeClaw Box's most emphasized selling point. By implanting Hooks (code inserted at key nodes to perform additional tasks) into OpenClaw's execution flow, EdgeClaw automatically classifies user messages, tool-calling parameters, and Agent outputs into three sensitivity levels, S1-S3, from lowest to highest:
S1 tasks can be processed in the cloud. S2 data must be desensitized before being sent to the cloud. S3 tasks can only be processed locally and offline, with sensitive data physically isolated.
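The three-level rule can be made concrete with a small sketch. This is a hypothetical illustration of the routing logic as described in this article; the actual EdgeClaw hook interface is not public, and the redact() helper is invented here for the example:

```python
def redact(text: str) -> str:
    """Stand-in desensitization step: mask digits before upload.
    (Illustrative only; the real desensitization method isn't disclosed.)"""
    return "".join("*" if c.isdigit() else c for c in text)

def route(level: str, payload: str) -> tuple[str, str]:
    """Return (destination, payload actually sent) for a sensitivity level."""
    if level == "S1":   # low sensitivity: send to the cloud as-is
        return ("cloud", payload)
    if level == "S2":   # medium: desensitize first, then send to the cloud
        return ("cloud", redact(payload))
    if level == "S3":   # high: never leaves the device
        return ("local", payload)
    raise ValueError(f"unknown sensitivity level: {level}")

print(route("S2", "card 1234"))  # → ('cloud', 'card ****')
```

The key property is that an S3 payload is never handed to any network path at all, which is what "physical isolation" of sensitive data amounts to in practice.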
Meanwhile, through cloud-edge collaboration, EdgeClaw can process tasks offline locally and save Token consumption. For example