Author: Deep Web Tencent News
Just as OpenClaw, riding a wave of enthusiasm for “shrimp farming” and controversy over “shrimp killing,” has become a top trend in the AI field, the leading smartphone manufacturers betting on edge AI can no longer sit still: they are rushing to deploy and “tame” their own Claw.
On March 6, Xiaomi’s mobile agent, Xiaomi miclaw, began limited internal testing via invitation code, making Xiaomi the first domestic smartphone manufacturer to internally test a “lobster.” Huawei, Honor, OPPO, and others soon announced internal tests of their own Claws.
Among them, Huawei added an OpenClaw mode to Xiao Yi and then released Xiao Yi Claw Beta; Honor launched the “Honor Lobster Universe,” which supports one-click “shrimp farming” on PCs and tablets, with plans to extend lobster access to other devices in its ecosystem; and OPPO’s ColorOS design director Chen Xi showcased some features of Xiao Bu Claw on social media, noting that “Xiao Bu Claw still has security and other issues to resolve.”
In other words, smartphone manufacturers’ “shrimp farming” is currently mainly in the internal testing phase, with no clear timeline for large-scale release.
For example, Xiaomi’s miclaw is currently only available for a small closed beta on the Xiaomi 17 series, Xiaomi 15S Pro, and Redmi K90 series. Users with an invitation code can update their system and use the Xiaomi miclaw app. “During the testing period, there are no plans to charge,” said Lu Weibing, Xiaomi Group Partner and President.
Regarding mobile “lobster” deployment by smartphone makers, an industry insider revealed, “OpenClaw is essentially an open-source framework that includes third-party Skills and plugin ecosystems, and can also call various large models. For ordinary users, deploying OpenClaw has a high entry barrier, but for phone manufacturers, there is no technical difficulty—what’s challenging are permissions, user data security, and legal compliance.”
“Mainstream phone companies face hundreds of millions of ordinary users. Any AI feature launched must undergo thorough validation to ensure a mature, safe, and stable experience before release,” a staff member from a phone company said.
Large model vendors are eager to deploy “lobster,” which can be simply understood as a “computing power monetization” business—making agents call models more frequently and perform complex tasks, thereby consuming more tokens and directly boosting API revenue.
However, this logic is difficult to implement in the mobile industry. After spending thousands or even tens of thousands of yuan on a device, users are rarely willing to pay extra for each specific task. Since direct profit from “selling tasks” is not feasible, why are leading phone manufacturers still willing to bear the costs of computing power and tokens for internal testing of mobile “Claw”?
One reason is that, on the path of traditional mobile AI assistants evolving into “personal agents,” OpenClaw is approaching the ideal form of a “super assistant.”
Unlike passive voice assistants of the past, OpenClaw is more like an always-online “digital employee,” allowing ordinary users to truly feel the potential of AI replacing human labor.
From the underlying logic of OpenClaw, its core value lies in strong “autonomy.” It breaks through the boundaries of chat boxes; as long as it is equipped with appropriate Skills (skill packs) and authorized with enough tokens, OpenClaw can remember user habits and tasks, autonomously plan steps, call tools, and operate software until delivering the final result.
However, to truly tame this cloud-based “autonomy” onto a mobile device with limited space, relying solely on app-layer overlays is insufficient. It requires deep, bottom-up reconstruction of the operating system by the manufacturer.
In terms of implementation, both Huawei Xiao Yi Claw and Xiaomi miclaw have chosen to integrate as “system-level applications.” This approach essentially packages discrete software functions, system permissions, and even cross-device capabilities into Skills that agents can call, then connects them through a self-developed reasoning-execution engine.
For example, Xiaomi miclaw integrates over 50 system tools and ecosystem services into a closed-loop “perception-reasoning-execution” engine. Given a user command, the engine autonomously decomposes it into steps, matches each step to a tool, determines the parameters, and iterates on the execution results until the task is complete.
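The decompose-match-execute-iterate loop described above can be sketched in a few lines. This is an illustrative skeleton only, assuming a planner callback and a tool registry; none of the names here (`Tool`, `run_agent`, `plan_step`) are miclaw’s actual API.

```python
# Illustrative sketch of a "perception-reasoning-execution" agent loop.
# All names are hypothetical stand-ins, not any vendor's real interface.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[dict], dict]  # takes parameters, returns an execution result

def run_agent(command: str, tools: dict, plan_step, max_iters: int = 10) -> dict:
    """plan_step(command, history) -> (tool_name, params), or None when done."""
    history = []
    for _ in range(max_iters):
        step = plan_step(command, history)     # reasoning: pick the next step
        if step is None:                       # planner judges the task complete
            return {"status": "done", "history": history}
        tool_name, params = step               # tool matching + parameter filling
        result = tools[tool_name].run(params)  # execution
        history.append({"tool": tool_name, "params": params, "result": result})
    return {"status": "max_iters", "history": history}
```

The essential property is the feedback edge: each tool result is appended to `history`, so the planner’s next decision can refine or correct the previous step rather than following a fixed script.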
Huawei’s Xiao Yi Claw is built directly on the HarmonyOS base. “Xiao Yi Claw has system-level permissions (no third-party app jumps needed, directly calling underlying functions), full-scenario collaboration (seamless interaction between phone, PC, car system, smart home), and data security isolation (local processing of user privacy data),” according to Huawei insiders.
However, deploying “lobster” on smartphones faces challenges beyond technology and ecology. It also requires proper handling of sensitive data under security and compliance standards, breaking down barriers between apps and platforms, and even restructuring the entire industry’s profit distribution model.
“To deploy lobster on users’ frequently used phones, ensuring information security is the most important,” emphasized a staff member from a phone manufacturer.
Concerns about information security are not unfounded. Because OpenClaw ships with weak default security settings, attackers can gain control of the system with relative ease, and risks such as prompt injection, misoperation, and malicious plugin injection have already surfaced.
Facing these security risks, security governance has become a critical red line for large-scale deployment of “lobster” by manufacturers.
For example, Xiaomi miclaw has disabled (“castrated”) all transfer- and order-related tools at the code level, preventing the agent from executing high-risk operations such as payments in the cloud. In other words, without explicit user confirmation such as fingerprint verification or password entry, no fund-transfer action can be triggered, fundamentally ruling out automatic deductions.
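The gating pattern described above, in which fund-moving tools are refused unless the user explicitly confirms, can be sketched as follows. The tool names and the `confirm` callback are hypothetical; this is not Xiaomi’s actual mechanism, just the shape of the check.

```python
# Minimal sketch of gating high-risk agent tools behind explicit user
# confirmation. HIGH_RISK and confirm() are illustrative placeholders.

HIGH_RISK = {"transfer_money", "place_order", "pay_bill"}

def invoke_tool(name: str, params: dict, confirm=None):
    """Refuse high-risk tools unless a user-confirmation callback approves.

    confirm stands in for an on-device prompt (fingerprint, password, etc.)
    and must return True for the call to proceed.
    """
    if name in HIGH_RISK:
        if confirm is None or not confirm(name, params):
            raise PermissionError(f"{name} blocked: explicit user confirmation required")
    # ... dispatch to the real tool here; return a stub result for illustration
    return {"tool": name, "executed": True}
```

The point of the design is that the check lives below the agent: even a prompt-injected planner cannot reach a payment path without the user’s out-of-band confirmation.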
Approaching the “super assistant” ideal through underlying system deployment of “lobster” is just the surface reason for manufacturers rushing to “raise lobsters.” The deeper game is that as users become accustomed to “just speak and get things done,” the old order of mobile internet—centered on apps and controlled by app store distribution—begins to loosen.
As Nvidia founder Jensen Huang said, “Mac and Windows are the operating systems of personal computers, while OpenClaw is the operating system of personal AI.”
In the PC era, whoever controlled the OS controlled the ecosystem entry point. In the AI era, this rule still applies, but the entry battle has shifted to intelligent agents.
Imagine users growing accustomed to solving every problem through third-party agents (such as web-based or standalone OpenClaw apps); smartphones would risk being reduced to mere “hardware platforms.”
With internet giants deploying mobile lobster, this sense of crisis among phone manufacturers is evident.
Just as manufacturers announced their own mobile lobster, internet giants like Baidu and Alibaba quickly took action, launching free internal testing of mobile lobster.
On March 12, Baidu launched the “Red Finger Operator” app on Android, allowing users to experience mobile AI assistant capabilities directly—such as hailing taxis and ordering takeout across apps. The next day, Alibaba Cloud released the mobile OpenClaw “Lobster”—JVS Claw—focusing on “plug-and-play” functionality, enabling users to operate apps, handle files, and complete complex tasks in a secure cloud environment via simple natural language commands.
Regarding the deployment of “lobster” on phones by manufacturers and internet giants, IDC China research manager Guo Tianxiang said, “Currently, the practical value of ‘raising lobsters’ on phones is limited. The key bottleneck is that calling third-party apps still runs into API authorization issues. Forcing such calls could repeat the earlier Doubao phone episode, in which third-party apps simply blocked the feature.”
Learning from the Doubao phone experience, Huawei, Xiaomi, and others are prioritizing validation within their own closed ecosystems when deploying mobile lobster.
For example, Xiaomi miclaw currently focuses on verifying large model capabilities across “full ecosystem” tasks involving people, cars, and homes; Xiao Yi Claw also prioritizes collaboration among Huawei phones, tablets, and other proprietary devices.
However, operating “lobster” within a relatively closed ecosystem can avoid some risks but also limits its capabilities, as users’ high-frequency needs often involve third-party apps like WeChat and TikTok.
To balance security, compliance, and full functionality, manufacturers are not abandoning cross-application collaboration altogether but are exploring more cautious, controlled technical paths.
Regarding third-party app collaboration, a Xiaomi engineer revealed that Xiaomi miclaw mainly achieves integration through two industry-standard methods: one is via Intent-driven (SendIntentTool) to launch apps or trigger specific actions; the other is through app adaptation using their AppTool SDK (based on AIDL protocol), enabling deeper function calls and task coordination, with third-party apps able to proactively push notifications to trigger tasks.
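The two integration paths just described, a coarse intent launch versus a deeper SDK call for apps that have done the adaptation work, amount to a simple dispatch decision. The sketch below is a language-neutral illustration of that routing logic; `adapted_apps`, `send_intent`, and `call_sdk` are hypothetical stand-ins for the Android-side mechanisms (Intent dispatch and an AIDL-based SDK), not real APIs.

```python
# Illustrative dispatcher for the two third-party integration paths:
# a deep SDK call when the app has adapted, a generic intent launch otherwise.
# All parameter names are hypothetical placeholders.

def dispatch(app: str, action: str, payload: dict, adapted_apps: set,
             send_intent, call_sdk):
    if app in adapted_apps:
        # Deep path (analogous to an AppTool SDK over AIDL): structured
        # function calls and task coordination inside the app.
        return call_sdk(app, action, payload)
    # Fallback path (analogous to SendIntentTool): launch the app or
    # trigger a coarse-grained action without in-app cooperation.
    return send_intent(app, action, payload)
```

The asymmetry matters for capability: only the SDK path can return structured results or push notifications back to the agent, which is why manufacturers court app developers to adapt.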
Deploying their own “lobster” at the system level is a crucial step toward the evolution of smartphones into “AI phones.” However, for manufacturers eager to seek incremental growth in the AI wave, the primary challenge in building a super intelligent agent is cost.
Local deployment of “lobster” is not just a simple software upgrade; it requires hardware upgrades to processors, storage, etc. The high-frequency inference and real-time response of large models demand higher NPU computing power on the SoC, significantly increasing the specifications for RAM and storage chips.
“Running large models on phones is limited by storage space, power consumption, and other technical factors. A 1-billion-parameter model takes about 1GB of RAM, 7-billion parameters require 4GB, and 13-billion parameters need 7GB,” said a director from a top mobile AI solutions team.
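The quoted figures are consistent with quantized weights: roughly 8 bits per parameter for the 1B model and about 4 bits for the larger ones, plus runtime overhead. A back-of-envelope check (the formula is standard arithmetic, not a claim about any vendor’s deployment):

```python
# Back-of-envelope check on the quoted RAM figures: weight memory is roughly
# (parameters x bits-per-weight) / 8 bytes, ignoring KV cache and runtime overhead.

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 1B at int8 gives ~1 GB; 7B and 13B at 4-bit give ~3.5 GB and ~6.5 GB,
# close to the quoted 4 GB and 7 GB once runtime overhead is included.
```

KV cache for long contexts and the activation buffers come on top of this, which is why the quoted totals sit slightly above the raw weight arithmetic.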
Currently, storage chip prices are rising, and each GB of memory upgrade directly cuts into device profit margins.
Even more challenging than the initial hardware investment are the ongoing operational costs once the mobile lobster is activated. On PCs, each task consumes real tokens and computing costs. The recent news that “earning 20,000 yuan a month can’t afford a lobster” vividly illustrates this “cost anxiety.”
“Before using ‘lobster,’ you need to think carefully about what you want it to do,” explained Feng Nian, founder of an MCN agency. “In video production, the token consumption of editing versus generating videos varies enormously, but many beginners can’t tell what lobsters can actually do.”
Feng Nian shared a real example: their team mainly deploys OpenClaw on a Mac mini 4 for assistance. Specifically, the “lobster” generates scripts for store-visit videos based on local hotspots; some scenes are shot with real people, others generated with AI (such as Seedance2.0 or Sora2). The lobster controls the Mac mini to edit the videos and calls Sora2’s interface to generate content. Some steps are cheaper done by hand, while others are more economical with AI. In a day, the team produces about 12 original and mixed-cut videos, at a token cost of around 15 yuan.
“The core decision is balancing token and computing costs with the wages of junior editors,” Feng Nian added. “Deciding which tasks to delegate to ‘lobster’ and which to keep manual is key to reasonable use. Unfortunately, many companies are just following trends and showing off, without real productivity.”
While 15 yuan per day in token costs may seem low, it adds up given the massive user base of smartphones. When hundreds of millions of users are used to “buy hardware and get free services,” whether manufacturers can sustain the massive computing and token costs long-term remains uncertain.
“Manufacturers might adopt a ‘buy a phone, get free computing power’ model,” predicted an industry insider. “For example, include a certain amount of free tokens with each purchase for light daily tasks like writing reports or booking tickets. For more complex, high-consumption tasks like video generation, they might charge separately based on difficulty or let users pay for excess tokens.”
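The predicted “bundled free tokens plus paid overage” model is mechanically simple. The sketch below illustrates the billing logic only; the quota size and overage price are arbitrary hypothetical numbers, since no manufacturer has announced actual pricing.

```python
# Illustrative sketch of the predicted "buy a phone, get free computing power"
# model: light tasks draw down a bundled token allowance, overage is billed.
# All quantities and prices are hypothetical.

class TokenQuota:
    def __init__(self, free_tokens: int, overage_price_per_1k_yuan: float):
        self.remaining = free_tokens
        self.price = overage_price_per_1k_yuan

    def charge(self, tokens: int) -> float:
        """Deduct from the free allowance; return yuan owed for any overage."""
        covered = min(self.remaining, tokens)
        self.remaining -= covered
        overage = tokens - covered
        return overage / 1000 * self.price
```

Under such a scheme, report-writing or ticket-booking stays inside the allowance, while token-heavy jobs like video generation spill into the paid tier, matching the tiered pricing the insider predicts.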