Under the “lobster farming” craze, how to safeguard the security bottom line?
“Recently in the office, half the people are talking about ‘raising lobsters,’ and the other half are learning how to ‘raise lobsters.’” On March 13, Li Yao, a product manager at an internet company in Beijing, told reporters that his colleagues’ talk of “raising lobsters” is not about real-life aquaculture, but about personally training and deploying OpenClaw, a recently released open-source AI agent framework.
Because the icon is a red lobster, OpenClaw is nicknamed “Lobster.” Unlike ordinary AI, it can autonomously perform complex tasks such as file management, email sending and receiving, and data processing by integrating communication software and large language models on the user’s computer.
Why has this “lobster” sparked a nationwide craze? What are the security risks of “raising lobsters”? A reporter from the Workers’ Daily conducted an investigation.
Multiple regions introduce support policies related to “lobsters”
As the first AI agent to attract nationwide attention this year, what makes “lobster” so special?
“Ordinary AI’s core mode is conversational; after asking questions, users only get answers or operational steps. ‘Lobster’ is a typical ‘action-oriented’ AI. Users just need to specify the task goal, and it can directly operate various tools to complete the entire process,” said tech blogger “Follow Aliang to Learn AI,” who previously worked at a leading internet company. For example, if asked to organize important emails, it will automatically open the mailbox, filter content, and draft replies—without any manual input from the user.
Yu Jingwen, an AI engineer at Peking University’s AIIT Digital Creativity Laboratory, said: “Unlike large language models like ChatGPT, OpenClaw is not just a simple chatbot. It is a ‘digital employee’ capable of obtaining local operating system permissions, calling various tools, planning steps based on natural language instructions, and automatically executing complex tasks.”
“Previous large AI models, no matter how advanced, were limited to their respective fields and couldn’t achieve cross-domain collaboration. The core advantage of ‘lobster’ is that it breaks down barriers between different domain models, truly mobilizing the functions and value of all large models,” Yu explained. “Therefore, it can be said that OpenClaw is a disruptive innovation with bridging and connecting roles in the AI industry.”
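The “action-oriented” loop Yu describes, in which a natural-language goal is turned into a plan and then executed step by step against real tools, can be sketched roughly as follows. Everything here is invented for illustration: the tool names, the rule-based stand-in for the LLM planner, and the email address are all hypothetical, not OpenClaw’s actual code.

```python
# Minimal sketch of an "action-oriented" agent loop (illustrative only).
# A real agent like OpenClaw would use an LLM as the planner and call
# real applications; here a toy rule maps a goal to a fixed tool sequence.
from typing import Callable, Dict, List, Tuple

# Registry of tools the agent is permitted to call (all hypothetical).
TOOLS: Dict[str, Callable[[str], str]] = {
    "open_mailbox": lambda arg: f"opened mailbox for {arg}",
    "filter_messages": lambda arg: f"filtered messages matching '{arg}'",
    "draft_reply": lambda arg: f"drafted reply about '{arg}'",
}

def plan(goal: str) -> List[Tuple[str, str]]:
    """Toy stand-in for the LLM planner: map a goal to ordered tool calls."""
    if "email" in goal:
        return [
            ("open_mailbox", "user@example.com"),
            ("filter_messages", "important"),
            ("draft_reply", "important messages"),
        ]
    return []

def run_agent(goal: str) -> List[str]:
    """Execute each planned step end to end and collect the observations."""
    log = []
    for tool_name, arg in plan(goal):
        log.append(TOOLS[tool_name](arg))
    return log

print(run_agent("organize my important email"))
```

The point of the sketch is the structure, not the toy planner: the user supplies only the goal, and the agent decides which tools to invoke and in what order, which is also exactly why broad tool permissions matter so much, as the article discusses below.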
Under the wave of enthusiasm, many places have actively responded. According to incomplete statistics, by March 12, multiple regions including Longgang District in Shenzhen, Wuxi High-tech Zone, Hefei High-tech Zone, Changshu City in Suzhou, Qixia High-tech Zone in Nanjing, and Xiaoshan District in Hangzhou had introduced support policies related to “lobsters.” Some local governments also launched free deployment services for the public called “Little Lobster.”
“Lobster” hides multiple security risks
Amid the nationwide enthusiasm for “raising lobsters,” the experiences of early adopters have not all been positive.
“Overall, it doesn’t seem as magical as online promotions suggest,” Li Yao admitted. Using “lobster” effectively requires granting control permissions for various applications on the computer. Out of caution, he didn’t enable too many permissions, so he felt the effect was limited.
Yu Jingwen analyzed that the core capability of OpenClaw lies in controlling and operating various applications and tools. To achieve this, users must grant it extensive application permissions, including email, office software, and platform backends. “It’s like asking someone to clean your house—you have to give them keys to all rooms,” she said. Correspondingly, full application permissions may pose data leakage risks for individuals and enterprises deploying “lobster.”
On March 10, the National Internet Emergency Center issued a “Risk Warning on the Security of OpenClaw” (hereinafter referred to as the “Warning”), clearly stating that to realize the “autonomous task execution” capability, the application is granted high system permissions. However, its default security configuration is extremely fragile, and once attackers find a breach, they can easily gain full control of the system.
The “Warning” shows that, so far, multiple high- and critical-severity vulnerabilities have been publicly exposed in OpenClaw. If maliciously exploited by cyberattackers, it could lead to system control and leakage of private and sensitive data. “For individuals, this could result in exposure of private data such as photos, documents, chat records, and payment accounts; for enterprises, it could lead to leaks of core business data and trade secrets, causing immeasurable losses,” Yu explained.
Zhou Zichuan, a lawyer at Handing United Law Firm in Beijing, said: “To complete user commands, ‘lobster’ may collect large amounts of data directly or indirectly across the internet, beyond any access controls. The volume, access scope, and sensitivity of that data often exceed the user’s control, which could, in serious cases, constitute crimes such as illegally obtaining computer information system data or infringing on citizens’ personal information.”
“Moreover, many users, seeking convenience, pay third parties to deploy ‘lobster,’ but these third parties may lack proper security measures, making devices vulnerable to data exposure. Attackers could remotely control devices through malicious code execution and steal sensitive information,” Zhou added.
In the midst of the craze, safeguarding the security bottom line is essential
This year’s government work report proposed deepening the “AI+” initiative, promoting the rapid deployment of new-generation intelligent terminals and agents, and accelerating the commercialization and scale application of AI in key industries, fostering new business models and formats rooted in intelligence.
“The emergence of ‘lobster’ provides a new path for the practical application of AI agents. If used properly, it can effectively leverage existing AI agents and greatly improve the work efficiency of individuals and enterprises,” Yu said. She emphasized that promoting healthy development of “lobster” depends on balancing convenience and security.
“The new technological features of AI agents like ‘lobster’ make the existing legal issues surrounding AI even more complex,” Zhou said. The most critical of these is unclear rights and responsibilities: when an AI agent acting on user instructions causes infringement or other illegal activity, how responsibility should be apportioned among developers, deployers, and users still needs further legal clarification.
For AI product developers and service providers, Zhou suggested that they should actively fulfill compliance obligations under laws like the Personal Information Protection Law, such as ensuring data processing logs meet retention and traceability requirements. They should also strengthen legal awareness in operational details, such as improving AI permission management and prompt review, to reduce security risks.
The National Internet Emergency Center’s “Warning” also offers specific advice: relevant organizations and individual users should strictly isolate the deployment environment when using OpenClaw, use containerization and other techniques to limit excessive permissions, and strictly manage plugin sources while continuously monitoring patches and security updates.
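One possible reading of that advice, sketched as a Docker Compose fragment: the container gets a read-only root filesystem, no Linux capabilities, an internal-only network, and exactly one shared host directory. The image name and paths are hypothetical; there is no official OpenClaw image implied here, and real deployments would need to adapt this to their own environment.

```yaml
# Hypothetical hardening sketch for an agent deployment (illustrative).
services:
  openclaw:
    image: example/openclaw:latest   # hypothetical image name
    read_only: true                  # no writes outside declared mounts
    cap_drop: [ALL]                  # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true       # block privilege escalation
    networks: [agent_net]
    volumes:
      - ./agent-workdir:/work        # the only host directory exposed
networks:
  agent_net:
    internal: true                   # no outbound internet by default
```

The design choice mirrors the “Warning” directly: rather than granting the agent the broad system permissions it asks for by default, the container declares an explicit allowlist of what it may touch, so a compromised agent cannot reach the rest of the machine.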
“Before ‘raising lobsters,’ users should comprehensively evaluate its value and risks, determine whether they truly need it to solve problems, and then decide whether to deploy,” Yu reminded. “During use, it is also essential to implement data partitioning and privacy protection, and avoid blindly following trends that could expose personal privacy and data to risks.”