Author: Ariel, Crypto City
AI agents with lobster logos, such as OpenClaw, have become a global sensation, and the Chinese government has issued a “Lobster Safety Manual” warning of the associated risks. Legislator Lai Shih-bao observed that some domestic brokerages and stock-market analysts are gradually adopting OpenClaw, and asked who would be responsible if an AI agent makes a trading mistake. He also called on the Financial Supervisory Commission (FSC) to develop a dedicated lobster safety manual for the financial industry.
The FSC’s Peng Jinlong responded that while he has not used OpenClaw himself, he has noticed the phenomenon is quite common. The FSC’s relevant departments are already studying possible measures and monitoring financial institutions’ use of such tools.
Peng Jinlong pointed out that the FSC previously issued the “Guidelines for Financial Industry Using AI.” The financial sector already has certain cybersecurity and internal control mechanisms for new technologies. If these tools threaten operational security, they will be reviewed and a related safety manual will be developed.
Six Key Points of the FSC’s Guidelines for AI Use in the Financial Industry
Source: Gemini AI generation | FSC’s Six Key Points for AI Use in the Financial Industry (AI-generated diagram)
The Financial Data Department Responds to Lobster Security Concerns
Regarding the cybersecurity concerns raised by AI agents, Department Head Lin Yi-ching stated during a recent inquiry that Taiwan is actively promoting sovereign AI to address security issues and enhance technological independence, ensuring that AI models used by the government and critical infrastructure operate domestically and under local law. He also mentioned Nvidia’s recent launch of the NemoClaw platform, which focuses on strengthening cybersecurity for AI agents.
To build Taiwan-specific computing power, the Department has received investment applications from Foxconn for computing centers and is in discussions with the Ministry of Finance and the FSC about opening insurance-sector funds to such investment, aiming to reduce reliance on overseas AI models.
What does China’s Lobster Safety Manual say?
OpenClaw was created by Austrian engineer Peter Steinberger and initially gained popularity in tech circles. Installing and using OpenClaw is called “raising lobsters.” Recently, this trend has spread to China, with ordinary people also starting to raise lobsters, and even paid services emerging where engineers install it for users.
In response, China’s Ministry of State Security issued the Lobster Safety Manual, warning of inherent risks such as host takeover, data theft, and speech manipulation. They urged users to strictly limit operational scope and run the software in isolated environments like dedicated virtual machines or sandboxes.
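The manual’s isolation advice can be illustrated with a container sandbox. This is only a hypothetical sketch: the image name `openclaw:latest` and the mount path are placeholders, not an official distribution, and the article does not describe OpenClaw’s actual install procedure.

```shell
# Hypothetical sketch of "strictly limiting operational scope":
#   --network none : the agent gets no network access, so it cannot exfiltrate data
#   --read-only    : the container's root filesystem is immutable
#   --tmpfs /tmp   : scratch space exists only in memory
#   -v ...:/work   : exactly one dedicated host directory is writable
docker run --rm -it \
  --network none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD/lobster-work:/work:rw" \
  openclaw:latest
```

A dedicated virtual machine offers even stronger isolation than a container, at the cost of more setup; either way, the point of the manual’s advice is that the agent should never run with access to the host’s files, credentials, or payment apps.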
Where there’s lobster raising, there’s lobster uninstalling
Although the lobster-raising craze is booming, Chinese communities are now shifting from frantic installation to paid uninstallation. The BBC reports that experts attribute the decline to OpenClaw’s high technical barrier and operating costs, since every action it performs incurs fees for invoking AI models.
Security risks are also a major concern. China’s National Internet Emergency Center pointed out that improper use could expose sensitive information such as photos and payment accounts, or even lead the AI to misinterpret commands and delete data by mistake. For ordinary users, the software’s practical utility is limited, which has triggered this wave of uninstallation.