The Financial Industry Holds Back from "Raising Lobsters"
"Are you raising a lobster?" The open-source AI agent OpenClaw, nicknamed the "lobster," has recently taken the internet by storm. From personal productivity gains to enterprise process automation, it has swept through nearly every technology application and even social scenarios, yet it has gained far less traction in the financial sector.
On March 10, amid the nationwide "lobster farming" craze, Beijing Business Daily reporters asked several banks, consumer finance companies, and payment institutions whether they planned to deploy OpenClaw. Most responded along the lines of "the hype is too intense; we need to observe and wait," while some stated outright that OpenClaw is not suited to finance, pointing in particular to data security risks. Industry insiders believe that in this "lobster farming" wave, internet banks and consumer finance companies have not followed the trend, and payment tech teams remain cautious, mainly out of concern for funds, data, and information security.
Collective Caution
"The 'lobster farming' craze has collectively fizzled out in the financial industry," a consumer finance industry insider said. "Because the financial sector demands strict confidentiality, this AI application could pose risks to data and information security."
"While it has some value, in core consumer finance operations it still faces multiple risks. Compliance-wise, it is hard for open-source agents to meet regulatory requirements for risk control and other core operations; security-wise, open-source agents could leak information," another consumer finance professional added.
In summary, the main reason remains the strict regulation and high-risk nature of the financial industry.
For consumer finance companies, if AI agents autonomously handle customer credit approval, risk management, and loan disbursement, efficiency could double. But if over-lending, credit errors, or data leaks occur, who bears the responsibility? Who takes on the risks? This is the biggest concern—an inherent conflict between technological autonomy and the compliance and security requirements of the financial sector.
"This is a minefield," many consumer finance practitioners said. No one wants to gamble with data and security by rushing into a new technology. "But OpenClaw is so popular that it feels somewhat overhyped. We need to observe and understand its value more thoroughly." Some added that while the financial industry will remain cautious in the short term, gradual, layer-by-layer adoption is possible.
Payment institutions are even more directly exposed: every transaction involves the safety of funds, leaving no room for "black box" algorithms. YeePay co-founder Yu Chen told Beijing Business Daily that the surge in open-source AI agents driven by OpenClaw marks a shift from conversational AI to autonomous execution. The direction is promising, but the company remains cautious and is watching the open-source framework closely. Autonomous execution, open permissions, and compliance and security boundaries are inherently in tension; the financial industry must put security and controllability first.
Regarding banks, frontline staff said, “Currently, not many in our bank are using OpenClaw. We see it as a high-permission AI software that can authorize operations on computers and execute commands directly. We frontline staff don’t use this kind of function; it’s mostly the tech department testing on a small scale.”
A bank business department head stated frankly that such open-source products require remote control of PCs via mobile devices during use. Even if they claim information isolation, banks remain highly cautious and generally do not use them directly.
Low Compatibility
From a financial industry perspective, Yu Chen believes the greatest value of open-source agents lies in automating processes and improving efficiency: freeing people from repetitive tasks, cutting costs, and raising productivity. But the risks are real: unexplainable and uncontrollable decisions made by AI, data security issues, and overreach of permissions, any of which could directly cross compliance boundaries.
“Personally, I think it’s okay for casual use or internal work, but applying it to business processes is risky—such as data security and fund safety,” said another payment company insider. He pointed out that the existing risk control in payment operations is already robust, and blindly trying such AI agents could introduce hidden risks. “If something goes wrong with compatibility, it could cause transaction interruptions or incorrect fund settlements, with serious consequences.”
A tech staff member from a local rural commercial bank said, “For tech development, safety and compliance are always the top priorities.” Currently, their main concerns about deploying open-source projects are twofold: first, data security risks—open-source code can have vulnerabilities and backdoors that are hard to detect, risking data leaks; second, operational control risks—despite claims of information isolation, cross-device and cross-network control could be hijacked, screens recorded, or permissions overstepped, all of which threaten financial security. Banks will not risk using such tools.
Industry experts believe that, as a highly regulated and high-risk sector, the financial industry must exercise restraint. Shen Xiayi, deputy director of the Lianchu Securities Research Institute, explained that finance is unique in that it involves funds, customer privacy, and systemic risk; any technological innovation must rest on manageable risks, unlike the internet sector's rapid trial-and-error iteration.
Shen Xiayi observes that, for now, OpenClaw's compatibility with finance remains low. Its core feature, end-to-end automation, conflicts with regulatory requirements: blurred lines of responsibility, a lack of algorithmic explainability, and similar issues make it hard for banks, consumer finance firms, and payment institutions to stay within regulatory red lines. Finance's high demands for data security and operational stability are another obstacle: some OpenClaw deployments have shown security vulnerabilities, and its third-party skill marketplace carries risks of its own. Given the complexity of financial operations, it can only be piloted in non-core scenarios, not in credit, risk control, or fund settlement, and full adaptation will require long-term optimization.
Not Rejection, But Caution
It is important to note that the financial sector's "cooling off" is not a rejection of AI, but a refusal to follow the trend blindly. A banking insider said the wave of open-source AI agents is, at its root, a democratization of AI application paradigms: large models have crossed critical capability thresholds, and the market needs this wave to make users realize that AI is no longer just an auxiliary tool or a "consultant" offering suggestions, but a genuine executor capable of carrying out real tasks.
This industry insider believes that AI application paradigms like OpenClaw are an inevitable future trend. For finance, the key is not “fear of use” or “unsuitability at this stage,” but how to adopt it carefully and gradually. The restraint shown by financial institutions stems more from respect for compliance and risk management than from technological rejection.
In the short term, the greatest value of open-source intelligent agents lies in significantly improving financial service efficiency and reducing operational costs, making financial services more inclusive. In the long run, these agents with autonomous task execution could bring new business models, create incremental value, and open new markets.
However, risks cannot be ignored. The same banking insider added that, in terms of compliance, security, and investment, financial institutions do have concerns. The biggest risks may be at the application level. The widespread adoption of AI reduces the barriers to execution, which is beneficial for value creation but also opens the door to malicious behaviors. Therefore, risk prevention must be strengthened in advance.
In fact, many institutions are quietly exploring customized AI applications. A bank insider said that their bank is currently focusing on risk post-loan management, customer service, and telemarketing. They are also exploring AI in credit approval, daily operations, and compliance/security. “For open-source AI agents to truly enter core financial scenarios, the first step is to solve technical security and compliance issues.” He emphasized that, in the near future, the initial work on responsibility attribution should still be human-led, with strict controls over key processes.
Zhaolian Consumer Finance reported that it has developed eight core intelligent agents covering consumer protection, compliance, asset management, operations, risk, decision-making, R&D, and traditional Chinese medicine, along with several office automation agents, to enhance various business areas.
Payment platform Lianlian Digital also mentioned that in recent years, they have promoted AI integration across risk control, operations, and customer service, and have adopted mainstream AI large models. Their proprietary platform offers comprehensive services including payments, fund transfers, global fund distribution, intelligent remittance processing, and risk management.
Gradual Integration
After the hype, industry experts believe that the financial sector will not see a “full-scale deployment” of OpenClaw but will enter a phase of cautious exploration and gradual integration. “Finance was one of the earliest sectors to apply AI because of the vast amount of transaction data,” Yu Chen explained. AI applications in finance mainly fall into two categories: one is basic applications, using AI as a safeguard—such as anti-money laundering; the other is advanced applications that generate more business opportunities.
Yu Chen sees broad future prospects for AI in finance: optimizing customer service, enhancing user experience, cross-selling with large models, discovering new sales leads, and deepening automation in risk control and compliance to truly serve business and user value.
"Currently, the digital transformation of banks, consumer finance companies, and payment institutions is mainly supportive in nature, not aimed at full automation. This pragmatic approach fits the strong regulation and the current state of technology and business," said Wang Pengbo, chief analyst at Botong Consulting. In his view, if open-source AI agents are to enter core financial scenarios, they must first achieve explainability, traceability, and transparency, with no black boxes, and meet strict regulatory and security standards. Responsibilities must be clearly assigned, data must be compliant, and user privacy protected. Balancing the benefits of open source with institutions' core interests, keeping humans in oversight, and avoiding irreversible risks are all essential.
Small-Scenario Implementation
Considering industry trends and regulatory requirements, many bank insiders believe that in the next five to ten years, open-source tools will only be cautiously explored in banking, mainly in non-sensitive areas like marketing that do not involve customer privacy or core financial data. The focus will be on avoiding security risks in core operations.
"This cautious approach is not conservative but a rational response to the particular risks in finance. Banks can accumulate experience through pilot projects, verify value in controlled scenarios, and expand gradually," said Du Tongtong, a researcher at the Lianchu Securities Research Institute. She emphasized that financial institutions should adhere to prudent innovation, starting with non-core scenarios and gradually exploring adaptation to core ones.
Wang Pengbo also stated that the future of financial AI will focus on compliance, support decision-making, and small-scenario deployment—mainly in risk control, compliance automation, and operational efficiency—avoiding full process automation. They will prioritize low-risk, non-core functions like customer service and advertising writing, steering clear of security and compliance risks in core operations.
A banking insider added that, in the short term, financial institutions will not pursue full end-to-end automation but will emphasize a “Human in the Loop” approach, ensuring human experts retain final decision-making authority.
Furthermore, multi-agent collaboration combined with human supervision will be the future trend. Instead of fully autonomous single agents, a hybrid “multi-agent + human oversight” architecture will be built to handle complex financial scenarios.
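The "human in the loop" gating described above can be illustrated with a minimal sketch. Everything here is hypothetical (the class, the action names, and the set of "sensitive" operations are invented for illustration); the point is only the routing pattern: an agent proposes actions, and anything touching sensitive operations such as fund transfers or credit approval waits for a human decision instead of executing automatically.

```python
from dataclasses import dataclass, field

# Hypothetical set of operations that must never run without human sign-off.
SENSITIVE_ACTIONS = {"transfer_funds", "approve_credit", "adjust_risk_limit"}

@dataclass
class ProposedAction:
    name: str
    params: dict

@dataclass
class HumanInTheLoopGate:
    pending: list = field(default_factory=list)   # actions awaiting human review
    executed: list = field(default_factory=list)  # actions actually carried out

    def submit(self, action: ProposedAction) -> str:
        """Route an agent proposal: sensitive ones wait for a human."""
        if action.name in SENSITIVE_ACTIONS:
            self.pending.append(action)
            return "pending_review"
        self.executed.append(action)
        return "auto_executed"

    def review(self, action: ProposedAction, approved: bool) -> str:
        """A human expert makes the final call on a pending action."""
        self.pending.remove(action)
        if approved:
            self.executed.append(action)
            return "executed"
        return "rejected"

gate = HumanInTheLoopGate()
low_risk = ProposedAction("draft_marketing_copy", {"channel": "email"})
high_risk = ProposedAction("transfer_funds", {"amount": 10_000})

print(gate.submit(low_risk))                   # auto_executed
print(gate.submit(high_risk))                  # pending_review
print(gate.review(high_risk, approved=False))  # rejected
```

In a real "multi-agent + human oversight" architecture the gate would sit between the agents and the execution layer, but the design choice is the same: the allow-list of autonomous actions is narrow, and the default path for anything consequential runs through a human reviewer.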
Additionally, establishing comprehensive AI governance systems is crucial. Financial institutions will develop systematic mechanisms including AI asset inventories, risk importance assessments, and full lifecycle management to ensure AI applications remain safe and compliant.
In summary, many banking insiders believe that only when personal data protection is absolutely secure, and technological implementation is fully controllable with manageable risks, will banks cautiously explore open-source tools.
Some also note that exploring open-source tools requires clear preconditions: industry-specific standards for open-source applications, security benchmarks, and unambiguous attribution of responsibility. The open-source ecosystem must also produce mature, finance-grade solutions capable of real-time vulnerability monitoring and rapid patching, supporting localization and core-technology independence, and ensuring stability and security.