a16z: Is the AI agent economy lacking infrastructure? Five ways blockchain can help
AI agents are rapidly shifting from copilots to autonomous economic actors, far faster than the infrastructure around them is developing.
Although agents can now execute tasks and conduct trades, they still lack standardized ways, across environments, to prove who they are, what they are authorized to do, and how they get paid. Their identities cannot migrate between platforms, they cannot make programmable payments by default, and their collaboration remains siloed.
Blockchain addresses these gaps at the infrastructure layer. Public ledgers provide auditable receipts for every transaction that anyone can review. Wallets give agents portable identities. Stablecoins provide a programmable settlement layer. These are not components of some future stack; they are available today, and they let agents operate permissionlessly as true economic actors.
1. Identity for non-humans
The bottleneck in the agent economy is now identity, not intelligence.
In financial services alone, non-human identities (automated trading systems, risk engines, fraud models) already outnumber human employees by roughly 100 to 1. As modern agent frameworks (tool-calling large language models, autonomous workflows, multi-agent orchestration) are deployed at scale, this ratio will keep rising across industries.
However, these agents remain essentially unbanked. They can interact with financial systems, but not in a portable, verifiable, or trusted-by-default way. They lack standardized means to prove their permissions, operate independently across platforms, or take responsibility for their actions.
What's missing is a universal identity layer, essentially an SSL certificate for agents, that can coordinate standards across platforms. There are already notable attempts, but they are fragmented: on one side, vertically integrated, fiat-first stacks; on the other, crypto-native open standards such as x402 and emerging agent identity proposals; and, bridging application-layer identity, extensions of developer frameworks such as MCP (Model Context Protocol).
At present, there is still no widely adopted, interoperable way for one agent to prove to another agent who it represents, what it is allowed to do, and how it gets paid.
This is the core idea behind KYA (Know Your Agent). Just as humans rely on credit history and KYC (Know Your Customer), agents will need cryptographically signed credentials linking them to their principals, permissions, constraints, and reputation. Blockchain provides a neutral coordination layer for this purpose: portable identities, programmable wallets, and verifiable proofs that can be parsed across chat applications, APIs, and marketplaces.
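The shape of such a KYA credential can be sketched in a few lines of Python. This is an illustrative simplification, not any real KYA standard: the field names (`agent_id`, `spend_cap_usd`, and so on) are hypothetical, and an HMAC stands in for the asymmetric signature a production system would use.

```python
import hmac, hashlib, json, time

PRINCIPAL_KEY = b"principal-secret"  # stands in for the principal's signing key

def issue_credential(agent_id, principal, permissions, spend_cap_usd, ttl_s=3600):
    """Principal signs a credential binding an agent to its permissions and limits."""
    body = {
        "agent_id": agent_id,
        "principal": principal,
        "permissions": sorted(permissions),
        "spend_cap_usd": spend_cap_usd,
        "expires_at": int(time.time()) + ttl_s,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_credential(cred, required_permission):
    """A counterparty checks the signature, the expiry, and the requested permission."""
    payload = json.dumps(cred["body"], sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False                     # forged or tampered credential
    if time.time() > cred["body"]["expires_at"]:
        return False                     # expired
    return required_permission in cred["body"]["permissions"]

cred = issue_credential("agent-7", "acme-corp", {"buy_data", "pay_invoices"}, 500)
print(verify_credential(cred, "buy_data"))        # True
print(verify_credential(cred, "trade_equities"))  # False
```

The point of the sketch is the binding: a counterparty who trusts the principal's key can verify who the agent represents, what it may do, and under what limits, without trusting the agent itself.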
We are already seeing early practice emerge: on-chain agent registries, agent-native wallets that transact in USDC, ERC standards for "trust-minimized agents," and developer toolkits that combine identity with embedded payments and fraud controls.
But until a universal identity standard emerges, merchants will keep agents outside the firewall.
2. Governing AI-run systems
When agents begin operating real systems, they raise a new question: who is truly in control? Imagine a community or company coordinated by AI systems that manage key resources, whether allocating capital or running supply chains. Even if people vote on policy changes, that power is weak if the underlying AI layer is controlled by a single provider that can push model updates, adjust constraints, or override decisions. The formal governance layer may be decentralized, but the operating layer remains centralized; whoever controls the models ultimately controls the outcomes.
When agents take on governance roles, they introduce a new dependency layer. In theory, this could make direct democracy more feasible: everyone could have an AI representative to help understand complex proposals, model trade-offs, and vote according to stated preferences. But this vision can only be realized if agents are truly accountable to the people they represent, can migrate across providers, and are technically constrained to follow human instructions. Otherwise, you end up with a system that looks democratic on the surface, but is actually dominated by opaque model behaviors that nobody truly controls.
Given that today's agents are built on a handful of foundation models, we need ways to prove that agents act in their users' interests rather than the model companies'. This very likely requires cryptographic guarantees at multiple levels: (1) exactly which training data, fine-tuning, or reinforcement learning sources a model instance derives from; (2) the exact prompts and instructions a specific agent follows; (3) its actual record of actions in the real world; (4) trusted assurance that, once deployed, providers cannot change its instructions or retrain the model. Without these guarantees, agent governance devolves into governance by whoever controls the model weights.
This is where cryptography is especially applicable. If collective decisions are recorded on-chain and executed automatically, AI systems can be required to comply with verified results. If agents possess cryptographic identities and transparent execution logs, people can check whether their representatives are respecting the boundaries. And if the AI layer is user-owned and portable—not locked into a single platform—then no company can unilaterally change the rules through model updates.
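A transparent execution log of the kind described here can be approximated with a hash chain, where each entry commits to everything before it. The sketch below is a hypothetical, simplified off-chain analogue (no real protocol's schema): retroactively editing any recorded action breaks every subsequent link, which is what makes the record auditable.

```python
import hashlib, json

def append_action(log, action):
    """Append an action whose hash commits to the entire prior log."""
    prev = log[-1]["hash"] if log else "genesis"
    entry = {"action": action, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps({"action": action, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

def verify_log(log):
    """Recompute every link; any retroactive edit breaks the chain."""
    prev = "genesis"
    for entry in log:
        expected = hashlib.sha256(json.dumps(
            {"action": entry["action"], "prev": entry["prev"]},
            sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_action(log, {"vote": "proposal-42", "choice": "yes"})
append_action(log, {"transfer": 100, "to": "treasury"})
print(verify_log(log))               # True
log[0]["action"]["choice"] = "no"    # tamper with history
print(verify_log(log))               # False
```

On-chain, the chain of commitments is maintained by the ledger itself; the check is the same, but no single party can rewrite it.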
In the end, governing AI systems is an infrastructure challenge, not a policy challenge. Real authority depends on building strong, enforceable guarantees within the system itself.
3. Filling gaps in traditional payment systems for AI-native businesses
AI agents are starting to buy things (web scraping, browser sessions, image generation), and stablecoins are emerging as the settlement layer for these transactions. Meanwhile, a new class of agent-focused marketplaces is forming. Stripe and Tempo's MPP marketplace, for example, aggregates more than 60 services built specifically for AI agents. In its first week it handled over 34,000 transactions, with fees as low as 0.003 USD and stablecoins among the default payment methods.
The difference lies in how these services are accessed: there are no checkout pages. Agents read schemas, send requests, pay, and receive results within a single transaction. These are a new kind of "headless" merchant: a single server, a set of endpoints, and per-call pricing. No front end, no storefront, no sales team.
The payment method enabling all of this is already live. Coinbase’s x402 and MPP take different approaches, but both embed payments directly into HTTP requests. Visa is also expanding card payments in a similar direction, providing a CLI tool that lets developers pay from the terminal, while merchants receive stablecoins immediately on the backend.
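The general shape of payment-in-HTTP can be sketched without a network: a server answers an unpaid request with HTTP 402 and its payment terms, and the client retries with a payment attached. This is a minimal sketch of the pattern only; the header name, the `accepts` terms, the merchant address, and the in-memory "verification" below are assumptions, not the exact x402 or MPP wire format.

```python
import base64, json

PRICE_USD = "0.003"

def server(request):
    """A 'headless merchant': returns 402 with payment terms until paid."""
    payment = request.get("headers", {}).get("X-PAYMENT")
    if payment is None:
        return {"status": 402,
                "accepts": [{"asset": "USDC", "amount": PRICE_USD,
                             "payTo": "0xMERCHANT"}]}  # hypothetical terms
    receipt = json.loads(base64.b64decode(payment))
    if receipt["amount"] == PRICE_USD:   # a real facilitator would verify on-chain
        return {"status": 200, "body": {"result": "enriched-lead-data"}}
    return {"status": 402}

def client():
    """Agent flow: request, read the 402 terms, attach payment, retry."""
    resp = server({"headers": {}})
    assert resp["status"] == 402
    terms = resp["accepts"][0]
    receipt = base64.b64encode(json.dumps(
        {"asset": terms["asset"], "amount": terms["amount"],
         "payTo": terms["payTo"]}).encode()).decode()
    return server({"headers": {"X-PAYMENT": receipt}})

print(client()["status"])   # 200
```

The notable design choice is that the 402 response is machine-readable: an agent can discover the price, pay, and retry inside one request loop, with no account creation or checkout page in between.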
The data here is still early-stage. After filtering out non-natural activity (such as bot traffic), x402 processes roughly 1.6 million USD per month in agent-driven payments, far below Bloomberg’s recent report of 24 million USD (citing x402.org data). But surrounding infrastructure is expanding quickly: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.
Developer tools are a major opportunity, as “vibe coding” expands the range of who can build software, and the market for solving new developer problems is growing too. Companies like Merit Systems are building AgentCash for this world—a CLI wallet and marketplace that can connect both MPP and x402. These products allow agents to use stablecoins from a single balance to purchase the data, tools, and capabilities they need. As a result, an agent on the sales team can enrich leads by calling a single endpoint to pull data from Apollo, Google Maps, and Whitepages, all without the user leaving the command line.
There are several reasons this agent-to-agent (A2A) commerce tends to shift toward crypto solutions (and emerging card solutions). First is underwriting: when a payment provider onboards a merchant, it takes on that merchant's risk, and a headless merchant with no website or legal entity is difficult for traditional processors to underwrite. Second, stablecoins are permissionless and programmable on open networks: any developer can make a terminal accept payments without integrating a payment processor or signing merchant agreements.
We have seen this pattern before. Every time the business model shifts, it creates a new class of merchants that existing systems initially struggle to serve. The companies building this infrastructure are not betting on 1.6 million USD per month; they are betting on what that number becomes when agents are the default buyers.
4. Repricing trust in an agentic economy
For 300,000 years, cognition has been the constraint on human progress. Today, AI is pushing the marginal cost of execution toward zero. When scarce resources become abundant, constraints shift. When intelligence becomes cheap, what becomes expensive? Verification.
In an agent economy, the real limit on scale is our biological capacity to audit machine decisions and hold them accountable. The throughput of agent economies already far exceeds human oversight capacity. Because supervision is expensive and failures surface late, markets tend to skip it. Keeping a human in the loop in real time is rapidly becoming physically impossible.
But deploying unverified agents compounds risk. Such systems relentlessly optimize their stated objectives while quietly drifting from human intent, manufacturing a hollow illusion of productivity that conceals the debt incurred by large-scale AI adoption. To delegate our economic security to machines safely, trust can no longer rely on manual checks; it must be hard-coded into the architecture itself.
When anyone can generate content for free, what matters most is verifiable provenance: knowing where something came from and whether you can trust it. Blockchains, on-chain proofs, and decentralized identity systems are redrawing the boundaries of what can be securely deployed. AI is no longer a black box; you get a clear, auditable history.
As more AI agents begin transacting with each other, settlement and verification become intertwined. Systems for money movement—such as stablecoins and smart contracts—can also carry cryptographic receipts showing who did what, and who is responsible if something goes wrong.
Human comparative advantage moves up the stack: catching subtle errors, setting strategic direction, and taking responsibility when things go wrong. The durable advantage belongs to those who can cryptographically verify outputs, vouch for them, and accept liability when failures occur.
Unverified scale growth will inevitably become a form of debt that accumulates over time.
5. Preserving user control
For decades, new abstraction layers have redefined how users interact with technology. Programming languages abstract away machine code. Command-line interfaces gave way to graphical interfaces, then to mobile apps and APIs. Each transition hides more underlying complexity while still keeping the user in the loop.
In the agent world, users specify outcomes rather than actions, and the system determines how to achieve them. Agents not only abstract how tasks get done, but also who does them. Users set initial parameters and then step back, letting the system run on its own. The user’s role shifts from interaction to supervision; unless the user intervenes, the default state is “agent active.”
As users delegate more tasks to agents, new risks arise: ambiguous inputs may cause agents to act on incorrect assumptions without the user noticing; failures may go unreported with no clear diagnostic path; a single approval could trigger unexpected multi-step workflows that nobody anticipated.
This is where cryptography comes in. Cryptography has always been about minimizing blind trust. As users delegate more decisions to software, agent systems sharpen this problem and raise the bar for rigorous design: clearer boundaries, greater visibility, and stronger guarantees about what the system can do.
A new generation of crypto-native tools is emerging. Scoped delegation frameworks, such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and agent wallets, and Merit Systems' AgentCash, let users define what agents can and cannot do at the smart contract layer. Intent-based architectures such as NEAR Intents (which has processed over 15 billion USD in cumulative DEX transaction volume since Q4 2024) let users specify expected outcomes, such as "bridge tokens and stake," without specifying how to achieve them.
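What a scoped delegation enforces can be illustrated with a small off-chain analogue. This is a hypothetical sketch, not the API of any of the toolkits named above: the delegation fields and the `execute` gate below are invented for illustration, standing in for checks a delegation contract would run on-chain before each call.

```python
import time

def make_delegation(delegate, allowed_methods, spend_cap, expires_at):
    """User-defined scope: which calls the agent may make and how much it may spend."""
    return {"delegate": delegate, "allowed_methods": set(allowed_methods),
            "spend_cap": spend_cap, "spent": 0, "expires_at": expires_at}

def execute(delegation, caller, method, amount):
    """Enforce the scope on every call, as a delegation contract would on-chain."""
    if caller != delegation["delegate"]:
        raise PermissionError("not the delegated agent")
    if time.time() > delegation["expires_at"]:
        raise PermissionError("delegation expired")
    if method not in delegation["allowed_methods"]:
        raise PermissionError(f"method {method!r} not in scope")
    if delegation["spent"] + amount > delegation["spend_cap"]:
        raise PermissionError("spend cap exceeded")
    delegation["spent"] += amount
    return f"executed {method} for {amount}"

d = make_delegation("agent-7", {"swap", "stake"}, spend_cap=100,
                    expires_at=time.time() + 3600)
print(execute(d, "agent-7", "swap", 60))   # allowed: in scope and under the cap
try:
    execute(d, "agent-7", "swap", 60)      # second call would exceed the 100 cap
except PermissionError as e:
    print(e)
```

The key property is that the boundary is enforced at execution time by code the user controls, not by the agent's good behavior: the agent can act freely inside the scope and cannot act at all outside it.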
AI makes scale cheap and accessible, but trust remains difficult. And crypto restores trust at scale.
The internet infrastructure for agents to participate directly in the economy is under construction. The open question is whether it will be designed for maximum transparency, accountability, and user control, or whether it will be layered on top of systems that were never intended to support non-human behavior.