Artemis: The New Machine Economy Era of 2030 — Who Will Be the Ultimate Winner?
Author: Lucas Shin, Source: Artemis, Compiled by: Shaw Golden Finance
Overview
By 2030, intelligent agents (AI Agents) will become the main way people use the internet.
A brand-new agentic network requires new payment rails, a new monetary system, and new base components.
Value will concentrate across three major layers: the interface layer, which controls user interactions; the payment layer, which sits in the flow of funds; and the compute & custody layer, which operates the underlying infrastructure.
Long-tail agent business activities will run on open protocols.
Let’s first paint a scene.
The year is 2030. You’re 24, living in Burlington, Vermont, and you love investing—mostly in U.S. stocks, and you also participate in some crypto and prediction market trading on Kalshi. Two months ago, you started a fintech consulting company part-time.
Some days, like today, begin out of nowhere.
Wrr—
The ringtone jolts you awake, like a bucket of cold water splashed on your face. It’s your private intelligent agent, Nexus, sending you a message:
What exactly happened while you were asleep?
Overnight, Nexus dispatched a research sub-agent that, for $0.24, pulled information from 40 different data providers, compared Walmart’s latest earnings call against satellite imagery of store parking lots across the U.S., and updated your investment thesis. When the satellite data showed Walmart’s foot traffic declining, your portfolio agent cross-referenced Kalshi’s earnings sentiment market, confirmed the bearish signal, and finished de-risking before you woke up. Four years ago, this kind of strategy was still the preserve of Citadel Securities and a handful of quant funds, which paid millions of dollars for satellite imagery subscriptions. Even a $30,000-a-year Bloomberg terminal couldn’t cover it all; you still had to subscribe separately to satellite imagery and alternative data, then spend hours integrating and analyzing them. Now a 24-year-old in Vermont gets the same information edge as a Citadel quant analyst for less than the price of a cup of coffee.
Nexus’s sales sub-agent filtered 200 leads matching your target customer profile (Series B and later fintech companies in the U.S. Southeast that don’t yet use data service providers) and enriched each one at a cost of $0.002 per lead, calling APIs built by another agent and listed on an open marketplace. It identified the 3 leads with the strongest intent signals, then reached out to their calendar agents to negotiate meeting times. Before each meeting, it pulled the prospect’s alma mater, shared connections, company news, and funding history, and pinned a one-page brief directly to the meeting notes. The lead enrichment alone, bought as a SaaS subscription, would cost $200 per month per seat.
Nexus’s operations sub-agent ran comparison tests of your consulting website across 6 hosting providers: Vercel, Render, Railway, Fly.io, Netlify, and Cloudflare. It called each provider’s trial API at negligible cost, deployed a test environment, and measured latency, availability, and throughput. Railway delivered equivalent performance at one-third the cost. Nexus negotiated the monthly fee with Railway’s pricing agent, built a mirror of the site on the new server, and ran the full test suite to confirm everything worked. Without agents, this would take at least a week of searching the web, collecting quotes, and anxiety-inducing manual migration. You just confirm the execution with Nexus.
Your agents did all of that for just $0.67.
Now, multiply this scenario by every knowledge worker worldwide, every company, and every running intelligent agent.
Wrr—
Just like last week, you top up $5 with the credit card linked via Apple Pay, then go back to brushing your teeth. Under the hood, that $5 is converted from your credit card into stablecoins, but you never see the wallet. No deposits to think about, no blockchain to touch.
This is a glimpse of the machine economy: a brand-new business landscape in which AI agents continuously pay for things humans never had to pay for, at a transaction scale and speed far beyond human commerce. Imagine billions of such transactions every day.
But today’s internet isn’t ready to support all of this.
Right now, the internet is designed for humans. It filters non-human behavior through rate limits, CAPTCHAs, and API keys, and it monetizes human users via advertising. However, as large numbers of autonomous agents emerge, this business model will completely fail.
Traffic surges, but effective attention collapses.
Internet services long subsidized by ad revenue will face an order-of-magnitude increase in requests, none of which will ever be influenced by ads.
Agent payments naturally solve this. Small payments will become the key to access.
Pay to crawl, pay to access, pay to use.
Companies that build the infrastructure ultimately adopted widely by agents will capture the largest pool of new economic activity that our generation will ever see. Existing giants are already fighting to grab positions, but machine economics will also give birth to its own new giants. In the last wave of a new internet, we saw the rise of Google, Amazon, Facebook, PayPal, and Salesforce.
The era of agentic internet is about to arrive.
Market size outlook
By 2030, most network interactions will no longer happen through browsers. Our agents will browse, test, negotiate, form sub-agent teams, and execute transactions on our behalf. Every task they complete will generate a chain of small payments. These per-use costs may look like new expenses, but they replace far more costly tools and human labor. The better the tools, the better the agents perform, and the more autonomy we’ll grant them.
Demand and adoption speed
Let’s make a rough estimate.
In the earlier example, your agents (call their owner Joe) completed hundreds of transactions for just $0.67. Scale that to a mid-sized company of 500 people, each employee equipped with a personal agent plus hundreds of shared agents for departments like sales, finance, legal, and operations, and a typical day can easily generate 100,000 agent-initiated transactions.
There are more than 1 billion knowledge workers worldwide. 88% already use AI at work, so demand-side volume is huge and continuously growing. But today, most of this usage is limited to basic tasks like web search, document summarization, or writing emails. The shift to fully agentic automation hasn’t arrived yet—but once it starts, the speed will be extremely fast.
Instagram took 30 months to reach 100 million users, TikTok took 9 months, and ChatGPT only took 2 months (Reuters / UBS data). One reason for ChatGPT’s rapid adoption is that the conversational interface is already familiar to people, and there’s no need to learn new software or change usage habits—you just describe what you need, and the agent figures out how to get it done.
The only barrier is trust, and trust can be built far faster than people expect. Claude Code currently accounts for 4% of all public code commits on GitHub (over 135,000 commits per day). At the current growth rate, it should surpass 20% by the end of 2026, a fivefold increase in about 13 months. It took developers little more than a year to go from skepticism to handing production-grade code to AI at scale.
As models become smarter, interfaces become even more streamlined, and more technical complexity gets abstracted away and hidden, I believe the adoption speed of intelligent agents will accelerate further.
By 2030, even if only 60% of knowledge workers use agents and average daily spend is $3 to $5 (conservative, given that Joe completed three tasks before breakfast for $0.67), personal-side agent transaction volume alone would reach $800 billion to $1.4 trillion per year.
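As a sanity check on this estimate, here is the arithmetic in a few lines of Python. The only added assumption is reading "more than 1 billion knowledge workers" as roughly 1.25 billion, which reproduces the stated range:

```python
# Back-of-envelope check of the personal-side market estimate.
# Assumption (not stated in the text): "more than 1 billion knowledge
# workers" is taken as roughly 1.25 billion.
knowledge_workers = 1.25e9
adoption = 0.60                             # 60% use agents by 2030
daily_spend_low, daily_spend_high = 3, 5    # dollars per user per day

users = knowledge_workers * adoption
annual_low = users * daily_spend_low * 365
annual_high = users * daily_spend_high * 365

print(f"${annual_low/1e12:.2f}T - ${annual_high/1e12:.2f}T per year")
# With these inputs: roughly $0.82T to $1.37T, i.e. ~$800B to ~$1.4T
```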
Enterprise market
Dragonfly’s Robbie Peterson points out in an article that commercial intelligent agents are a reasonable evolution path for the SaaS model. I fully agree. They are no longer just augmenting workflows—they will completely replace existing processes. Just as today more than 95% of software spending comes from enterprises and government institutions, the usage and spending scale of intelligent agents on the enterprise side will very likely far exceed the personal market.
We’re already witnessing this change. Klarna replaced Salesforce with its internal AI system, saving about $2 million. ZoomInfo built AI agents to replace its transaction approval department, saving over $1 million per year. These are just early cases where single workflows were agentified, cutting costs by millions. Every enterprise has hundreds of such workflows in sales, finance, legal, operations, and R&D. Once intelligent agents are deployed across the entire company, the scale of related spending will be staggering.
Anyone can become a merchant
As code agents dramatically reduce development costs, the entry barrier for internet merchants is approaching zero. A wedding planner skilled at venue selection can package and sell the best workflows. An independent developer in Lagos can build a vertical-domain API and start earning revenue from agents around the world within a few hours. You just need to have domain expertise and use prompts to generate an API interface to start collecting payments.
But what happens if agents start selling services to other agents?
Suppose Joe wants to enter the new domain mentioned earlier: mid-sized healthcare companies in the U.S. Midwest with outdated payment infrastructure. If his agent starts from zero, inference and token costs accumulate quickly:
Filter 200 companies that match a specific persona (inference + API calls): about 500,000 tokens
Enrich every lead’s information (tech stack, funding, hiring data): 200 leads × about 5,000 tokens = 1,000,000 tokens
Lock in decision-makers for core customers: about 200,000 tokens
Score intent signals (hiring cadence, contract cycle): about 300,000 tokens
Research each decision-maker’s background: 20 leads × about 10,000 tokens = 200,000 tokens
Write personalized outreach copy: 20 leads × about 3,000 tokens = 60,000 tokens
Total: about 2.3 million tokens. Priced with a frontier model like Opus 4.6, that comes to $8 to $15.
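The tally above can be reproduced in a few lines. The per-token price band here ($3.50 to $6.50 per million tokens) is an assumption chosen to land in the quoted $8-to-$15 range, not a published price:

```python
# Tally of the token estimates above; model pricing is an assumption.
steps = {
    "filter 200 companies":        500_000,
    "enrich 200 leads":            200 * 5_000,
    "lock in decision-makers":     200_000,
    "score intent signals":        300_000,
    "research 20 decision-makers": 20 * 10_000,
    "personalized outreach x20":   20 * 3_000,
}
total_tokens = sum(steps.values())
print(f"total: {total_tokens/1e6:.2f}M tokens")   # ~2.26M, "about 2.3 million"

for price_per_m in (3.5, 6.5):                    # assumed $/1M tokens
    print(f"at ${price_per_m}/M: ${total_tokens/1e6*price_per_m:.2f}")
# ~$7.91 at the low end, ~$14.69 at the high end: the "$8 to $15" range
```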
Wait—didn’t Joe’s sales sub-agent do a similar process earlier for just a few cents?
Yes. Because most steps were already solved by other agents. Lead enrichment, intent scoring, and scheduling all have packaged interfaces on the open market, priced at only fractions of a cent.
This model creates a completely new business scenario. On the supply side, growth becomes bidirectional: humans build services, and agents build services too. A high-token-cost problem solved by one agent can turn into a cheap tool that all subsequent agents can use. In such a world, agents can codify their experience into workflows and sell them to other agents, subsidizing their own operating costs.
Every paradigm shift creates new merchants. Shopify empowered ecommerce sellers, Stripe empowered online businesses, and the machine economy will empower ad-hoc developers and autonomous intelligent agents.
A reality check
So how far are we from truly commercial agent-to-agent payment transactions?
My Artemis team has been tracking two leading agent payment protocols: Coinbase’s open x402 protocol, and the Machine Payment Protocol (MPP) launched jointly by Stripe and Tempo. Both share the same goal: let a user or agent pay for any network service (data, web crawling, model inference, or other APIs) within a single network request, with no account registration, API keys, or billing settlement to deal with.
At present, it’s still early.
The x402 protocol’s transaction volume at the end of 2025 was artificially inflated by memecoin hype and leaderboard farming. The chart above shows “real” transaction activity after a proprietary algorithm filters out fake transactions. Strip away that noise and it becomes clear the agent economy hasn’t truly arrived yet: most activity today is developers testing paid APIs and AI tools, not actual agent commerce.
Before this model truly takes off, two major core problems must be solved:
The supply side isn’t formed yet: there’s a severe shortage of practical API interfaces that can generate real willingness to pay from agents.
There’s no mature discovery and aggregation layer: even if high-value interfaces exist, agents currently lack a reliable way to discover them.
Because the entire ecosystem is still developing, using transaction volume as the core metric is too early. A more reasonable observation metric is supply-side growth—i.e., the number of merchants providing services to agents. We refer to these merchants as service providers.
The chart above shows the cumulative number of qualifying service providers (sellers) over time. A qualifying seller must complete more than two “real” transactions and have at least two independent buyers. In October of last year that number was still below 100; it’s now over 4,000. I expect growth to accelerate further, driven by three major trends:
Artificial intelligence is lowering the barrier to creating digital products (as described earlier), meaning more people and AI agents will become merchants.
New services will be designed with agents as the first-class priority. Agents are becoming the core customers, so the form of products built for them will look fundamentally different: using APIs instead of web pages, using instant access instead of registration flows, and using pay-as-you-go instead of subscription models.
Existing service providers will be forced to pivot. As more users interact through AI interfaces rather than manually browsing webpages, the ad-dependent business model will completely fail—because there will be no monetizable human attention. Enterprises will have no choice but to charge directly for content and services.
These forces will create a positive flywheel that amplifies both supply and demand ends, ultimately igniting the entire agent economy.
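For concreteness, the seller-qualification rule behind the chart above (more than two “real” transactions, at least two independent buyers) can be sketched as a simple filter. The data shapes and seller names here are illustrative:

```python
# Minimal sketch of the seller-qualification rule: a seller counts if it
# has more than two "real" transactions and at least two distinct buyers.
from collections import defaultdict

def qualifying_sellers(transactions):
    """transactions: iterable of (seller_id, buyer_id) pairs, assumed
    already filtered down to 'real' (non-farmed) activity."""
    buyers = defaultdict(list)
    for seller, buyer in transactions:
        buyers[seller].append(buyer)
    return {s for s, bs in buyers.items()
            if len(bs) > 2 and len(set(bs)) >= 2}

txs = [
    ("lead-enrich-api", "agent-a"), ("lead-enrich-api", "agent-b"),
    ("lead-enrich-api", "agent-a"),    # 3 txs, 2 buyers -> qualifies
    ("intent-score-api", "agent-a"), ("intent-score-api", "agent-a"),
    ("intent-score-api", "agent-a"),   # 3 txs, 1 buyer  -> no
    ("calendar-api", "agent-a"), ("calendar-api", "agent-b"),  # 2 txs -> no
]
print(qualifying_sellers(txs))   # {'lead-enrich-api'}
```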
Industry landscape
The architecture of the agent transaction ecosystem is rapidly taking shape. Many startups are popping up like mushrooms after rain, each focused on filling one of the missing pieces in this architecture. At the same time, growth-stage companies in fintech and software services (SaaS) are also transitioning to native agent transaction models. Over the past twelve months, almost all mainstream payment giants and AI labs have launched or announced protocols related to agent transactions.
We’ve mapped more than 170 companies across five major layers: interaction interfaces, intelligent agents, account systems, payment infrastructure, and AI engines. Here we condense it to about 80 core institutions:
We break it down layer by layer, from top to bottom.
Interface layer
The interface layer is closest to the user. It’s responsible for routing the user’s intent (needs) to the required tools or services (supply). Whoever defines how intelligent agents discover, evaluate, and select services will have enormous control over every layer below. We will focus on the two most important categories within this layer:
User interfaces
This is the entry point through which most people directly interact with agents. Apple, Google, OpenAI, Anthropic, xAI, and Perplexity are all building such interfaces, and their forms are quickly moving beyond simple chat: voice assistants, desktop assistants, embedded copilots, and browser agents keep emerging, each closer to real user workflows. The platform that becomes users’ default AI interface will be the starting point for every agent-initiated transaction, and the winner of this race gains a further, substantial advantage.
AI labs have already crawled and trained on essentially all of the internet’s data; the best remaining training data is human-guided feedback. Every time you accept or reject a response, make a correction, or express a preference to Claude or ChatGPT, the interface captures that data for model training or resale. Whoever controls the interface effectively controls the feedback loop that improves both the user experience and the model itself. This is also why Anthropic launched Claude Code, why Google acquired Windsurf, and why OpenAI tried to acquire Cursor. Once your agent has accumulated context about your preferences, workflows, and favorite tools, your switching cost becomes extremely high.
Service discovery
When Joe’s agents need a lead enrichment interface or a satellite data provider, how do they find the right service? This might be the biggest unresolved challenge in the entire ecosystem architecture. Most current solutions are either hard-coded tool lists or curated service marketplaces. Major platforms are building their own systems: OpenAI and Stripe have launched ACP, Google and Shopify have launched UCP, and Visa has launched TAP. Fundamentally, these are merchant directories that only work if both the platform and the merchants proactively integrate. This model performs well in typical scenarios, but as the barriers to creating and selling digital services drop dramatically, a lot of niche, highly customized applications will emerge—and curated models can’t satisfy these long-tail needs.
Companies represented by Coinbase, Merit Systems, Orthogonal, and Sapiom are building open alternatives. They build aggregators and underlying infrastructure so that agents can autonomously search and pay for services at runtime, without pre-integration or business partnerships. As the supply side (i.e., network resources) grows exponentially, the difficulty of solving this problem becomes extremely high. But whoever can crack ordering and recommendation systems—matching agents to the right services at the right time—will gain enormous industry influence.
Whether agent transactions ultimately move toward curated closed models or open ecosystems, and how that outcome shapes value distribution, is one of the most central debates in the field. We’ll return to it later.
Intelligent agents and account layer
To get tasks done, intelligent agents alone are far from enough. Joe’s sales sub-agent completed the full workflow of filtering 200 leads, information enrichment, and booking three meetings, while Joe didn’t need to configure any tools, manage API keys, or approve each step individually. Most of the infrastructure that makes all of this possible is invisible to the end user. But without these facilities, agents are just large language models without execution capability. Below is an overview of the core building blocks required to make this happen:
Tools and standards
These protocols and frameworks let agents interact with the outside world. MCP (Model Context Protocol, initiated by Anthropic and now stewarded by the Linux Foundation) lets agents connect to external data and tools: calling APIs they’ve never seen before, reading databases, or invoking a service on the spot. A2A (proposed by Google) defines how agents built on different platforms discover and collaborate with one another. Frameworks from LangChain, Nvidia, and Cloudflare give developers the building blocks to create and deploy agents on top of these protocols. OpenClaw, recently acquired by OpenAI, folds context management and tool calling into a single local-first framework, making it far easier to build agents that can autonomously discover and pay for services.
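As a rough illustration of what an MCP interaction looks like on the wire, here is a sketch of a tool-invocation request. MCP speaks JSON-RPC 2.0, but the tool name and arguments below are hypothetical, and real clients use an SDK rather than raw dicts:

```python
# Sketch of the kind of message an MCP client sends to invoke a tool.
# The tool name and arguments are hypothetical examples.
import json

def tool_call_request(request_id, tool_name, arguments):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",   # MCP method for invoking a server-side tool
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = tool_call_request(1, "enrich_lead", {"domain": "example.com"})
print(msg)
```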
The core question in this area is: Will these standards ultimately unify, or will they fragment? Can the commercial frameworks built on top of them capture value before tools become commoditized?
Identity authentication
After agents can communicate with each other, trust must be established. Before an agent can execute transactions or sell services, it must prove its authorized subject and operating permissions, and retain an activity record that other agents can verify.
There are currently many technical paths, including: biometric identity verification (Worldcoin, Civic), on-chain agent reputation systems (ERC-8004), and verifiable credentials (Dock, Reclaim).
This design space is wide, and the risk is extremely high: how much money can your agent spend before you approve it? Can it sign contracts on your behalf? Can it delegate permissions to sub-agents? These rules and security boundaries will most likely be finalized at the account layer.
Wallets
Obviously, for agents to make payments they must have a wallet. Coinbase, Safe, MetaMask, Phantom, MoonPay, Privy, and many other vendors are building in this space, providing features including programmatic access and creation, permission delegation, per-transaction spending limits, payment receiving whitelists, and cross-chain operability without requiring users to manually confirm every action. This is one of the most fiercely competitive tracks in the entire ecosystem—and it also raises a key question: where exactly is an enterprise’s moat? Will this area eventually become commoditized?
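To make those permission features concrete, here is a minimal sketch of wallet-level guardrails (a per-transaction limit, a daily cap, and a payee whitelist). The field names and thresholds are invented for illustration and don’t correspond to any vendor’s API:

```python
# Illustrative agent-wallet policy: per-tx spending limit, daily cap,
# and a payment-receiving whitelist. All names/values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentWalletPolicy:
    per_tx_limit: float                  # max dollars per transaction
    daily_limit: float                   # max dollars per day
    payee_whitelist: set = field(default_factory=set)
    spent_today: float = 0.0

    def authorize(self, payee: str, amount: float) -> bool:
        if payee not in self.payee_whitelist:
            return False                 # unknown counterparty
        if amount > self.per_tx_limit:
            return False                 # single payment too large
        if self.spent_today + amount > self.daily_limit:
            return False                 # would blow the daily budget
        self.spent_today += amount
        return True

policy = AgentWalletPolicy(per_tx_limit=0.50, daily_limit=5.00,
                           payee_whitelist={"railway-pricing-agent"})
print(policy.authorize("railway-pricing-agent", 0.24))  # True
print(policy.authorize("unknown-seller", 0.10))         # False: not whitelisted
```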
Payment layer
The payment layer sits deeper in the architecture and should feel invisible to end users, but every dollar in machine economics will flow through it. When Joe’s agents pay $0.24 overnight to retrieve data from 40 providers, he doesn’t have to choose the card network, currency, or settlement blockchain for every transaction.
The core challenge is that traditional payment rails were designed for humans clicking a “buy” button, not for agent API calls that happen thousands of times per minute for amounts under a cent. Card network transactions carry a roughly fixed cost of $0.03–$0.04 each, plus 2.3%–2.9% in fees. That works for a $400 hotel booking; it’s a non-starter for multi-step agent transactions.
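A quick calculation shows why the fixed fee is fatal for sub-cent payments (the fee figures are the ones quoted above; the exact blend is an assumption):

```python
# Card-fee economics: the fixed component dominates tiny payments.
def card_fee(amount, fixed=0.03, pct=0.025):
    return fixed + amount * pct

for amount in (400.00, 0.005):   # a hotel booking vs a single API call
    fee = card_fee(amount)
    print(f"${amount}: fee ${fee:.4f} = {fee/amount:.1%} of the payment")
# $400 booking: ~2.5% overhead. $0.005 call: the fee is over 600% of the payment.
```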
This has given rise to a new batch of protocols and monetary systems specifically for agent transactions, while traditional giants are also upgrading existing infrastructure to support these needs.
Key points are as follows:
Payment rails
These protocols and standards define how intelligent agents initiate, route, and complete payment settlement. At present, two main technical routes have formed:
x402 (Coinbase/Cloudflare) and MPP (Stripe/Tempo) are designed for machine-native transactions: agents call interfaces, get quotes, sign payments, and receive data—all completed within a single HTTP request. Settlement uses stablecoins, with per-transaction costs only a few fractions of a cent.
ACP (OpenAI/Stripe), AP2 (Google/PayPal), and Visa’s TAP take another approach: modifying existing card payment infrastructure to fit agent scenarios. These solutions suit high-value transactions, where buyer protections and merchant acceptance matter more than settlement speed and cost.
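To make the machine-native flow concrete, here is a toy simulation of the request, quote, pay, retry pattern that x402-style protocols describe. This is a mock, not the actual wire format; the header name and payload shape are simplifications, and real servers verify cryptographic signatures rather than comparing amounts:

```python
# Toy simulation of a single-request pay-per-call flow, loosely modeled
# on x402-style protocols. Not the real wire format.
import base64, json

PRICE = "0.001"   # dollars; the real protocols settle in stablecoins

def server(headers):
    payment = headers.get("X-PAYMENT")
    if payment is None:
        # First attempt: respond 402 with payment requirements.
        return 402, {"accepts": [{"asset": "USDC", "amount": PRICE}]}
    claim = json.loads(base64.b64decode(payment))
    if claim["amount"] == PRICE:         # real servers verify a signature
        return 200, {"data": "lead enrichment result"}
    return 402, {"error": "insufficient payment"}

def agent_fetch():
    status, body = server({})            # 1st call: get the quote
    if status == 402:
        quote = body["accepts"][0]
        claim = base64.b64encode(json.dumps(
            {"asset": quote["asset"], "amount": quote["amount"]}
        ).encode()).decode()
        status, body = server({"X-PAYMENT": claim})   # paid retry
    return status, body

print(agent_fetch())   # (200, {'data': 'lead enrichment result'})
```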
Stablecoins and settlement
Intelligent agents need a programmable, fast, low-cost, global currency. Stablecoins fully meet these requirements, so they become the natural choice for x402 and MPP transactions. At the same time, card payment rails can still provide buyer protection, and merchants’ spending habits are mature—this remains important for high-value transactions. Underlying public chains (e.g., Base, Solana, Tempo) introduce another critical question: which chains can support the processing throughput, transaction finality, and cost structure required by agent-scale transactions?
Service providers
These institutions act as intermediaries between intelligent agents and merchants, handling complex compliance reviews, merchant onboarding, permission authentication, and so on. Coinbase, Stripe, and PayPal are expanding existing ecosystems to support agent transactions. They bet that their merchant networks and compliance infrastructure can form a competitive advantage. Others like Sponge and Sapiom address the cold-start problem from the emerging merchant side, enabling any API-based business to easily start accepting agent payments. As payment rails, protocols, and merchant counts continue to grow, intermediaries may become the critical connecting link that prevents the entire system from fragmenting.
AI engine layer
This layer doesn’t need much introduction. All agent interactions, reasoning steps, and tool calls are driven by it. But the business model changes in this layer happen far faster than in other parts of the architecture, and where the value ultimately flows isn’t as clear as it looks on the surface. We focus on two categories:
Compute and custody
Every time Joe’s intelligent agents reason through a task, call tools, or create sub-agents, they consume compute resources. But model inference is only part of it. With the explosive growth of low-code / ad-hoc development applications and agent-built services, a large number of new interfaces keep emerging, and they all require hosting substrates. As of May 2025, the number of accessible web pages grew by 45% within just two years; and with code agents making it extremely easy to launch new services, this growth rate will accelerate further. This means compute demand is growing in parallel on both ends: on one side, more agents process more tasks; on the other, more services launch continuously to meet their needs.
Ultra-large cloud providers (AWS, Google Cloud, Nvidia) are obvious core players. AWS and Google Cloud also keep simplifying the deployment process for agent backends and APIs on their infrastructure. Cloudflare focuses on edge computing, providing low-latency serverless compute for agent-facing services. Meanwhile, decentralized compute platforms like Akash, Bittensor, Nous, and others satisfy excess compute demand by aggregating global GPU resources and selling them at extremely low prices.
Foundation models
Foundation models are the “brains” of the whole system. The frontier labs, Anthropic, OpenAI, Google, and Meta, keep expanding what agents can do, while the cost of running these models drops quickly. At the end of 2022, running a GPT-4-level model cost about $20 per million tokens. By early 2026, models of equivalent performance cost about $0.05 per million tokens, a 400x reduction in a bit over three years. Hardware upgrades, vendor competition, and optimizations like prompt caching and batching keep pushing inference costs down. Meanwhile, as reasoning capability is distilled into small open-weight models that are extremely cheap to run, the cost of building intelligence also falls sharply. On some benchmarks, the gap between open-weight and closed-weight models has narrowed to just 1.7%.
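Taking the two quoted prices at face value, the implied decline works out as follows (the ~3.2-year span is an assumption spanning "end of 2022" to "early 2026"):

```python
# Ratio of the two quoted inference prices, plus the annualized rate.
cost_2022 = 20.00   # $/1M tokens, GPT-4-level, late 2022
cost_2026 = 0.05    # $/1M tokens, equivalent performance, early 2026
years = 3.2         # assumed timespan

ratio = cost_2022 / cost_2026
annual = ratio ** (1 / years)
print(f"{ratio:.0f}x cheaper overall, ~{annual:.1f}x cheaper per year")
# 400x cheaper overall, ~6.5x cheaper per year
```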
This is a major boon for machine economics.
Cheaper intelligence means cheaper agents. That lets a 24-year-old independent founder in Vermont easily afford operating costs—thereby further boosting transaction activity across every layer of the ecosystem. If foundation models get trapped in price competition like cloud service providers today, then value might ultimately concentrate in the upstream and downstream layers of the model stack, rather than in the models themselves.
Who will be the winner?
By 2030, most of your digital interactions won’t need browsers, search engines, or app stores anymore. You just state your requirements, and intelligent agents handle everything end-to-end: finding the right services, negotiating terms, completing payments, and delivering the final result. The internet will look fundamentally different.
You can think of it as a search engine optimization era for agents. There will be more and more API interfaces, and fewer and fewer interaction interfaces for humans.
In a world like this, who captures the value?
Sam Ragsdale of Merit Systems wrote a piece comparing today’s agent transaction ecosystem to the early internet. The curated agent service marketplaces built by major platforms (ACP, UCP, TAP), he argues, follow the path of 1990s AOL: polished user experience, closed systems, and one core limitation, that every service provider must be manually screened and reviewed. Open protocols like x402 and MPP are rougher around the edges, but permissionless: anyone can build an interface and start earning revenue through agents, with no business team or legal review required. In the 1990s the walled gardens offered the better product experience, but the open internet offered infinite possibilities.
In the end, the open internet won.
The same logic is repeating. ACP, UCP, and TAP will connect with top AI labs and serve mainstream scenarios well, but they’re limited to agents that operate within pre-screened service directories, only completing tasks the platform predefines. Agents that can connect to the full open protocol ecosystem have much broader capability boundaries.
Remember: today the most vibrant part of the internet comes from the long-tail web traffic created by the HTTP protocol.
We must humbly admit that we can’t imagine the full picture of an open agent internet, just as in 1995 no one could have predicted ride-hailing or social media. Once we give agents the tools they need, we can’t foresee what they will create or which services people will pay for.
As we discussed earlier, foundation models are rapidly converging toward commoditization. Value may shift toward other layers in the technical stack. Developer tools, wallets, and identity infrastructure are crucial, but as standards unify, these areas are also likely to become commoditized. Therefore, I believe value will concentrate in three major areas: interaction interfaces, payments, and compute.
Interaction interfaces
Interaction interfaces determine spending limits, approval flows, and trust delegation mechanisms. A platform that can craft the most personalized experience for users will carry the most transaction traffic.
Apple is the most underrated player here. Its devices are embedded so deeply in daily life that user switching costs are extremely high. If Siri matures into a full agent entry point, Apple doesn’t need to build the very best models; by maintaining the best interaction entry point, it controls the starting point of billions of transactions.
Google’s transition is even harder. Moving from humans manually browsing to agentic intelligent filtering will erode its core advertising revenue. But Google has advantages no other company can match: it has accumulated decades of personal data across search, email, calendars, maps, and documents. And there’s also the enterprise-side migration cost—Google Workspace is embedded in millions of businesses, with employees’ email, files, and workflows running on Google infrastructure. If there’s any company that can build the most personalized agents for both consumers and enterprises, it would be Google. The question is whether it can monetize agent services as efficiently as it monetizes search traffic.
Merit Systems is my dark horse. They are building service-discovery infrastructure for the open agent economy (AgentCash, x402 scanning, MPP scanning) and also developing consumer-side interfaces (Poncho). Their core logic: whoever controls agents’ service-discovery channel and inserts itself into the money flow will occupy the position Google held in the early internet. It’s an ambitious bet, but if open agent transactions beat curated, closed models, Merit becomes the best-positioned aggregation layer. For now it’s still early, much as when Google competed against AOL’s closed ecosystem, then valued in the billions—roughly $350 billion in today’s terms.
Payments
Whoever controls the flow of funds captures a share of every transaction. I’m most confident about this layer’s outlook because its revenue will grow in lockstep with transaction volume.
Stripe and Tempo have the strongest advantages in machine-native payments. Stripe already has a mature developer ecosystem and a large merchant network. Tempo, meanwhile, was built specifically for the machine economy’s massive transaction volume: streaming payment rails, ~500ms transaction finality, native support for both cards and stablecoins, gas fees payable in dollars (no token-volatility risk), and server-side sponsored transactions. If MPP becomes the default machine-native payment rail, Stripe and Tempo will take a cut of every agent-initiated transaction.
Circle will grow in step with the expansion of the agent economy. I firmly believe stablecoins will become the settlement layer of the machine economy, in which case Circle will earn reserve income on every dollar held in agent wallets. USDC is the most widely accepted stablecoin across exchanges, wallets, public chains, and payment protocols; new developers will choose it by default, deepening its ecosystem integration and raising the bar for competitors.
Visa will adapt. Remember how Joe topped up with a credit card via Apple Pay and the underlying system automatically converted it into stablecoins—while he never saw a wallet or thought about the blockchain? That’s the future norm: consumers keep using familiar card payments while stablecoins handle settlement underneath. As payment rails upgrade, Visa will leverage the brand trust it holds with both consumers and merchants to keep its position.
Compute and custody
As the number of agents grows, inference demand grows with it; as more ad-hoc services appear, hosting demand expands. Whichever model, protocol, or interface becomes mainstream, compute providers will benefit. AWS and Cloudflare are the two best-positioned companies in this space, for similar reasons.
First, they already support most of the internet’s traffic. AWS holds about 30% of cloud infrastructure share across 37 regions worldwide. Cloudflare provides security and performance services for over 20% of websites, meaning requests to those sites all flow through its network. When new agent-oriented interfaces surge, developers will default to deploying on platforms they’re familiar with.
Second, they are building monetization infrastructure for the next generation of the internet. As ad-based models fade and paid-access models rise, both companies natively support this transition. Cloudflare has launched paid crawling services, allowing any website on its network to charge AI crawlers via x402 (Stack Overflow is already using it). And AWS is a founding member of the x402 foundation, publishing an open-source serverless x402 reference architecture. Any service running on these two platforms can easily enable native agent monetization.
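The pay-per-crawl flow described above can be sketched end to end: a server answers an unpaid request with HTTP 402 plus its payment terms, and the agent pays and retries. This is a toy sketch in plain Python, not the real x402 SDK—the `X-PAYMENT` header and the `accepts` terms follow the protocol’s general shape, but `verify_payment` is a stand-in for actual on-chain stablecoin settlement.

```python
# Toy sketch of an x402-style "pay per request" exchange. Header names and
# the payment-proof format are simplified assumptions, not the real spec.

def verify_payment(proof: str, price_usd: float) -> bool:
    # Stand-in for on-chain settlement verification in the real protocol.
    return proof.startswith("paid:") and float(proof.split(":")[1]) >= price_usd

def handle_request(headers: dict, price_usd: float = 0.001):
    """Server side: return 402 with payment terms until a payment proof arrives."""
    proof = headers.get("X-PAYMENT")
    if proof is None:
        return 402, {"accepts": [{"asset": "USDC", "amount": price_usd}]}
    if verify_payment(proof, price_usd):
        return 200, {"data": "crawl-worthy content"}
    return 402, {"error": "invalid payment"}

def agent_fetch() -> dict:
    """Client side: try for free, pay on 402, then retry with the proof."""
    status, body = handle_request({})          # first attempt, no payment
    if status == 402 and "accepts" in body:
        terms = body["accepts"][0]
        proof = f"paid:{terms['amount']}"      # a real agent signs a transfer here
        status, body = handle_request({"X-PAYMENT": proof})
    return {"status": status, "body": body}
```

The key property is that the price discovery and the payment ride on the same HTTP exchange the crawler was already making, which is why any site behind Cloudflare can turn it on without changing its content stack.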
Identity authentication
I’m pessimistic about companies like Worldcoin. The systems they build require proving humanity for every interaction. That maximalist vision assumes people will care whether the counterpart online is a human or an agent, but we have already grown used to not knowing. In my view, the more likely future is that small payments, not proof-of-humanity credentials, become the primary filter for most internet traffic.
Paying for access will be more useful than “prove you’re human.”
Identity systems matter only in certain high-risk interactions. But in most agent transactions, (small) payments themselves are the trust credential.
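The argument that a small fee can replace an identity check comes down to arithmetic: a fee that is negligible for a legitimate agent becomes ruinous at spam scale. A toy calculation with illustrative numbers (the fee and request volumes are my assumptions, not figures from the article):

```python
# Illustrative arithmetic: a per-request micro-fee as a traffic filter.
# Working in millionths of a dollar keeps the math in exact integers.

FEE_MICRO_USD = 1_000  # assumed fee of $0.001 per request

def cost_usd(requests: int, fee_micro: int = FEE_MICRO_USD) -> float:
    """Total dollars a sender pays to issue `requests` gated requests."""
    return requests * fee_micro / 1_000_000

legit_daily = cost_usd(50)          # a personal agent's normal day
spam_run = cost_usd(1_000_000)      # a million-message spam campaign
```

Under these assumed numbers a legitimate agent pays five cents a day while a million-message campaign costs $1,000, which is the whole trust mechanism: no identity lookup, just an economic cost that scales with abuse.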
Conclusion
When Joe wakes up, he doesn’t think about payment rails or agent identity protocols. He just looks at his phone and learns that the agents have completed the transactions, scheduled meetings, and found a cheaper server. All the technical architecture layers discussed in this article have been perfectly abstracted away, so he doesn’t need to worry about anything.
We’re still moving toward this future. The relevant protocols are live but not yet widespread; the supply side is growing but still thin; service discovery remains unsolved; and identity layers are highly fragmented. Most transactions today are developer tests rather than real agent activity. But the ecosystem puzzle is being completed faster than the data can show. People bearish on early infrastructure watch only the downward curve; I’m thinking about what the picture looks like when everyone has an agent, or a fleet of agents, with real economic agency.
If you haven’t acted yet, it’s time to transition to the agent-economy model.