Artemis: The New Machine Economy Era of 2030 — Who Will Be the Ultimate Winner

Author: Lucas Shin, Source: Artemis, Compiled by: Shaw Golden Finance

Overview

  • By 2030, intelligent agents (AI Agents) will become the primary way people use the internet.

  • A brand-new agentic network will require new payment rails, a new monetary system, and foundational components.

  • Value will concentrate across three major layers: the interface layer, which controls user interactions; the payment layer, which sits in the flow of capital; and the compute & custody layer, which operates the infrastructure.

  • Long-tail intelligent agent business activity will run on open protocols.

Let’s first paint a picture of a scenario.

It’s the year 2030. You’re 24 years old, living in Burlington, Vermont, and you love investing—mostly allocating to US stocks, and also participating in some cryptocurrency and prediction market trading on Kalshi. Two months ago, you started a fintech consulting company on the side.

Some days—like today—begin out of nowhere.

Buzz—

The phone’s ringtone wakes you up, like a bucket of cold water splashed across your face. It’s a message from your personal intelligent agent, Nexus:

Good morning, Joe. I completed the following work overnight:

Portfolio update: Reduced your $WMT position by 15% overnight. Satellite data shows foot traffic at stores has fallen, and earnings-call sentiment has shifted bearish—cross-validated.

Schedule update: 3 meetings have been booked for this afternoon, and the brief has been pinned into the meeting notes.

Expense optimization: Found a new cloud server provider—similar performance, annual fee reduced from $840 to $290. Can be migrated at any time.

Total spend: $0.67

What exactly happened while you were asleep?

  1. Nexus dispatched a research sub-agent, costing $0.24, which overnight pulled information from 40 different data providers. It compared the contents of Walmart’s latest earnings call with satellite images of store parking lots across the US, updating your investment thesis. When the satellite data showed Walmart’s customer traffic declining, your portfolio agent cross-referenced Kalshi’s earnings-sentiment market, confirmed the bearish signal, and completed the position reduction before you woke up. Four years ago, these trading strategies were still the exclusive domain of Citadel Securities and a small number of quant funds, which paid millions of dollars to subscribe to satellite imagery. Even a Bloomberg terminal costing $30k per year couldn’t cover all the information—you still had to subscribe separately to satellite imagery and alternative data, and spend hours integrating and analyzing it. Now, a 24-year-old in Vermont can get the same information edge as a quant analyst at Citadel for less than the cost of a cup of coffee.

  2. Nexus’s sales sub-agent filtered 200 leads matching your target customer profile—fintech companies in the southeastern US, Series B and later, that haven’t yet used data service providers—and enriched each at a cost of $0.002 per lead. The interfaces it used were developed by another agent and listed on an open marketplace. It selected the 3 leads with the highest intent signals, then contacted their scheduling agents to negotiate meeting times. Before each meeting, it pulled the prospect’s alma mater, shared connections, company news, and funding history, and compiled a one-page brief for you—pinned into the meeting notes. The lead enrichment alone would cost $200 per account per month under a typical SaaS subscription.

  3. Nexus’s operations sub-agent ran comparative tests of your consulting website across 6 server providers: Vercel, Render, Railway, Fly.io, Netlify, and Cloudflare. It called each provider’s trial APIs at very low cost, deployed a test environment, and measured latency, availability, and throughput. In the end, Railway delivered equivalent performance at one-third the cost. Through Railway’s pricing agent, Nexus negotiated the monthly fee, built a mirror of your site on the new servers, and ran a full test suite to make sure everything works. Without agents, this would take at least a week: searching the web, requesting quotes, and an anxiety-inducing manual migration. All you have to do is tell Nexus to execute.

Your agents completed all of this for just $0.67.

Now, multiply this scenario by every knowledge worker worldwide, every enterprise, and every intelligent agent in operation.

Buzz—

Nexus: Insufficient balance. Remaining: $1.87.

As you did last week, you top up $5 using the credit card linked via Apple Pay, then continue brushing your teeth. At the underlying layer, that $5 is converted from your credit card into stablecoins—but you can’t see the wallet at all. You don’t need to think about deposits, and you never need to touch the blockchain.

This is a glimpse of the machine economy—a brand-new business scenario where AI agents continually spend money on things humans have never paid for, with transaction scale and speed far beyond the realm of human commerce. You can imagine that tens of billions of transactions could be generated every day.

But today’s internet isn’t ready to support all of this.

At present, the internet is designed for humans. It filters non-human behavior via rate limiting, CAPTCHAs, and API keys, and monetizes through advertising to human users. However, as large numbers of autonomous agents emerge, this business model will completely fail.

Traffic surges; effective attention plummets.

Web servers long subsidized by ad revenue will face an order-of-magnitude increase in requests—requests from sources that will never be swayed by ads.

Agent payments naturally solve this problem, and micro-payments will become the key to access.

Pay to crawl, pay to access, pay to use.

Companies that build the infrastructure agents ultimately adopt at scale will capture the largest new pool of economic activity our generation will ever see. The current giants are already rushing to stake their positions, but the machine economy will also spawn new giants of its own—just as the last wave of the “new internet” gave rise to Google, Amazon, Facebook, PayPal, and Salesforce.

The era of an agentic internet is almost here.

Market size outlook

By 2030, most internet interactions will no longer happen through browsers. Our intelligent agents will browse, test, and negotiate on our behalf, assemble teams of sub-agents, and execute transactions. Every task they complete will generate a chain of small payments. Even though the per-transaction costs look like new expenses, they actually replace tools and labor that cost far more. The better the tools, the better agents perform—and the more autonomy we’ll grant them.

Demand and adoption pace

Let’s do a rough estimate.

In the earlier example, Joe’s agents completed hundreds of transactions for only $0.67. If you scale this to a mid-sized company with 500 employees—each employee equipped with a personal agent, plus hundreds of shared agents across departments like sales, finance, legal, operations—then it’s easy for the company to generate 100k transactions initiated by agents every day.
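That back-of-the-envelope figure can be reproduced with illustrative assumptions; the per-agent transaction rates below are mine, not the article’s:

```python
# Rough estimate of daily agent-initiated transactions at a 500-person company.
personal_agents = 500          # one agent per employee
shared_agents = 400            # assumed: sales, finance, legal, operations, etc.
tx_per_personal_agent = 120    # assumed small payments/tool calls per day
tx_per_shared_agent = 100      # assumed rate for shared departmental agents

daily_tx = (personal_agents * tx_per_personal_agent
            + shared_agents * tx_per_shared_agent)
print(daily_tx)  # 100000
```

With even modest per-agent activity, a six-figure daily transaction count falls out naturally.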

There are over 1 billion knowledge workers worldwide, and 88% use AI at work. The demand-side volume is enormous and continues to grow. But for now, most of these uses are limited to basic tasks such as web search, document summarization, or writing emails. The shift fully to intelligent agents hasn’t arrived yet—but once it starts, the speed will be extremely fast.

Instagram reached 100 million users in 30 months, TikTok in 9 months, and ChatGPT in just 2 months (Reuters / UBS data). One reason ChatGPT spread quickly is that the conversation interface is already familiar to humans, and there’s no need to learn new software or change usage habits—you just describe what you need, and the agent will find a way to get it done.

The only obstacle is trust, and trust is being built far faster than people expect. Claude Code already accounts for 4% of all public code commits on GitHub—over 135k commits per day, a 42,896x increase in commit volume over 13 months. At the current growth rate, its share is expected to surpass 20% by the end of 2026. Developers took only a little over a year to move from skepticism to having AI produce production-grade code at scale.

As models become smarter, the interfaces get more streamlined, and more of the technical complexity is abstracted away, I believe the adoption speed of intelligent agents will accelerate further.

By 2030, even if only 60% of knowledge workers use agents, spending just $3 to $5 per person per day (a conservative estimate—remember, Joe completed three tasks before breakfast for $0.67), agent transactions on personal devices alone will reach $800 billion to $1.4 trillion annually.
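The arithmetic behind that range, assuming roughly 1.25 billion knowledge workers (the article says only “over 1 billion”; the exact base is my assumption):

```python
# Reproduce the $800B–$1.4T range from the stated inputs.
knowledge_workers = 1.25e9        # assumed base for "over 1 billion"
adoption = 0.60                   # 60% of knowledge workers use agents
low_daily, high_daily = 3.0, 5.0  # dollars per user per day

users = knowledge_workers * adoption
low_annual = users * low_daily * 365    # ~ $0.82 trillion
high_annual = users * high_daily * 365  # ~ $1.37 trillion
print(f"${low_annual / 1e12:.2f}T to ${high_annual / 1e12:.2f}T")
```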

Enterprise market

Robbie Peterson of Dragonfly points out in an article that commercial intelligent agents are a reasonable next evolution for the SaaS model. I agree wholeheartedly. They are no longer just assisting workflows—they will completely replace existing workflows. Just as today more than 95% of software spend comes from enterprises and government institutions, agent usage and spending on the enterprise side are very likely to vastly exceed the personal market.

We are already witnessing this change. Klarna replaced Salesforce with its in-house AI system, saving roughly $2 million. ZoomInfo built AI agents to replace its transaction approval department, saving over $1 million per year. These are early examples where a single workflow being agentified saves millions of costs. Every company has hundreds of such workflows across sales, finance, legal, operations, and research & development. Once intelligent agents are deployed across the entire company, the total spending involved will be staggering.

Anyone can become a merchant

As code agents greatly reduce development costs, the entry barrier for internet merchants is approaching zero. A wedding planner who’s good at venue selection can package the best workflow and sell it. An independent developer in Lagos can develop a vertical-domain API and start earning income within hours from agents around the world. You only need to have domain expertise—generate an API interface through prompts—and you can start collecting payments.

But what happens if agents start selling services to other agents?

Let’s assume Joe, from earlier, wants to enter a new vertical: mid-sized healthcare companies in the US Midwest with legacy payment infrastructure. If his agent reasoned through everything from scratch, token costs would add up quickly:

  • Filter 200 companies matching a specific profile (inference + API calls): about 500k tokens

  • Enrich each lead’s information (tech stack, funding, hiring data): 200 leads × about 5,000 tokens = 1M tokens

  • Lock in decision-makers of core customers: about 200k tokens

  • Score intent signals (hiring cadence, contract cycles): about 300k tokens

  • Research each decision-maker’s background: 20 leads × about 10k tokens = 200k tokens

  • Write personalized outreach copy: 20 leads × about 3,000 tokens = 60k tokens

Total: about 2.3 million tokens. Using a frontier model like Opus 4.6 as the basis, the cost is between $8 and $15.
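As a sanity check on the tally above (token counts taken from the list; the per-million price range is simply the article’s $8–$15 total divided back out):

```python
# Token budget for reasoning through the sales workflow from scratch.
steps = {
    "filter 200 companies": 500_000,
    "enrich 200 leads": 200 * 5_000,
    "lock in decision-makers": 200_000,
    "score intent signals": 300_000,
    "research 20 decision-makers": 20 * 10_000,
    "write 20 outreach drafts": 20 * 3_000,
}
total_tokens = sum(steps.values())
print(total_tokens)  # 2260000, i.e. ~2.3M

# The article's $8–$15 total implies roughly $3.50–$6.50 per million tokens.
for price_per_m in (3.50, 6.50):
    print(f"${total_tokens / 1e6 * price_per_m:.2f}")
```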

Wait—didn’t Joe’s sales sub-agent do something similar before for only a few cents?

That’s right. Because most steps are already handled by other agents. Lead enrichment, intent scoring, and scheduling all have packaged interfaces on the open market—priced at fractions of a cent.

This model creates an entirely new commercial scenario. The supply side of the market grows in both directions: humans build services, and agents also build services. A high-token-cost problem solved by one agent can become a cheap tool that all subsequent agents can use. In a world like this, agents can distill their experience into workflows and sell them to other agents, thereby subsidizing their own operating costs.

Every paradigm shift creates new merchants. Shopify empowered e-commerce sellers, Stripe empowered online businesses—and the machine economy will empower impromptu developers and autonomous intelligent agents.

A reality check

So, how far are we from truly commercialized agent-to-agent transactions?

My Artemis team has been tracking two major directions in agent payment protocols: Coinbase’s open x402 protocol, and the Machine Payment Protocol (MPP) jointly launched by Stripe and Tempo. In simple terms, the two share the same goal: let users or agents pay for any network service (for example data, web crawling, model inference, or other API services) within a single network request—eliminating tedious steps like account registration, API keys, and billing settlement.

For now, everything is still at an early stage.

The x402 protocol’s transaction volume at the end of 2025 was artificially inflated by meme-coin hype and volume generated purely to climb leaderboards. The chart above shows the “real” transaction activity after filtering out fake transactions with proprietary algorithms. Once you remove that noise, it becomes clear the agent economy hasn’t truly arrived yet. Right now, most activity is developers testing paid APIs and AI tools—not genuine agent-economy entities transacting.

Before this model truly takes off, there are two core problems that need solving:

  1. The supply side isn’t formed yet: there’s a serious shortage of practical API interfaces that can generate genuine willingness to pay from agents.

  2. Lack of a mature discovery and aggregation layer: even if high-value interfaces exist, agents currently have no reliable way to find them.

Since the whole ecosystem is still developing, it’s too early to use transaction volume as the main measurement indicator. A more reasonable observation metric is the growth of the supply side—namely, the number of merchants providing services to agents. We’ll refer to these merchants as service providers.

The chart above shows the cumulative change over time in the number of service providers (sellers) that meet the standard. A seller must satisfy: completing more than two “real” transactions, and having at least two independent buyers. In October last year, the number was still below 100; now it has exceeded 4,000. I expect this growth rate will accelerate, mainly driven by three trends:
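The qualification criteria above (more than two real transactions, at least two independent buyers) are easy to express in code. A minimal sketch, assuming a hypothetical per-seller transaction log:

```python
def qualifies(transactions):
    """transactions: list of (buyer_id, is_real) tuples for one seller.

    A seller qualifies with more than two real transactions
    from at least two independent buyers.
    """
    real_buyers = [buyer for buyer, is_real in transactions if is_real]
    return len(real_buyers) > 2 and len(set(real_buyers)) >= 2

# Three real transactions from two distinct buyers: qualifies.
print(qualifies([("a", True), ("a", True), ("b", True)]))   # True
# Plenty of volume, but effectively a single real buyer: does not qualify.
print(qualifies([("a", True)] * 10 + [("b", False)]))       # False
```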

  1. Artificial intelligence is lowering the barrier to creating digital products (as described earlier), meaning more people and AI agents will become merchants.

  2. New services will be designed with “agent-first” as the guiding principle. Agents are becoming the core customers, and the product forms built for them will be completely different: use APIs instead of webpages; use instant access instead of registration flows; use pay-as-you-go instead of subscription models.

  3. Existing service providers will be forced to transform. As more users interact through AI interfaces rather than manually browsing webpages, ad-dependent business models will completely fail—because there won’t be human user attention left to monetize. Companies will have no choice but to charge directly for content and services.

These forces will create a positive feedback loop where supply and demand amplify each other, ultimately igniting the entire agent economy.

Industry landscape

The architecture of the agent transaction ecosystem is rapidly taking shape. Large numbers of startups are popping up like mushrooms after rain, each focusing on filling a specific gap within the architecture; at the same time, fast-growing fintech and SaaS companies are also transitioning toward native agent transactions. Over the past twelve months, almost all major payment giants and AI labs have launched or announced protocols related to agent transactions.

We’ve mapped more than 170 companies spanning five major layers: interaction interfaces, intelligent agents, account systems, payment infrastructure, and AI engines. Here we’ve condensed it to about 80 core institutions:

We break it down layer by layer from top to bottom.

Interface layer

The interface layer is closest to users. It’s responsible for routing user intent (requests) to the tools or services required (supply). Whoever can define how intelligent agents discover, evaluate, and select services will have enormous dominance over all the lower layers. In this layer, we’ll focus on the two most important categories:

User interface

This is the entry point where most people directly interact with intelligent agents. Apple, Google, OpenAI, Anthropic, xAI, and Perplexity are all building these kinds of interaction interfaces, and their forms are rapidly moving beyond just a chat mode. New formats like voice assistants, desktop assistants, embedded copilots, and browser agents keep emerging—closer to real usage scenarios. The platform that becomes the default AI interface for users will be the starting point for every transaction initiated by agents. The winner in this track will gain an additional huge advantage.

AI labs have long crawled and trained on the entire internet’s data. Now, the remaining best training data is human guidance feedback. Every time you accept or reject a response, make corrections, or provide preference information to Claude or ChatGPT, the interface you use captures that data—for sale or for model training. Controlling the interface means controlling the feedback loop that can optimize user experience and even the model itself. This is also why Anthropic launched Claude Code, why Google acquired Windsurf, and why OpenAI tried to acquire Cursor. Once your agent accumulates contextual information about your preferences, workflows, and commonly used tools, your migration cost becomes extremely high.

Service discovery

When Joe’s agent needs a lead-enrichment interface or a satellite data provider, how does it find the right service? This may be the biggest unresolved challenge in the entire ecosystem architecture. Most existing solutions are lists of tools hard-coded in advance or curated service marketplaces. Major platforms are building their own systems: OpenAI and Stripe have launched ACP, Google and Shopify have launched UCP, and Visa has launched TAP. Fundamentally, these are merchant directories that only work if both the platform and the merchants actively integrate. This model performs well in typical scenarios, but as the barrier to creating and selling digital services drops dramatically, a large number of niche, highly customized applications will emerge, and the curated model won’t meet these long-tail needs.

Companies represented by Coinbase, Merit Systems, Orthogonal, and Sapiom are building open alternatives. They create aggregators and underlying infrastructure so that agents can autonomously discover and pay for services at runtime, without needing prior integration or business partnerships. As the supply side (i.e., network resources) grows exponentially, the difficulty of solving this problem becomes extremely high. But whoever can crack ranking and recommendation systems—so agents match the right services at the right time—will gain tremendous influence in the industry.

Agent transactions will ultimately move toward curated closed models or open-ecosystem models, and how this landscape determines value allocation—that is one of the core debates in this field. We’ll explore this topic in depth later.

Intelligent agents and account layer

To accomplish tasks for us, intelligence alone is far from enough. Joe’s sales sub-agent completed an entire workflow—screening 200 leads, enriching information, booking three meetings—while Joe never had to configure tools, manage API keys, or approve each step. Most of the infrastructure that makes this possible is invisible to end users. But without it, agents are just large language models with no ability to execute. Below is an overview of the core components required:

Tools and standards

These protocols and frameworks give intelligent agents the ability to interact with the external world. MCP (Model Context Protocol, initiated by Anthropic and now managed by the Linux Foundation) lets agents connect to external data and tools: calling APIs they have never seen before, reading databases, or invoking a service on the spot. A2A (proposed by Google) defines how agents built on different platforms discover and collaborate with each other. LangChain and the frameworks released by Nvidia and Cloudflare give developers foundational modules to build and deploy agents on top of these protocols. OpenClaw, recently acquired by OpenAI, integrates context management and tool calling into a single local-first framework, drastically lowering the difficulty for developers to build agents that can autonomously discover and pay for services.

The core question in this domain is: will these standards ultimately unify, or will they fragment? Can commercial frameworks built on these standards capture value before the tools become commoditized?

Identity authentication

After agents can communicate, trust must be established. Before agents can transact or sell services, they must prove who authorized them and what permissions they hold, and maintain an audit trail that other agents can verify.

Today there are multiple technical paths, including: biometric identity verification (Worldcoin, Civic), on-chain agent reputation systems (ERC-8004), and verifiable credentials (Dock, Reclaim).

This domain has wide design space—and extremely high risk. How much can your agent spend at most before it receives your approval? Can it sign contracts on your behalf? Can it delegate permissions to sub-agents? These kinds of rules and security boundaries will most likely be finalized in the account layer.
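Questions like these suggest a spending policy enforced at the account layer. A minimal sketch of what such a policy object might look like (field names and limits are hypothetical, not from any real product):

```python
from dataclasses import dataclass

@dataclass
class SpendPolicy:
    per_tx_limit: float   # max USD per transaction without human approval
    daily_limit: float    # rolling daily cap across all transactions
    can_delegate: bool    # may this agent spawn paying sub-agents?
    spent_today: float = 0.0

    def authorize(self, amount: float) -> bool:
        """Approve a payment only if it fits both limits; track running spend."""
        if amount > self.per_tx_limit:
            return False  # needs explicit human approval
        if self.spent_today + amount > self.daily_limit:
            return False  # daily budget exhausted
        self.spent_today += amount
        return True

policy = SpendPolicy(per_tx_limit=0.50, daily_limit=5.00, can_delegate=True)
print(policy.authorize(0.24))  # True  — a small overnight data pull
print(policy.authorize(8.00))  # False — escalate to the human
```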

Wallet

Obviously, for agents to make payments, they must have wallets. Many vendors—including Coinbase, Safe, MetaMask, Phantom, MoonPay, and Privy—are building in this space, offering programmatic account access and creation, permission delegation, per-transaction spending limits, recipient whitelists, and multi-chain operation without requiring users to manually confirm every operation. This is one of the most fiercely competitive tracks in the entire ecosystem—and it raises a key question: where exactly is the moat? Will this space ultimately become commoditized?

Payment layer

The payment layer sits deeper in the overall architecture and should be invisible to end users, yet in the machine economy every unit of capital will flow through it. When Joe’s agent pays $0.24 overnight to pull data from 40 service providers, no one has to choose a card network, a currency, or a settlement chain for each transaction.

The core challenge is that traditional payment rails were designed for humans to click a “buy” button, not to accommodate agent API calls made thousands of times per minute with per-call amounts under a cent. Each card-network transaction has a fixed cost of about $0.03–$0.04, plus a 2.3%–2.9% fee. This works for a $400 hotel order, but it’s completely unsuitable for new multi-step agent transactions.
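The mismatch is easy to quantify. Taking the midpoints of the figures above ($0.035 fixed, 2.6% rate):

```python
def card_fee(amount, fixed=0.035, rate=0.026):
    """Approximate card-network fee: fixed per-transaction cost plus a rate."""
    return fixed + amount * rate

hotel = 400.00     # a typical human-scale purchase
api_call = 0.002   # a fifth-of-a-cent agent request

# On the hotel, the fixed cost is negligible: the fee stays near the rate.
print(f"{card_fee(hotel) / hotel:.1%}")
# On the micro API call, the fee is many multiples of the payment itself.
print(f"{card_fee(api_call) / api_call:.0%}")
```

The fixed cost alone is roughly 17x the value of the API call, which is why sub-cent agent payments need different rails.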

This has given rise to a new set of protocols and monetary systems specifically built for agent transactions, while legacy giants are also modifying existing infrastructure to adapt to these needs.

Key points are as follows:

Payment rails

These protocols and standards define how intelligent agents initiate, route, and complete payment settlement. At present, two main technical routes have emerged:

  1. x402 (Coinbase/Cloudflare) and MPP (Stripe/Tempo) are designed specifically for machine-native transactions: agents call interfaces, fetch quotes, sign payments, and receive data—all completed within a single HTTP request. Settlement uses stablecoins, and the cost per transaction is only a fraction of a cent.

  2. ACP (OpenAI/Stripe), AP2 (Google/PayPal), and Visa’s TAP take a different approach: they adapt existing card-payment infrastructure to agent scenarios. These solutions suit high-value transactions, where buyer protection and merchant acceptance coverage matter more than settlement speed and cost.
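The single-request flow described in route 1 can be simulated end to end. This is a toy sketch of the 402 challenge-and-retry shape; the header and field names follow x402’s general pattern but are simplified, not the exact specification:

```python
PRICE = "0.001"  # USD, quoted in stablecoin

def server(headers):
    """A paid endpoint: challenge unpaid requests with 402, serve paid ones."""
    payment = headers.get("X-PAYMENT")
    if payment is None:
        # No payment attached: respond 402 with the accepted payment terms.
        return 402, {"accepts": [{"asset": "USDC", "amount": PRICE}]}
    # A real facilitator would verify the signature and settle on-chain here.
    return 200, {"data": "satellite-foot-traffic.csv"}

def agent_fetch():
    status, body = server({})                  # 1. bare request
    if status == 402:                          # 2. read the quote
        quote = body["accepts"][0]
        signed = f"signed:{quote['asset']}:{quote['amount']}"  # 3. sign payment
        status, body = server({"X-PAYMENT": signed})           # 4. retry once
    return status, body

print(agent_fetch())  # (200, {'data': 'satellite-foot-traffic.csv'})
```

The whole exchange is one challenge and one retry, with no account, API key, or billing relationship in between.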

Stablecoins and settlement

Intelligent agents need programmable, fast, low-cost, globally usable money. Stablecoins perfectly match these requirements, making them the natural choice for x402 and MPP transactions. At the same time, card payment rails can still provide buyer protection, and merchant usage habits are mature—important for high-value transactions. Underlying public chains (such as Base, Solana, Tempo) introduce another key question: which chains can support the throughput, transaction finality, and cost structure required for large-scale agent-level transactions?

Service providers

These organizations act as intermediaries between intelligent agents and merchants, handling complex elements such as compliance review, merchant onboarding, and permission authentication. Coinbase, Stripe, and PayPal are expanding their existing ecosystems to support agent transactions, betting that their merchant networks and compliance infrastructure form a competitive advantage. Others like Sponge and Sapiom attack the cold-start problem from the emerging-merchant side, making it easy for any API-based business to start accepting agent payments. As payment rails, protocols, and merchants multiply, these intermediaries could become the connective tissue that keeps the whole system from fragmenting.

AI engine layer

This layer needs little introduction. All agent interactions, reasoning steps, and tool calls are driven by it. But the business model changes in this layer are happening much faster than in other parts of the architecture, and where value ultimately flows may not be as clear as it seems on the surface. We focus on two categories in particular:

Compute and custody

Every time Joe’s intelligent agent performs inference for a task, calls tools, or creates sub-agents, it consumes compute. But model inference is only part of the picture. With the explosive growth of low-code, rapidly built applications and agent-built services, many new interfaces are appearing—and all of them need hosting. As of May 2025, the number of accessible webpages had grown 45% in just two years, and as code agents make launching new services trivially easy, this growth will only accelerate. Compute demand is thus growing from both ends: more agents processing more tasks on one side, and more services launching to meet their needs on the other.

Hyper-scale cloud providers (AWS, Google Cloud, Nvidia) are obviously core participants. Among them, AWS and Google Cloud are continually simplifying deployment processes for agent backends and APIs on their infrastructure. Cloudflare focuses on edge computing and provides low-latency serverless compute for agent-oriented services. Decentralized compute platforms like Akash, Bittensor, and Nous satisfy excess compute demand by integrating global GPU resources and selling them at very low prices.

Base models

Base models are the “brain” of the entire system. Anthropic, OpenAI, Google, and Meta, as frontier labs, keep pushing out the capability boundaries of intelligent agents, while the cost of running these models drops rapidly. At the end of 2022, running GPT-4-level models cost about $20 per million tokens; by the beginning of 2026, models with comparable performance had dropped to around $0.05 per million tokens—a 400x drop in a little over three years. Hardware upgrades, vendor competition, and optimization techniques like prompt caching and batching all help drive down inference costs. At the same time, as reasoning is distilled into smaller open-weight models that are extremely cheap to run, the cost of building intelligence falls further. In some benchmarks, the performance gap between open-weight and closed models has narrowed to just 1.7%.

This is a major positive for the machine economy.

Cheaper intelligence means cheaper agents, allowing a 24-year-old independent founder in Vermont to easily afford operating costs—thereby further increasing the transaction activity across upper layers of the ecosystem. If base models end up stuck in a price competition like today’s cloud service providers, value might ultimately concentrate in the upstream and downstream layers of the model, rather than in the model itself.

Who will be the winner?

By 2030, most of your digital interactions won’t need a browser, a search engine, or an app store. You only need to state your requirements, and intelligent agents will handle everything end-to-end: find suitable services, negotiate terms, complete payments, and deliver the final results. The internet will look radically different.

You can think of it as search engine optimization for agents. There will be more and more API interfaces, and fewer and fewer human-facing interactive interfaces.

In a world like this, who will capture value?

Sam Laggdale from Merit Systems wrote an article comparing today’s agent transaction ecosystem to the early internet. He argues that the curated agent service marketplaces built by major platforms (ACP, UCP, TAP) are retracing the path of 1990s America Online (AOL)—a polished experience in a closed system, with the core limitation that every service provider must pass manual selection and review. Open protocols like x402 and MPP may be rougher, but they are permissionless: anyone can build interfaces, without business teams or legal reviews, and earn revenue through agents. In the 1990s, the walled gardens offered a better product experience, but the open internet had unlimited possibilities.

In the end, the open internet wins.

The same logic is repeating. ACP, UCP, and TAP will connect with top AI labs and serve mainstream scenarios well, but agents confined to them can access only pre-vetted service-provider directories, meaning they can only complete tasks the platform predefines. Agents that can connect to the entire open-protocol ecosystem will have much broader capability boundaries.

Remember: the most vibrant part of the internet today is precisely the long tail of open websites that HTTP enabled.

We must humbly admit that we cannot imagine the full shape of an open agent internet. Just as in 1995 nobody could predict the emergence of ride-hailing or social media, once we provide the tools needed for agents, we also cannot predict what they will create, and which services they will pay for.

As we discussed earlier, base models are quickly moving toward commoditization, and value may shift to other layers in the technical architecture. Development tools, wallets, and identity infrastructure are crucial—but as standards converge, these areas will likely commoditize as well. Therefore, I believe value will concentrate in three areas: interface layers, payments, and compute.

Interface layer

The interface layer determines spending caps, approval workflows, and trust delegation mechanisms. A platform that can create the most personalized experience for users will carry the most transaction traffic.

Apple is the most underestimated player in this area. Its devices are deeply embedded in people’s daily lives, and user migration costs are extremely high. If Siri evolves into a mature agent interaction entry point, Apple doesn’t need to build the top-tier model to control the starting point for billions of transactions. They only need to maintain the best interaction entry point.

Google’s transition is even harder. Moving from humans manually browsing to agent-driven intelligent filtering will erode its core advertising revenue. But Google has advantages other companies can’t match: it has accumulated decades of personal data across search, email, calendars, maps, and documents. It also needs to consider enterprise-side migration costs—Google Workspace is already embedded in millions of companies, and employees’ email, files, and workflows run on Google infrastructure. If there’s any company that can build the most personalized agents for both consumers and enterprises, it’s Google. The question is whether it can monetize agent services as efficiently as it monetizes search traffic.

Merit Systems is my dark horse. They are building agent-discovery infrastructure for the open agent economy (AgentCash, x402 scanning, MPP scanning) while also developing consumer-side interfaces (Poncho). The core logic: whoever controls agents’ service-discovery channels and sits in the capital-flow layer takes the position Google held in the early internet. It’s an ambitious bet, but if open agent transactions beat the curated closed model, Merit becomes the best-positioned aggregation layer. Right now it’s still early, much like when Google was competing with AOL’s closed ecosystem, which at its peak was worth roughly $350 billion in today’s market-cap-adjusted terms.

Payments

Whoever controls the flow of funds will take a cut from every transaction. I’m most confident about the prospects at this layer because its scale will grow in lockstep with transaction volume.

Stripe and Tempo are best positioned in machine-native payments. Stripe already has a mature developer ecosystem and a massive merchant network. Tempo, meanwhile, was built specifically for the machine economy’s enormous transaction volumes: roughly 500-millisecond transaction finality, streaming payment rails, native support for both bank cards and stablecoins, gas fees payable in dollars (no token-volatility risk), and sponsored transactions where the server covers fees. If MPP becomes the default machine-native payment rail, Stripe and Tempo will take a cut of every agent transaction.
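To make “streaming payments” concrete, here is a minimal sketch of how a streaming rail might meter an agent’s usage, accruing value continuously and settling on close. The class and field names are illustrative assumptions, not the actual Tempo or MPP API.

```python
from dataclasses import dataclass


@dataclass
class PaymentStream:
    """Accrues value at a fixed per-second rate, settling on close.

    Hypothetical model of a streaming payment rail; names are
    illustrative, not drawn from any real SDK.
    """
    rate_per_second: float  # e.g. USDC per second of service
    elapsed: float = 0.0

    def tick(self, seconds: float) -> None:
        """Record additional seconds of consumed service."""
        self.elapsed += seconds

    def accrued(self) -> float:
        # Round to 6 decimals, USDC's smallest on-chain unit.
        return round(self.rate_per_second * self.elapsed, 6)


# An agent consumes 90 seconds of a data feed priced at $0.0001/s.
stream = PaymentStream(rate_per_second=0.0001)
stream.tick(90)
print(stream.accrued())  # 0.009
```

The point of the model: neither side needs to trust the other for more than one tick’s worth of value, which is what makes sub-cent machine-to-machine commerce workable.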

Circle will grow alongside the agent economy. I firmly believe stablecoins will become the machine economy’s settlement layer, and Circle earns yield on its reserves, effectively taking a share of every dollar sitting in agents’ wallets. USDC is the most widely accepted stablecoin across exchanges, wallets, public chains, and payment protocols; new developers will reach for it first, deepening its ecosystem integration and raising the bar for competitors.

Visa will adapt. Remember how Joe topped up with a credit card through Apple Pay, and the underlying layer converted it into stablecoins automatically, so he never saw the wallet and never thought about the blockchain? That’s the future default: consumers keep using familiar bank cards while settlement happens in stablecoins underneath. As payment rails upgrade, Visa will rely on the brand trust it holds with consumers and merchants to keep its foothold.

Compute and custody

As the number of agents grows, inference demand increases. As more ad-hoc services are built, hosting demand expands. No matter which model, protocol, or interface becomes mainstream, compute providers will benefit. AWS and Cloudflare are the two most advantageous companies in this space, for similar reasons.

First, they already support most of the internet’s traffic. AWS holds about 30% of cloud infrastructure share across 37 regions worldwide. Cloudflare provides security and performance services for over 20% of websites, meaning all requests to those sites flow through its network. When new agent-facing interfaces explode in quantity, developers will default to the deployment platforms they’re familiar with.

Second, they’re building monetization infrastructure for the next generation of the internet. As advertising models fade and paid-access models rise, both companies are natively supporting this transition. Cloudflare has launched a paid crawling service, allowing any website in its network to charge AI crawlers via x402 (Stack Overflow is already using it). And AWS is a founding member of the x402 Foundation and has released an open-source serverless x402 reference architecture. Any service running on either of these two platforms can easily enable native agent monetization.
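The paid-crawling pattern above rests on the HTTP 402 challenge/retry shape that x402 revives. Below is a hedged, self-contained sketch of that flow as pure functions (no network), with hypothetical header names and payload fields; the real x402 schema and verification steps differ and involve an on-chain settlement facilitator.

```python
import json

# Illustrative constants, not real addresses or prices.
PRICE_USDC = "0.001"
PAY_TO = "0xServiceWallet"


def handle_request(headers: dict) -> tuple[int, dict, str]:
    """Server side: demand payment, or serve content once paid."""
    payment = headers.get("X-PAYMENT")
    if payment is None:
        challenge = {"asset": "USDC", "amount": PRICE_USDC, "payTo": PAY_TO}
        return 402, {"X-PAYMENT-REQUIRED": json.dumps(challenge)}, ""
    # A real facilitator would verify the signed payment and settle
    # it on-chain here; this sketch accepts any well-formed payload.
    receipt = json.loads(payment)
    assert receipt["amount"] == PRICE_USDC
    return 200, {}, "article body for the crawler"


# Client side: the first request hits the paywall,
# the retry carries the payment header.
status, hdrs, _ = handle_request({})
challenge = json.loads(hdrs["X-PAYMENT-REQUIRED"])
paid_headers = {"X-PAYMENT": json.dumps({"amount": challenge["amount"]})}
status2, _, body = handle_request(paid_headers)
print(status, status2, body)  # 402 200 article body for the crawler
```

The design choice worth noticing: the challenge travels in machine-readable JSON, so a crawler agent can pay and retry with no human in the loop, which is exactly what lets long-tail sites monetize agent traffic without ad markets.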

Identity authentication

I’m pessimistic about companies like Worldcoin. The systems they build require human verification for every interaction. This extreme “ends justify the means” idea assumes people will care whether their online counterpart is human or an agent, but in practice we have already grown accustomed to not knowing. In my view, the more likely future is that most web traffic gets filtered on the basis of micropayments, not human-identity credentials.

Paid access will be more practical than “prove you’re human.”

Identity systems will matter only in certain high-risk interactions; in most agent transactions, the (small) payment itself is the trust credential.

Conclusion

When Joe wakes up, he won’t think about payment rails or agent identity protocols. He just checks his phone and learns that the agents have completed the trades, booked the meetings, and found cheaper servers. Every technical layer discussed in this article has been abstracted away; he never needs to think about any of it.

We’re still on the way to this future. The relevant protocols are live but not widely adopted; the supply side is growing but still thin; service discovery remains unsolved; and the identity layer is highly fragmented. Most transactions today are developers testing, not real agent commerce. But the ecosystem puzzle is coming together faster than the metrics show. Those bearish on early infrastructure are watching only today’s discouraging curve; what I’m thinking about is what the picture looks like once everyone has one, or a fleet of, truly economically capable agents.

If you haven’t acted yet, it’s time to transition to the agent-economy model.
