According to official announcements from Anthropic and Amazon, the two companies expanded their strategic partnership on April 20: Amazon is adding an investment of up to $25 billion in Anthropic, while Anthropic has committed to spending more than $100 billion on AWS over the next ten years and to obtaining up to 5GW of additional compute capacity for training and deploying Claude models.
This is the second wave of major expansion in Anthropic’s compute arms race, following the April 7 collaboration between Anthropic, Google, and Broadcom that secured 3.5GW of TPU compute, and it is also Amazon’s largest single commitment to an AI company. The deal also locks in Anthropic’s valuation at $380 billion.
Investment structure: $5 billion upfront, $20 billion tied to milestones
Amazon’s investment this time is split into two phases: $5 billion is funded immediately on the day of the announcement, and an additional sum of up to $20 billion will be released in tranches tied to “specific commercial milestones.” Together with the $8 billion previously invested, Amazon’s cumulative investment in Anthropic will reach a ceiling of $33 billion.
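As a quick sanity check on the tranche arithmetic, the figures above can be totaled directly (all dollar amounts are from the announcement; the variable names are ours):

```python
# Amazon's cumulative investment cap in Anthropic, per the announcement (in $B).
prior_investment = 8    # previously invested
upfront = 5             # funded on the day of the announcement
milestone_cap = 20      # released in tranches against commercial milestones

cumulative_cap = prior_investment + upfront + milestone_cap
print(cumulative_cap)   # 33, matching the stated $33 billion upper limit
```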
The $5 billion in this round is priced at Anthropic’s latest valuation of $380 billion, the first time this valuation has been confirmed by a top-tier investor through a new agreement.
Anthropic commits to $100 billion in AWS spending for the next decade
As consideration for the partnership, Anthropic has committed to spending more than $100 billion on AWS over the next ten years. The scope covers current and future generations of Trainium custom AI chips, as well as “tens of millions of cores” of Graviton general-purpose CPUs.
In the announcement, Andy Jassy (Amazon CEO) said: “Anthropic’s commitment to run large language models on AWS Trainium over the next ten years reflects our shared progress on the custom chip path.” Dario Amodei (Anthropic CEO) added: “Users are telling us that Claude is becoming increasingly important to their work, and we must build the infrastructure that can keep up with growth in demand.”
5GW compute roadmap: Trainium2, 3, and 4 all locked in
The agreement covers three generations of chips: Trainium2, Trainium3, and Trainium4, and Anthropic also retains the option to purchase subsequent generations of custom chips. On the schedule, large-scale Trainium2 capacity comes online in Q2 2026, with large-scale Trainium3 capacity rolled out and opened progressively before the end of that year; for the full year, cumulative Trainium2 and Trainium3 capacity will total nearly 1GW.
Project Rainier is the flagship project under the two parties’ existing collaboration; its training cluster currently runs about 500,000 Trainium2 chips and serves as the primary infrastructure for training Claude models.
Revenue from $9 billion to $30 billion: surging Anthropic demand drove the deal
In the announcement, Anthropic made a rare disclosure of its own financials: annualized revenue for the current year has already surpassed $30 billion, more than triple the $9 billion at the end of 2025 within half a year. The number of enterprise customers running Claude on AWS has also exceeded 100,000, and Claude is among the model families with the highest usage on Amazon Bedrock.
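The growth multiple implied by those two figures is easy to verify (both annualized-revenue numbers are from the article; the half-year window is as stated there):

```python
# Annualized revenue growth implied by the disclosed figures (in $B).
revenue_end_2025 = 9    # annualized revenue at the end of 2025
revenue_now = 30        # annualized revenue per this announcement

multiple = revenue_now / revenue_end_2025
print(round(multiple, 2))  # 3.33 — "more than tripled" in roughly half a year
```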
Anthropic also acknowledged that the surge in demand has put pressure on infrastructure during peak hours, affecting availability and performance; this is the direct motivation behind the large-scale compute expansion. The recent token-billing dispute over Claude Opus 4.7 and the adjustments to usage limits likewise trace back to the reality that “infrastructure has become constrained.”
Three-way compute lock-in: AWS, Google, and in-house chips move forward in parallel
Together with the 3.5GW of TPU compute secured through the April 7 collaboration with Google and Broadcom, Anthropic is now locking in both the AWS Trainium and Google TPU custom-chip roadmaps simultaneously, while retaining the long-term option of its own custom accelerators. This contrasts with OpenAI’s path of relying mainly on Microsoft Azure and only recently expanding to AWS.
For Taiwan’s semiconductor supply chain, the large-scale expansion from Trainium2 through Trainium4 means that over the next three to five years, Marvell, TSMC’s advanced packaging, and HBM memory suppliers will continue to receive AWS custom-chip orders.
Compute muscle behind the IPO race
This announcement also reinforces the foundation of Anthropic’s IPO narrative. Recent reports disclosed that OpenAI’s annualized revenue has broken through $25 billion as it prepares for an IPO, with Anthropic trailing at $19 billion. The $30 billion annualized revenue figure in the official announcements indicates that the gap between the two is narrowing further.
For investors, the significance of Amazon’s commitment lies not in the single-transaction amount but in the deep lock-in, such as the “$100 billion in AWS spending over the next ten years,” which gives the market a clear anchor for Anthropic’s compute supply and long-term cost structure. The high-end Mythos model under Project Glasswing and the subsequent training compute for the Claude 4.x series will be supported by this new 5GW of capacity.
This article “Amazon boosts investment in Anthropic by $25 billion: 5GW compute, $100B AWS lock-in” first appeared on 鏈新聞 ABMedia.