The golden age of AI ASICs has arrived! A wave of AI inference frenzy is sweeping the globe, and Broadcom (AVGO.US) is striking at NVIDIA's heartland with a hundred-billion-dollar blueprint.

Broadcom (AVGO.US), one of the biggest winners of the global AI boom, announced its results for the first quarter of fiscal 2026 (ended February 1), along with guidance for the second quarter, on the morning of March 5, Beijing time. Overall, both the latest figures and management's outlook for the coming fiscal quarter exceeded Wall Street expectations. In particular, the $100 billion AI chip revenue outlook further validates Wall Street's thesis that "the AI craze is still in the early build-out phase of a supply-demand imbalance in computing infrastructure." It also highlights that, with the arrival of the AI inference era, demand for cloud AI inference computing power is surging. Combined with the trend of embedding large AI models into enterprise operations through "micro-training," this allows Broadcom's more cost-effective AI ASIC computing systems to mount a serious challenge to Nvidia's roughly 90% share of the AI chip market.

Broadcom is a core chip supplier to Apple and other major tech companies, a key provider of the high-performance Ethernet switch chips used in large AI data centers worldwide, and a designer of the customized AI chips that are crucial to the AI training and inference workloads of cloud computing giants.

After the exceptionally strong results and outlook were announced, Broadcom's stock surged more than 5% in after-hours trading, lifting AI computing supply chain names such as TSMC and Micron and reviving the market's recently flagging "AI faith." The results show that spending on AI computing infrastructure by tech giants like Google and Meta, and by AI leaders like OpenAI and Anthropic, remains robust, and they largely reflect the explosive growth in computing demand from users of the world's top AI platforms: Gemini, Claude, and ChatGPT. In addition, management announced a new stock repurchase program of up to $10 billion, emphasizing that it will run through the end of the year, a sign that Broadcom's push to capture the unprecedented opportunity in AI computing spending is paying off.

The biggest highlight of the report is CEO Hock Tan's statement that AI chip revenue centered on AI ASICs is expected to exceed $100 billion next year. On the earnings call, Tan said the company expects its AI chip revenue to cross the $100 billion mark next year, marking significant market share and technology gains in a field dominated by Nvidia (NVDA.US), the world's most valuable company and the "superpower" of AI chips.

"We have very clear visibility into reaching this milestone in 2027," he said on a conference call with Wall Street analysts. "We have also secured the chip supply chain needed to achieve this goal."

The company expects AI chip revenue of $10.7 billion for the current quarter; that annualizes to roughly $43 billion, so a $100 billion annual run rate would require global AI computing demand to more than double. Under Hock Tan's leadership, Broadcom has increasingly tied its fate to the unprecedented AI infrastructure boom. While Nvidia remains the largest supplier of AI chips, specifically the latest core hardware for training and efficiently running large AI models, Broadcom has positioned itself as a more cost-effective and energy-efficient alternative through its custom semiconductor business. Broadcom's $100 billion AI chip revenue target covers both the "AI ASIC computing clusters" that compete fiercely with Nvidia-led AI GPUs and its AI networking products, namely high-performance Ethernet switch chips.

On the latest financial metrics: for the first fiscal quarter ended February 1, Broadcom's total revenue rose 29% year-on-year to $19.3 billion, and adjusted earnings per share came in at $2.05. Both beat analysts' average estimates of roughly $19.2 billion in revenue and about $2.03 per share.

Broadcom said AI-related revenue doubled in the period to $8.4 billion, far exceeding the company's prior expectations. Hock Tan noted in a statement that the growth "was driven by strong demand for customized AI ASIC accelerators and high-performance AI networking equipment." Revenue from semiconductor solutions, which include AI ASICs and smartphone RF chips, reached $12.515 billion in Q1, up a substantial 52% year-on-year.

On the call, Hock Tan said he expects large-scale shipments of the AI ASIC computing chips co-developed with OpenAI to begin next year, at a computing scale expected to exceed 1 gigawatt. He added that demand for Google's TPUs is very strong and will accelerate further into 2027. Broadcom also plans to ship the AI ASIC chips it is developing with Anthropic; the AI application leader, which currently runs on Google's TPUs, is on track for 1 gigawatt of computing capacity this year and more than 3 gigawatts next year.

As for the closely watched outlook, the company expects total revenue of approximately $22 billion for the second fiscal quarter ending May 3, implying year-on-year growth of about 47%, well above Wall Street's average forecast of around $20.5 billion, though a few analysts' projections exceed $22 billion.

Since the beginning of the year, the market has been deeply skeptical of Broadcom and AI computing supply chain leaders like Nvidia, worrying that hundred-billion-dollar levels of AI computing spending may not be sustainable. As of Wednesday's close, Broadcom's stock had fallen 8.3% year-to-date. Investors increasingly fear a major bubble in the unprecedented AI spending; even Nvidia's blowout earnings report last month failed to lift bullish sentiment, and its stock fell sharply after the release. The key doubts are whether the current AI wave will last another decade or two, and whether global AI computing spending, which could reach trillions of dollars before 2030, will ultimately generate revenue that outstrips the spending itself.

TPUs fully unleashed! The golden age of AI ASICs has arrived

In recent years, massive orders for customized AI ASIC chips from AI leaders such as Google, OpenAI, and Anthropic have driven Broadcom's market value sharply higher; it now exceeds $1.5 trillion. Growing interest among global enterprises in deploying Google TPU (Tensor Processing Unit) AI computing clusters has also brightened Broadcom's prospects, since the company has long co-developed the core TPU chips with Google. Meanwhile, Broadcom has just shipped the first batch of its next-generation computing processors and said roughly six other hyperscale customers will adopt this generation of ASIC products this year.

Beyond its customized AI ASIC chip business, Broadcom is also continuously upgrading its high-performance networking equipment to better connect the massive computing resources that AI models require. Through acquisitions, Hock Tan has also built a large software business that benefits from the cloud AI training and inference boom.

Broadcom's strong earnings report shows that the growth logic of AI ASICs is being rapidly confirmed at the earnings level. The global wave of generative AI has pushed cloud computing and chip giants to accelerate AI chip development, racing to design the fastest, most energy-efficient computing clusters for advanced large AI data centers. Broadcom and its largest competitor, Marvell, focus on leveraging their deep advantages in high-speed interconnect and chip IP to build, together with cloud giants such as Amazon, Google, and Microsoft, AI ASIC computing clusters tailored to the specific needs of each company's AI data centers. This ASIC business has already grown into a very important line for both companies; the TPU computing clusters co-developed by Broadcom and Google are the canonical example of the AI ASIC route.

Undoubtedly, hard constraints in economics and power are pushing Microsoft, Amazon, Google, and Facebook parent Meta to develop their own AI chips for internal cloud systems along the AI ASIC route, with the core objective of improving the cost-effectiveness and energy efficiency of their AI computing clusters.

Building ultra-large AI data centers on the scale of "Stargate" is exorbitantly expensive, so tech giants increasingly demand more economical AI computing systems. Under power constraints, they strive to optimize cost per token and performance per watt, ushering in a prosperous era for the AI ASIC route.

Furthermore, advanced AI GPU clusters such as Nvidia's Blackwell generation have long been supply-constrained and expensive, squeezed by supply chain bottlenecks and delivery schedules. Self-developed AI ASICs therefore provide a "second curve" of capacity, giving cloud providers more leverage in procurement negotiations, product pricing, and cloud service gross margins. On top of that, cloud giants like Google and Microsoft can co-design chip, interconnect, system, compiler/runtime, scheduling, and observability/reliability as one unified stack, improving infrastructure utilization and lowering total cost of ownership (TCO).

The AI training side, nearly monopolized by Nvidia's GPUs, demands greater generality from computing clusters and rapid iteration across the entire system, while the inference side, now that cutting-edge AI is deployed at scale, puts more weight on cost per token, latency, and energy efficiency. Google, for instance, has explicitly positioned Ironwood as a TPU generation "born for the AI inference era," emphasizing cluster performance, energy efficiency, cost-effectiveness, and scalability. Amazon's latest moves, however, show that AI ASICs may also hold strong potential for training large models.

Over the medium to long term, AI ASIC systems will continue to erode Nvidia's monopoly premium and some of its market share, rather than linearly replacing GPU systems. The fundamental reason is that the core competition of the inference era is no longer just peak computing power, but cost per token, power consumption, memory bandwidth utilization, interconnect efficiency, and total cost of ownership after hardware-software co-design. On these metrics, dataflows, compilers, and interconnects tailored to specific workloads give ASICs an inherent cost-effectiveness edge over general-purpose GPUs. The likely future of AI data centers is that GPUs keep dominating cutting-edge training and general cloud computing, while ultra-large-scale internal inference, agent workflows, and fixed high-frequency workloads accelerate their shift to ASICs, ushering data centers into a truly heterogeneous computing era.
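To make the "cost per token / performance per watt" framing concrete, here is a minimal sketch of how such metrics roll up from amortized capital expenditure and electricity cost. Every number in it (cluster price, power draw, throughput, electricity rate) is a hypothetical placeholder chosen for illustration, not a vendor figure.

```python
# Illustrative sketch: how "cost per token" and "tokens per watt" style
# metrics combine amortized capex with power cost. All numbers below are
# hypothetical placeholders, NOT vendor figures.

from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    capex_usd: float           # purchase cost of the cluster (hypothetical)
    lifetime_years: float      # depreciation horizon
    power_kw: float            # average draw at the wall (hypothetical)
    tokens_per_sec: float      # sustained inference throughput (hypothetical)
    power_price_kwh: float = 0.08  # assumed electricity price, USD per kWh

    def cost_per_million_tokens(self) -> float:
        # Spread the purchase price over the cluster's lifetime, per second.
        seconds = self.lifetime_years * 365 * 24 * 3600
        capex_per_sec = self.capex_usd / seconds
        # Electricity cost per second: kW * ($/kWh) / 3600 s.
        power_per_sec = self.power_kw * self.power_price_kwh / 3600
        usd_per_token = (capex_per_sec + power_per_sec) / self.tokens_per_sec
        return usd_per_token * 1e6

    def tokens_per_joule(self) -> float:
        # Throughput divided by wall power in watts = tokens per joule.
        return self.tokens_per_sec / (self.power_kw * 1000)

gpu = Cluster("general-purpose GPU rack", capex_usd=3.0e6,
              lifetime_years=4, power_kw=120, tokens_per_sec=4.0e5)
asic = Cluster("workload-tuned ASIC rack", capex_usd=2.0e6,
               lifetime_years=4, power_kw=90, tokens_per_sec=4.0e5)

for c in (gpu, asic):
    print(f"{c.name}: ${c.cost_per_million_tokens():.3f} per 1M tokens, "
          f"{c.tokens_per_joule():.2f} tokens/joule")
```

Under these made-up assumptions, the workload-tuned rack wins on both metrics even at identical throughput, which is exactly the TCO logic pushing hyperscalers toward ASICs for fixed, high-volume inference workloads.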

Broadcom will lead in AI ASICs! Wall Street sees Broadcom's stock reaching new highs

Amazon AWS has officially positioned its AI ASIC computing clusters, Trainium and Inferentia, as dedicated accelerators for generative AI training and inference, with Trainium2 offering roughly 30%-40% better price-performance than its AI GPU cloud instances; Google also recently stated publicly that Gemini 2.0 is trained and served 100% on TPUs. This indicates that hyperscale cloud companies running core model training and inference on self-developed ASICs is no longer a proof of concept but has entered a replicable, industrialized phase.

In the era of cutting-edge training, the field above all needed generality, software maturity, and rapid adaptation to new model architectures, giving GPUs a natural advantage. But as the industry shifts from "training scarcity" to scaled inference, agentization, long context, and low latency, the core KPIs move from maximum peak computing power to cost per token, throughput per watt, and system-level TCO. This is the fundamental reason hyperscalers (the cloud computing giants) are collectively accelerating ASICs: Google explicitly defines the Ironwood TPU as its best computing cluster for the "inference era," scalable to 9,216 chips; Microsoft positions its newly launched Maia200 AI ASIC squarely as a cloud inference accelerator, claiming 30% better performance per dollar than its current latest-generation hardware; and AWS defines Trainium3 as a chip pursuing "optimal token economics," emphasizing a more than fourfold improvement in energy efficiency. Together, as cloud giants launch an "AI computing cost revolution" to scale up AI ASIC penetration, these moves suggest that market concerns about Nvidia's growth prospects are indeed warranted.

According to a Counterpoint Research report, Broadcom is expected to hold on to its commanding lead among AI data center server ASIC design partners in 2027, with a market share of 60%. Counterpoint also expects AI server ASIC shipments to exceed 15 million units by 2028, surpassing overall shipments of data center AI GPUs.

Counterpoint expects ASIC shipments to more than double by 2027 as Google, Amazon, Apple, Microsoft, ByteDance, and OpenAI accelerate the deployment of massive AI server computing clusters for training and inference workloads. It attributes this rapid growth to demand for Google's TPU infrastructure (supporting the Gemini project), the ongoing expansion of Amazon's Trainium clusters, and the added capacity from Meta's MTIA and Microsoft's Maia chips as those internal product lines scale up.

Wall Street analysts are extremely optimistic about the revenue and profit growth prospects of Broadcom's AI business, with 12-month price targets concentrated between $450 and $535; by comparison, the stock closed at $317.53 on Wednesday. Of the 55 Wall Street analysts covering the stock, 96% rate it the equivalent of "buy," the most optimistic rating, with an average target price of approximately $454.

Wall Street's "long-term bull case" for Broadcom largely rests on three core points:
  1. Explosive growth in the AI computing business: as the most critical technology partner behind Google's TPU AI computing clusters, Broadcom benefits directly from the expanding AI capital expenditures of cloud giants such as Google, Meta, and OpenAI.
  2. An increasingly large backlog of orders.
  3. The stability of its infrastructure software business: the acquisition and integration of VMware Cloud Foundation (VCF) is progressing smoothly, giving Broadcom strong cash flow and a growth engine closely tied to cloud AI training/inference infrastructure software.