CITIC Securities: Recommend paying attention to leading domestic AI PCB/Copper Clad Laminate (CCL) manufacturers, storage vendors, and others.


A CITIC Securities research report states that NVIDIA indicated at GTC 2026 that demand for AI computing power will continue to grow strongly in 2027. CITIC Securities believes that the addition of the LPU and midplane in the Rubin/Rubin Ultra architecture, together with upgrades in specifications and usage, will further expand demand, with AI PCBs a major beneficiary; CPO is expected to land first in Rubin’s scale-out architecture, with scale-up applications anticipated to begin on the 2028 Feynman platform. The firm is optimistic that NVIDIA’s GTC 2026 conference will further strengthen market confidence in the AI industry’s sustained growth and in incremental demand materializing, and recommends paying attention to leading domestic AI PCB/copper-clad laminate (CCL) manufacturers and storage manufacturers.

The full text is as follows

Electronics | Review of NVIDIA GTC 2026: Optoelectronics Progress

NVIDIA stated at GTC 2026 that demand for AI computing power will continue to grow strongly in 2027. We believe that the addition of the LPU and midplane in the Rubin/Rubin Ultra architecture, together with upgrades in specifications and usage, will further expand demand, with AI PCBs a major beneficiary; CPO is expected to land first in Rubin’s scale-out architecture, with scale-up applications anticipated to begin on the 2028 Feynman platform. We are optimistic that NVIDIA’s GTC 2026 conference will further strengthen market confidence in the AI industry’s sustained growth and in incremental demand materializing, and we recommend paying attention to leading domestic AI PCB/copper-clad laminate (CCL) manufacturers and storage manufacturers.

▍ NVIDIA expects order demand to grow to $1 trillion by 2027.

On March 16, in his keynote at NVIDIA GTC 2026, CEO Jensen Huang projected that orders for Blackwell and Rubin would reach $500 billion in 2026, and that order demand would grow to $1 trillion by 2027. Currently, 60% of NVIDIA’s business comes from the top five global hyperscale cloud service providers, with the remaining 40% spread across regional clouds, sovereign clouds, enterprises, industrial applications, robotics, edge computing, and other fields. According to the latest financial reports of the four major North American CSPs, overall business performance of the North American tech giants in Q4 2025 again beat market expectations: cloud services revenue growth accelerated, the supply-demand balance remained tight, and rising storage chip prices drove 2026 capital expenditure guidance significantly above expectations. We estimate that the four major North American CSPs’ capital expenditure (CAPEX) will increase 58% year-on-year in 2026, with AI CAPEX up 117% year-on-year, which is expected to support the performance of AI computing chips.

▍ NVIDIA launches five rack-level Vera Rubin computing platforms.

NVIDIA has introduced its new Vera Rubin platform/POD, combining five rack-level computing systems into a single AI supercomputer to support efficient inference for agentic AI. Specifically: 1) the Vera Rubin NVL72 rack delivers 3.6 exaflops (FP4) of inference compute, five times that of Blackwell; 2) the Vera CPU rack is mainly used for scheduling and managing agentic workflows; 3) the Groq 3 LPX rack (equipped with 256 Groq 3 LPUs) serves as a token inference accelerator used in conjunction with the Vera Rubin NVL72, relying on its large on-chip static random-access memory (SRAM) for efficient inference; 4) the BlueField-4 STX rack supports the storage required for long-context inference in agentic AI; 5) the Spectrum-X CPO switch rack is used for scale-out and has entered full mass production.

▍ On the PCB side, incremental applications such as orthogonal backplanes, LPX motherboards, and CPU cabinet motherboards have been confirmed.

NVIDIA officially announced the Rubin Ultra NVL144 Kyber rack, in which 144 GPUs form a single NVLink domain, with compute nodes and NVLink switches connected from both sides; the Kyber rack (NVLink 144) is then extended through Oberon copper cables/optics to NVLink 576. According to SemiAnalysis, a PCB orthogonal backplane enables high-density, high-speed signal transmission while reducing signal loss and cable-routing complexity; it uses an ultra-high-layer-count design and is expected to adopt the most advanced M9 materials. We estimate the PCB orthogonal backplane could add over $200 of ASP per GPU. Meanwhile, NVIDIA confirmed that the Groq LPU chip LP30 will be manufactured by Samsung, has already entered mass production, and is expected to ship in the form of LPX cabinets in the third quarter of 2026. NVIDIA suggests that where there is heavy demand for code generation or high-speed token output, 25% of compute can be allocated to Groq, with the remaining 75% staying on Vera Rubin. According to SemiAnalysis, the LPX cabinet motherboard may adopt a 50-plus-layer high-layer-count design with M9 CCL, and we estimate its ASP per GPU could reach several hundred dollars. In addition, NVIDIA confirmed that it will launch CPU cabinets, which will also use PCB motherboards, further expanding the potential growth space for AI PCBs.

▍ The Feynman platform adopts a new chip through deep heterogeneous integration, and the scale-up scheme supports copper cables and CPO.

NVIDIA’s product roadmap shows that the company will launch the Feynman architecture in 2028. This platform will deeply integrate the CPU (Rosa), GPU (Feynman), and LPU (LP40) at the hardware level through heterogeneous integration, and will support both copper-cable and CPO solutions for scale-up interconnects. Specifically: 1) the GPU will use TSMC’s A16 (1.6nm) process node; 2) the Rosa CPU can more efficiently schedule token flow between the GPU, storage, and network, optimizing the handling of highly complex logical decision-making tasks; 3) the LP40 (LPU) aims to address inference latency and the “memory wall” challenge at the microarchitecture level by integrating NVIDIA’s GPUs with Groq technology; 4) on the networking side, the platform uses the Kyber rack, which supports both copper-cable and CPO expansion.

▍ Risk factors:

Macroeconomic fluctuations and geopolitical risks, overseas computing power leaders’ new product releases falling short of expectations, AI market demand growth falling short of expectations, continuous price increases of components such as storage, risks of technological changes and product iterations, policy regulation and data privacy risks, and intensified competition in the PCB industry.

▍ Investment strategy:

With global computing power demand continuing to exceed expectations, the upstream boom and price increases are expected to persist, and price inflation along the computing power chain remains the technology sector’s highest-conviction “boom growth” allocation theme. We are optimistic that NVIDIA’s GTC 2026 conference will further strengthen market confidence in the AI industry’s sustained growth and in incremental demand materializing.

(Source: Securities Times)
