Nvidia Rubin chip performance skyrockets 5 times, with lower costs, and will be adopted by cloud giants like Microsoft in the second half of the year

NVIDIA announced today the latest progress on the highly anticipated Rubin data center chip: six chips have been delivered from manufacturing partners and have passed key tests. This indicates that the new product is progressing as planned and is expected to launch in 2026, with Microsoft and other major cloud service providers leading adoption in the second half of that year. The figures released show that Rubin achieves dual breakthroughs in performance and cost, which is significant for the upgrade of AI infrastructure.
Rubin vs Blackwell: Dual Advantages in Performance and Cost
The core data released by NVIDIA demonstrates significant improvements of Rubin compared to the previous Blackwell chips:
- Training performance: 3.5x increase
- AI software runtime speed: 5x increase
- System cost: lower (using fewer components)
This comparison is quite interesting. Usually, performance improvements are accompanied by increased costs, but Rubin breaks this norm — it not only significantly boosts performance but also reduces overall system costs by optimizing architectural design and using fewer components to achieve the same results. What does this mean for cloud service providers? The improved cost efficiency directly translates into stronger competitiveness.
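One way to see why "faster and cheaper" compounds is to normalize throughput per unit of system cost. The sketch below uses the announced 3.5x training figure, but the 10% system-cost reduction is an invented placeholder, since NVIDIA has not published pricing:

```python
# Illustrative only: cost efficiency = relative throughput / relative system cost,
# normalized so the previous generation (Blackwell) = 1.0.
# The 3.5x comes from NVIDIA's announcement; the 0.9 cost factor is a
# hypothetical assumption, not a published figure.

def cost_efficiency(relative_throughput: float, relative_cost: float) -> float:
    """Throughput delivered per unit of system cost, baseline = 1.0."""
    return relative_throughput / relative_cost

blackwell = cost_efficiency(1.0, 1.0)   # baseline generation
rubin = cost_efficiency(3.5, 0.9)       # 3.5x training perf, assumed 10% cheaper system

print(f"Blackwell baseline: {blackwell:.2f}")  # 1.00
print(f"Rubin (assumed):    {rubin:.2f}")      # 3.89
```

Under these assumptions, even a modest cost reduction pushes throughput-per-dollar well past the raw 3.5x performance gain, which is the dynamic cloud providers care about.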
Why This Release Is Critical
From the details in the news, NVIDIA emphasizes that the Rubin chips “have been delivered from manufacturing partners and have passed several key tests.” This is not just marketing talk but real technical validation. Compared to mere product planning or renderings, the delivered silicon and successful tests indicate that the product has entered the customer deployment stage — a crucial step from concept to commercial use.
As one of the world’s largest cloud service providers, Microsoft plans to adopt Rubin in the second half of the year. What does this reflect? It confirms the product’s maturity and indicates the continued growth in demand for AI computing power. Once major cloud providers like Microsoft deploy at scale, industry-wide adoption will follow rapidly.
Broader Industry Context
According to related news, NVIDIA’s market value has surpassed $5 trillion, chip stocks are rallying across global markets, and the semiconductor sector is leading US stock market gains. These data points reflect a global consensus: computing power is being treated as capital, and building out AI infrastructure has become a worldwide priority.
The launch of Rubin is precisely at this accelerating stage of the trend. As generative AI applications continue to land, the demand for training and inference computing power is growing exponentially. More efficient and cheaper chips mean more companies can deploy AI systems, which further expands the demand for computing power — creating a positive feedback loop.
Short-term Outlook
According to NVIDIA’s schedule, Rubin will be launched in 2026, with Microsoft and other cloud providers starting deployment in the second half of the year. This means:
First half of 2026: products will be gradually delivered to early customers for large-scale testing and optimization
Second half of 2026: leading cloud providers will begin large-scale deployment, generating visible revenue contributions
Late 2026 through 2027: Rubin will gradually become the standard for next-generation AI infrastructure
Summary
The on-schedule progress of the Rubin chip reflects NVIDIA’s steady cadence of AI chip iteration. A performance boost of 3.5x to 5x combined with cost advantages makes it highly attractive to cloud service providers. Large-scale adoption by giants like Microsoft suggests the AI computing market is entering a new expansion phase. For the industry as a whole, this is not just the release of a new chip but a fresh starting point for upgrading AI infrastructure.