NVIDIA (NVDA.US) GTC Conference Preview: Can Its AI Dominance Maintain Its Reign? The Market Focuses on New Strategies in the "Post-Training Era"
The annual NVIDIA (NVDA.US) GTC developer conference kicks off next week, and the event, hailed as the "annual indicator of AI industry trends," has drawn even greater attention and importance this year. When CEO Jensen Huang steps into the packed ice hockey arena next Monday (March 16, local time), global investors will be watching closely to see what cards he plays to respond to increasingly fierce market competition and to solidify NVIDIA’s position as the leader in artificial intelligence (AI) chips.
This four-day GTC conference is not only a stage for NVIDIA to showcase its latest advances in chips, data centers, its CUDA software platform, AI agents, and robotics in the physical AI field, but also a critical test of the company’s strategic direction. After delivering quarterly earnings that exceeded expectations yet failed to significantly lift the stock price, investors are eager for reassurance that NVIDIA’s strategy of reinvesting profits into the AI ecosystem is beginning to pay off.
Market research firm eMarketer analyst Jacob Bourne said, “I expect NVIDIA to present an updated full-stack roadmap from Rubin to Feynman, with a focus on reasoning, agentic AI, networking technology, and AI factory infrastructure.”
“Post-Training Era” Competition Focus: Inference Chips
As the AI industry transitions from the “training” phase of large models to the “inference” phase, where AI agents perform tasks across applications, the competitive landscape is undergoing profound change. Although NVIDIA currently holds over 90% of the training and inference markets, analysts generally believe that losing market share is inevitable, especially in the inference domain.
Sid Sheth, founder and CEO of startup d-Matrix specializing in inference chips, said that while NVIDIA will remain dominant in training, “Inference is a whole different ballgame.” He added that CUDA, NVIDIA’s core software supporting most AI training and locking developers into its ecosystem, has a weaker “moat” in inference. Developers can turn to competitors outside NVIDIA because running pre-trained AI models does not require the complex programming involved in training them.
To address this trend, NVIDIA is expected to launch new products optimized for inference workloads at the conference. Reports suggest that an inference chip integrating technology from Groq, an AI startup acquired for $1.7 billion in December last year, may debut. This chip aims to provide fast and cost-effective inference computing power. Groq’s ultra-fast AI technology will be integrated into NVIDIA’s extensive CUDA ecosystem to strengthen its software moat.
Potential Threats and NVIDIA’s “Defense Fortifications”
However, challenges remain formidable. On one hand, key NVIDIA customers such as OpenAI and Meta (META.US) have begun developing their own chips, with Meta explicitly planning to release a new AI chip every six months. The rise of application-specific integrated circuits (ASIC) is seen as a long-term threat to NVIDIA’s general-purpose graphics processors (GPU), as these custom chips demonstrate higher efficiency in inference scenarios.
KinNgai Chan, Managing Director of Summit Insights Group, said that compared to a year ago, NVIDIA will undoubtedly face more intense market competition. He predicts that by 2027, as companies achieve large-scale deployment of self-developed ASIC chips, NVIDIA’s market share will decline, especially in the inference chip market.
To counter these challenges, NVIDIA is taking multiple measures to strengthen its defenses. Besides acquiring Groq, the company recently invested $2 billion each in optical communication firms Lumentum (LITE.US) and Coherent (COHR.US) to promote the application of “co-packaged optics” (CPO) technology. This technology uses light instead of electrical signals to transmit data between chips, potentially greatly improving connection efficiency and reducing power consumption in large-scale data centers. William Blair analyst Sebastien Naji expects CPO to be a key breakthrough in the next-generation Feynman chip architecture.
Bourne from eMarketer added that NVIDIA is likely to position CPO technology at GTC as a critical solution for efficiently connecting large-scale AI clusters. However, the current production scale of this technology cannot match NVIDIA’s chip shipments, and the costs and feasibility of large-scale deployment will also be key concerns for investors.
On the other hand, the role of central processing units (CPU), long dominated by Intel (INTC.US) and AMD (AMD.US), in AI tasks is rebounding. William McGonigle, analyst at Third Bridge, pointed out that with the rise of agent-based AI, the “agent orchestration layer” handled by CPUs is becoming a new performance bottleneck. Therefore, he expects NVIDIA to showcase server products that rely solely on its CPUs to respond to this emerging trend.
AI Agents and Robotics: Driving the Next Wave of Growth
Beyond hardware competition, the market is also focused on whether AI applications can sustain continuous demand for computing power. Jensen Huang previously emphasized that agentic AI will become the next major driver of inference demand. Sheth from d-Matrix said that as the potential of voice, video, and multimodal AI agents is gradually unlocked, the field is expected to usher in a new wave of inference computing.
Robotics technology is seen as another growth space. Daniel Newman, CEO of The Futurum Group, noted that NVIDIA reported about $6 billion in robotics-related revenue last quarter and predicted that the development timeline for humanoid robots will be very “aggressive.” This suggests that physical AI may become a reality faster than expected.
Geopolitical Risks: The Sword of Damocles Hanging Over Chip Giants
Beyond technological competition, geopolitical factors are increasingly becoming key variables shaping NVIDIA’s future. As the U.S. considers further restrictions on AI chip exports and access to key markets like China becomes limited, NVIDIA’s global sales footprint is being reshaped. Reports indicate that after demand from the Chinese market cooled off entirely, NVIDIA ceased production of the H200 chip and shifted capacity to the next-generation Rubin platform.
In this context, large-scale AI infrastructure investments by Middle Eastern countries such as Saudi Arabia and the UAE are significant for NVIDIA. However, regional conflicts, energy costs, and the pace of data center construction add uncertainties to the demand in these emerging markets.