Google challenges Nvidia through an artificial intelligence semiconductor collaboration with Mavenir


Google is reportedly in talks with Mavenir Technologies to jointly develop two artificial intelligence-specific semiconductors, a move intended to strengthen its in-house chip design capabilities and mount a direct challenge to NVIDIA's dominance of the AI semiconductor market.

Reuters, citing the U.S. tech outlet The Information on the 19th, reported that Google and Mavenir are negotiating development plans for two chips: a memory processing device that works in conjunction with Google's Tensor Processing Unit (TPU, a semiconductor designed specifically for AI computation), and a new TPU optimized for running AI models. Given Google's long-running effort to enhance its self-developed data center semiconductors, the move can be read as an attempt to expand its design scope from improving raw computational performance to reducing memory bottlenecks.

Memory processing devices are considered a critical component in the AI semiconductor competition. Large language models and generative AI must access and process vast amounts of data quickly, and raw compute alone cannot improve overall performance if data cannot be moved fast enough; efficient data transfer between processing chips and memory has therefore become a core competitive factor. The two companies reportedly aim to complete the device's design and enter trial production by 2027.
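The trade-off described above, where compute power alone cannot lift performance past the rate at which data can be moved, is often summarized by the roofline model. A minimal sketch follows; all figures are hypothetical illustrations, not specifications of any chip mentioned in this article:

```python
# Roofline-model sketch: attainable throughput is capped by whichever is
# lower, the chip's peak compute or its memory bandwidth multiplied by the
# workload's arithmetic intensity (FLOPs performed per byte moved).

def attainable_tflops(peak_tflops: float, bandwidth_tb_s: float,
                      flops_per_byte: float) -> float:
    """Return the roofline bound: min(compute roof, memory roof)."""
    return min(peak_tflops, bandwidth_tb_s * flops_per_byte)

# Hypothetical accelerator: 400 TFLOP/s peak compute, 1.6 TB/s memory bandwidth.
# A low-intensity workload (few FLOPs per byte, e.g. streaming large model
# weights during inference) is limited by bandwidth, not compute:
print(attainable_tflops(400, 1.6, 4))    # bandwidth-bound
print(attainable_tflops(400, 1.6, 500))  # compute-bound
```

On this hypothetical chip, a workload performing 4 FLOPs per byte reaches only 6.4 TFLOP/s despite 400 TFLOP/s of available compute, which is why faster chip-to-memory data transfer, the stated goal of the device under discussion, raises real-world performance.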

The significance of these talks is that they take place within a competitive landscape in which Google's TPU is challenging NVIDIA's graphics processing units (GPUs, originally designed for graphics rendering but now widely used for AI training and inference). The AI semiconductor market is still largely dominated by NVIDIA, but major tech companies are accelerating in-house chip development to reduce reliance on external supply chains and control costs. Google, too, needs stronger autonomous design capabilities to ensure stable access to the computing resources required by its search, cloud computing, and generative AI services.

Mavenir’s recent moves are also noteworthy. The company announced on the 31st of last month that it had established a strategic partnership with NVIDIA and received a $2 billion investment from NVIDIA. The two parties plan to leverage NVIDIA’s NVLink Fusion technology to integrate Mavenir into NVIDIA’s AI factories and AI-RAN (Artificial Intelligence Radio Access Network) ecosystem, and will collaborate in the field of silicon photonics (a technology that uses light instead of electrical signals to improve data transmission efficiency). Additionally, last month, Mavenir completed the acquisition of Celestial AI, a company with optical interconnect technology, for $3.3 billion. This is interpreted as an investment aimed at acquiring next-generation technology to enhance chip-to-memory connection speeds.

Ultimately, the collaboration discussions between Google and Mavenir go beyond mere new product development, reflecting an expanding competition over the core infrastructure of the AI era—semiconductor dominance. In the future, corporate competitiveness will likely depend not only on the performance of processing chips but also on comprehensive design capabilities covering memory processing, inter-chip interconnects, and optical communication technologies. This trend may accelerate the in-house chip development processes of major tech companies and foster strategic collaborations among semiconductor firms.
