On February 27, Indian data center operator Yotta Data Services announced a $2 billion investment to build an AI data center powered by Nvidia GPUs, and said it plans to advance its IPO within the next 12 months. The company stated that as domestic large-model training accelerates and user numbers surge, demand for GPUs in India has outstripped supply.
Yotta co-founder and CEO Sunil Gupta said the company has been purchasing Nvidia chips at scale since 2023 and now controls roughly 60% to 70% of India’s GPU computing resources. Many of the local models unveiled at the recent Indian AI summit were trained in Yotta’s data centers; Sarvam AI’s Indus chatbot, for example, has adopted a phased rollout strategy because of limited computing power.
Globally, OpenAI, Google, and Microsoft are accelerating their push into the Indian market. OpenAI has signed a 100-megawatt capacity agreement with Tata Consultancy Services’ data center business, with an option to expand to 1 gigawatt. Sam Altman has previously said the “OpenAI for India” initiative will strengthen local infrastructure and the collaboration ecosystem.
Investment bank Nomura forecasts that India’s data center capacity will grow from 1.93 gigawatts in 2025 to 4 gigawatts in 2028. Several companies have announced plans to invest up to $277 billion in AI infrastructure in India over the next five to seven years.
Yotta plans to expand its GPU inventory through a pre-IPO financing round of $1.2 billion to $1.5 billion. As global competition for computing power intensifies, India is accelerating the build-out of its domestic AI computing network, with data centers and high-end chips becoming key drivers of industrial upgrading.