
Yotta receives India’s first-ever consignment of NVIDIA H100 – the fastest GPUs in the world – at its NM1 data center park

The NVIDIA H100 Tensor Core GPUs will be used to power Shakti Cloud, the world’s 10th fastest supercomputer, giving a huge fillip to India’s AI ambitions

Yotta Data Services, an end-to-end digital transformation service provider, has announced the arrival of the world’s fastest GPUs – NVIDIA H100 Tensor Core GPUs – at its NM1 data center park. With this, Yotta moves one step closer to revolutionizing AI development in India and the world.

The first cluster comprises more than 4,000 GPUs and was received by Sunil Gupta, Co-founder, MD & CEO, Yotta Data Services, on 14 March. This delivery further strengthens Yotta’s position as NVIDIA’s first Network Cloud Partner in India and, globally, as an Elite Partner, as it plans to scale its GPU fleet to 32,768 by the end of 2025.

With the arrival of the NVIDIA H100 GPUs, Yotta aims to double down on its vision of democratizing access to GPU resources, fostering innovation and competitiveness across various sectors. The Shakti Cloud platform will also include foundational AI models and applications that will help Indian enterprises create powerful AI tools and products, and significantly reduce the time-to-value of AI-powered products.

Commenting on the announcement, Sunil Gupta, Co-founder, MD & CEO, Yotta Data Services, said, “We at Yotta are proud to be at the heart of the AI revolution in India. The delivery of the NVIDIA H100 marks the beginning of a new chapter, not just for Yotta, but for a truly AI-powered digital Bharat. With access to the world’s most powerful hardware right here on Indian soil, Yotta will help Indian businesses, governments, startups, and researchers accelerate innovation, drive growth and efficiency, and help achieve excellence as we scale greater heights in the AI revolution.”

Based on the Hopper architecture, NVIDIA’s H100 GPU has been designed specifically with AI applications in mind. It packs 80 billion transistors – substantially more than its predecessor, the A100 – enabling it to process vast amounts of data at high speed. The GPU is well suited for training the Large Language Models (LLMs) that power content creation, translation, and medical diagnosis, among other AI applications.
