
Qualcomm and Nvidia are battling it out for supremacy in AI chip power-efficiency tests.

Qualcomm’s AI chips have outperformed Nvidia’s in two of three measures of power efficiency, according to new test data.

While Nvidia has long dominated the market for training AI models on large amounts of data, the inference market – in which trained models perform tasks like generating text and recognizing images – is expected to grow rapidly as more businesses invest in AI technologies.

The news comes as Alphabet’s Google and other companies look for ways to contain the costs of incorporating these technologies into their products. Analysts believe the market for data center inference chips is poised for significant expansion.

Qualcomm’s Cloud AI 100 chip has outperformed Nvidia’s H100 chip in two out of three measures of power efficiency, according to testing data published by engineering consortium MLCommons. While Nvidia is known for its dominance in the AI training market, Qualcomm has turned its attention to creating chips that prioritize power consumption, drawing on its experience designing chips for battery-powered devices such as smartphones.

In the published testing data, Qualcomm’s chips carried out more server queries per watt than Nvidia’s flagship H100 chip in both image classification and object detection. This is good news for companies looking to integrate AI technologies into their products, as power consumption is a major cost to consider.

In image classification, Qualcomm’s chips recorded an impressive 227.4 server queries per watt, well ahead of Nvidia’s 108.4. Qualcomm also led in object detection, achieving 3.8 queries per watt to Nvidia’s 2.4.

With businesses exploring ways to keep extra costs low, Qualcomm’s focus on power efficiency could help them gain an edge in the growing market for data center inference chips.

In the third category, natural language processing, Nvidia came out on top. This is an important area of AI, as it is widely used in systems like chatbots. Nvidia achieved 10.8 queries per watt, while Qualcomm ranked second with 8.9 queries per watt.
