Why OpenAI, Meta, Google and more depend on GPUs to create powerful AI bots like ChatGPT, Gemini (Easy Explainer)

newyhub


It would be an understatement to say that AI models have taken the world by storm. They are everywhere—in your phone, the car you drive, the washing machine you use, and even in the video games you play. It all started with the launch of ChatGPT by OpenAI, and since then, multiple other big tech companies, including Meta, Google, and Anthropic, have launched their own AI models and subsequently, bots based on them. But why have only select companies been able to create these AI bots? Is it only a research thing? The answer is no!

AI companies need large capital, and they also need access to massive computing power to train their respective models. Now, where does the “compute” come from? Graphics Processing Units, or GPUs, as you know them. Yes, the same hardware that you may use to power some intensive video games on your PC.

This is exactly why NVIDIA has witnessed explosive growth. At one point it became the most valuable company in the world by market cap, briefly overtaking Apple and Microsoft, and while the top spots keep trading places, it now sits comfortably in the top five.

So, why do AI companies need GPUs? Let's answer that in this explainer.


GPUs And AI: What’s The Connection?

The short answer: GPUs are far better at the kind of repetitive number-crunching AI needs, and they do it with better energy efficiency than CPUs. Over time, the performance you get for the money has also steadily improved, making them the ideal choice.
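To make that concrete, here is a tiny illustrative sketch (in Python with NumPy, running on a CPU, not actual GPU code) of the data-parallel pattern GPUs are built for: applying the same simple operation to a huge batch of independent values, rather than grinding through them one at a time.

```python
import numpy as np

# Illustration only: the same arithmetic done two ways.
values = np.arange(100_000, dtype=np.float64)

# Sequential view: one "calculator" handles every element in turn,
# which is roughly how a single CPU core would approach it.
sequential = np.empty_like(values)
for i in range(len(values)):
    sequential[i] = values[i] * 2.0 + 1.0

# Parallel view: one vectorized operation over the whole array.
# Conceptually, each element could be handed to its own GPU core.
parallel = values * 2.0 + 1.0

assert np.array_equal(sequential, parallel)
```

Both paths compute the same answer; the difference is that the second formulation exposes thousands of independent calculations that parallel hardware can run at once.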

The long answer: NVIDIA, which makes GPUs for the likes of OpenAI, Meta, Oracle, Tesla, and more, says that modern GPUs excel at parallel processing, can scale that processing up to supercomputing levels, and are backed by a broad software ecosystem for AI.

This is exactly why market leaders like OpenAI trained their Large Language Models on thousands of NVIDIA GPUs. To put it simply, these AI models are, at their core, neural networks, as NVIDIA describes them: layer upon layer of equations that relate one piece of data to another. That is where the massive compute of GPUs comes into play. With thousands of cores (think of each one as a tiny calculator), a GPU can "solve" the many layers that make up an AI model simultaneously.
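A minimal sketch of what one of those "layers of equations" looks like, again in Python with NumPy and purely illustrative: a single neural-network layer is essentially a big matrix multiplication followed by a simple function. The sizes here are tiny; real models stack thousands of such layers with billions of weights, which is the workload GPUs parallelize.

```python
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.standard_normal((32, 512))    # a batch of 32 data pieces
weights = rng.standard_normal((512, 256))  # one layer's "equations"
bias = np.zeros(256)

# Every one of the 32 * 256 outputs is an independent dot product:
# exactly the kind of job a GPU spreads across thousands of cores.
outputs = np.maximum(inputs @ weights + bias, 0.0)  # ReLU activation

assert outputs.shape == (32, 256)
```

Training a model means running layers like this, forward and backward, over enormous datasets billions of times, which is why the GPU count matters so much.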

This ability to handle so many layers simultaneously, in real time, is what makes GPUs ideal for AI training. Think of a GPU as a big ice shaver: a means of breaking a huge block of ice (the AI model) into millions of tiny crystals, making it consumable.


AI Tech Giants Are Busy Buying Powerful GPUs

NVIDIA only recently supplied its H200 GPUs to OpenAI, reportedly for the development of GPT-5 and, eventually, the pursuit of AGI, or Artificial General Intelligence. The H200 is considered to be among the fastest AI GPUs available, and NVIDIA CEO Jensen Huang himself "hand-delivered" the world's first DGX H200 to OpenAI in April this year.

More recently, Jensen Huang and Mark Zuckerberg shared the stage for a livestream where they discussed a wide range of AI topics, including how open-source AI models make AI accessible to millions. Zuckerberg even went on to say that in the future everyone will have an AI model of themselves. Setting that conversation aside, Meta has been a big customer for NVIDIA and maintains a close relationship with the company. It reportedly purchased about 350,000 NVIDIA H100 GPUs for its ambitions with Meta AI and Llama, and by the end of 2024, Meta is expected to have a total of 600,000 GPUs.

