Meta Platforms to use Nvidia’s new flagship AI chip for Llama model training.

Meta Platforms, the parent company of Facebook, is gearing up to incorporate Nvidia’s latest flagship artificial intelligence chip into its operations. According to a spokesperson from Meta, the initial shipments of Nvidia’s new B200 “Blackwell” chip are expected to arrive later this year, as reported by Reuters.

Nvidia unveiled the B200 chip at its annual developer conference on Monday. The company says the chip delivers 30 times faster performance on tasks such as serving answers from chatbots, though it has not disclosed how the chip performs when training on the vast datasets that power those chatbots. Nvidia has long been the leading supplier of the GPUs essential for advanced artificial intelligence work.

Colette Kress, Nvidia’s Chief Financial Officer, told financial analysts that the company expects the new GPUs to come to market later this year, with shipment volumes ramping up in 2025.

Meta, one of Nvidia’s largest customers, has deployed hundreds of thousands of the previous-generation H100 chips to power its content recommendation systems and generative AI products. Meta CEO Mark Zuckerberg said in January that the company plans to stockpile approximately 350,000 H100 chips by year-end, bringing its total GPU count, including other models, to around 600,000.

Zuckerberg has said Meta intends to use Blackwell to train its Llama models. The company is currently training the third generation of Llama on GPU clusters announced last week, each equipped with approximately 24,000 H100 GPUs. A Meta spokesperson confirmed that the company plans to keep using those clusters for Llama 3 training and to adopt Blackwell for subsequent generations of the model.