Nvidia’s New Blackwell AI GPUs are 7x Faster and 4x More Efficient

Nvidia has revealed its newest line of graphics processing units (GPUs) for AI training: the Blackwell series. The company claims the new chips deliver up to a 25-fold improvement in energy efficiency, promising lower costs for AI processing operations.

Leading the launch is the Nvidia GB200 Grace Blackwell Superchip, which uses a multi-chip design. The new architecture promises up to 30 times the performance of earlier models for LLM inference workloads.


Speaking to a large crowd of engineers at Nvidia GTC 2024, Nvidia CEO Jensen Huang introduced the Blackwell series, describing it as the beginning of a revolutionary era in computing. Although gaming products are anticipated in the future, Huang emphasized the technological progress fueling this next generation.

In a light-hearted moment during the keynote, Huang joked about the immense value of the prototypes in his hands, quipping that they were worth $10 billion and $5 billion each.


Nvidia asserts that Blackwell-powered supercomputers will let organizations worldwide deploy large language models (LLMs) containing tens of trillions of parameters for real-time generative AI processing, while cutting cost and power usage by up to 25 times. The platform is designed to scale to AI models with up to 10 trillion parameters, marking the start of a more efficient and productive era in AI-driven operations.


Beyond generative AI, the new Blackwell GPUs, named in homage to mathematician David Harold Blackwell, will also serve workloads in data processing, engineering simulation, electronic design automation, computer-aided drug design, and quantum computing.

On the technical side, each chip packs 208 billion transistors and is built on TSMC’s custom 4NP manufacturing process, using a two-reticle-limit die design that delivers substantial processing capability.

NVLink provides 1.8TB/s of two-way bandwidth per GPU, enabling seamless high-speed communication across clusters of up to 576 GPUs, well suited to the complex LLMs prevalent today.


At the core of this launch is the NVIDIA GB200 NVL72, a rack-scale system designed to deliver an astounding 1.4 exaflops of AI performance, backed by 30TB of fast memory.

The GB200 Superchip marks a notable step forward, delivering up to 30 times the performance of the Nvidia H100 Tensor Core GPU for LLM inference while reducing cost and energy usage by up to 25 times.

Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro are ready to unveil a variety of servers featuring Blackwell products. Moreover, Aivres, ASRock Rack, ASUS, Eviden, Foxconn, Gigabyte, Inventec, Pegatron, QCT, Wistron, Wiwynn, and ZT Systems are also anticipated to join in offering servers equipped with Blackwell-based technology.

Expected to be adopted by leading cloud providers, server manufacturers, and prominent AI firms like Amazon, Google, Meta, Microsoft, and OpenAI, the Blackwell platform is poised to lead a fundamental change in computing across multiple sectors.

