Nvidia HGX H200: Nvidia Upgrades Its Top-of-the-Line Chip for AI Work


Nvidia, the leading graphics processing unit (GPU) manufacturer, has revealed the details of its latest high-performance chip for AI work, the HGX H200. The new GPU builds on the success of its predecessor, the H100, introducing significant upgrades in memory bandwidth and capacity to better handle intensive generative AI workloads.

What’s the difference between the HGX H200 and the H100?

The HGX H200 offers 1.4 times the memory bandwidth and 1.8 times the memory capacity of the H100, a notable advance in the AI computing landscape. The key improvement is the adoption of a new, faster memory specification called HBM3e, which raises the GPU’s memory bandwidth to 4.8 terabytes per second and its total memory capacity to 141GB.
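To put those multipliers in perspective, the sketch below checks them against the H100 SXM’s commonly cited specs, roughly 3.35TB/s of bandwidth and 80GB of HBM3. Those baseline figures come from Nvidia’s public spec sheets rather than this announcement, so treat this as a back-of-the-envelope check, not official math.

```python
# Rough sanity check of Nvidia's claimed H200-vs-H100 multipliers.
# Baseline H100 SXM figures (~3.35 TB/s, 80 GB HBM3) are assumed from
# Nvidia's public spec sheets, not from the H200 announcement itself.

h100 = {"bandwidth_tbps": 3.35, "memory_gb": 80}   # H100 SXM (assumed baseline)
h200 = {"bandwidth_tbps": 4.8,  "memory_gb": 141}  # H200 (announced specs)

bw_ratio = h200["bandwidth_tbps"] / h100["bandwidth_tbps"]
mem_ratio = h200["memory_gb"] / h100["memory_gb"]

print(f"Memory bandwidth: {bw_ratio:.2f}x the H100")  # ~1.43x
print(f"Memory capacity:  {mem_ratio:.2f}x the H100")  # ~1.76x
```

The roughly 1.43x and 1.76x results round to the “1.4 times” and “1.8 times” figures Nvidia quotes.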

The introduction of faster and more extensive High Bandwidth Memory (HBM) aims to accelerate performance across computationally demanding tasks, particularly benefiting generative AI models and high-performance computing applications. Ian Buck, Nvidia’s VP of high-performance computing products, highlighted these advancements in a video presentation.

(Image: Nvidia H200 chip)

Despite the technological strides, the big question is the availability of the new chips. Nvidia acknowledges the supply constraints faced by its predecessor, the H100, and aims to release the first H200 chips in the second quarter of 2024. Nvidia is collaborating with global system manufacturers and cloud service providers to ensure availability, but specific production numbers remain undisclosed.

The H200 maintains compatibility with systems supporting H100s, offering a seamless transition for cloud providers. Major players like Amazon, Google, Microsoft, and Oracle are among the first to integrate the new GPUs into their offerings in the coming year.

While Nvidia has not disclosed pricing for the H200, its predecessor, the H100, is estimated to cost between $25,000 and $40,000 each. Demand for these high-performance chips remains intense, with AI companies seeking them to train generative image tools and large language models.

The H200’s unveiling aligns with Nvidia’s efforts to meet the escalating demand for its GPUs. The company plans to triple production of the H100 in 2024, aiming for up to 2 million units, as reported in August. As the AI landscape continues to evolve, the introduction of the H200 promises enhanced capabilities, setting the stage for a more promising year for GPU enthusiasts and AI developers alike.

