The NVIDIA Hopper H100 GPU will be the fastest AI chip ever

Based on the Hopper architecture, the H100 GPU will be NVIDIA's new "monster" for the world of servers and data centers. Built from 80 billion transistors, the H100 will run at an operating frequency of 1.40-1.50 GHz. It will offer 8,448 FP64 and 16,896 FP32 cores, as well as 528 Tensor Cores, but one of its most notable features will be the use of 96GB of HBM3 memory. Due to ECC support and other factors, however, users will “only” be able to access 80GB of it across the wide 5,120-bit bus.
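To get a feel for what a 5,120-bit memory interface implies, the small sketch below computes theoretical bandwidth from the bus width and a per-pin data rate. The per-pin rate used here is an illustrative assumption for HBM3, not an official H100 specification.

```python
# Illustrative only: theoretical memory bandwidth from bus width and data rate.
# The per-pin data rate below is an assumed example value, not an official H100 figure.
BUS_WIDTH_BITS = 5120          # memory interface width reported for H100
DATA_RATE_GBPS_PER_PIN = 5.2   # assumed HBM3 per-pin data rate (illustrative)

bandwidth_gbps = BUS_WIDTH_BITS * DATA_RATE_GBPS_PER_PIN  # total gigabits per second
bandwidth_gb_per_s = bandwidth_gbps / 8                   # convert to gigabytes per second

print(f"Theoretical bandwidth: {bandwidth_gb_per_s / 1000:.2f} TB/s")
# -> roughly 3.3 TB/s with these assumed numbers
```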

According to an article posted on the NVIDIA developer page, the H100 has been designed with advanced EDA (Electronic Design Automation) tools, aided by artificial intelligence through the PrefixRL methodology. This has enabled the company to make smaller, faster, and more energy-efficient chips.
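For context, the arithmetic circuits targeted by PrefixRL are parallel prefix circuits such as fast adders, where the chosen prefix structure trades circuit area against delay. The sketch below is a minimal, purely illustrative software model of a Kogge-Stone-style prefix adder built from generate/propagate signals; it is not NVIDIA's tooling, only a way to see what such a circuit computes.

```python
# Minimal software model of a parallel prefix (Kogge-Stone style) adder.
# Purely illustrative of the kind of circuit PrefixRL optimizes; not NVIDIA code.

def prefix_add(a: int, b: int, width: int = 8) -> int:
    # Per-bit generate (g) and propagate (p) signals.
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]

    # Kogge-Stone prefix tree: combine (g, p) pairs at doubling distances.
    dist = 1
    while dist < width:
        new_g, new_p = g[:], p[:]
        for i in range(dist, width):
            new_g[i] = g[i] | (p[i] & g[i - dist])
            new_p[i] = p[i] & p[i - dist]
        g, p = new_g, new_p
        dist *= 2

    # Carry into bit i is the group generate of bits 0..i-1.
    carries = [0] + g[:width - 1]
    sum_bits = [(((a >> i) & 1) ^ ((b >> i) & 1) ^ carries[i]) for i in range(width)]
    return sum(bit << i for i, bit in enumerate(sum_bits))

# Sanity check against ordinary addition modulo 2**8.
assert all(prefix_add(x, y) == (x + y) % 256 for x in range(256) for y in range(256))
```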

NVIDIA itself has called the H100 the fastest AI chip in the world, with nearly 13,000 circuits designed by artificial intelligence; according to the company, this approach reduced circuit area by up to 25% compared with conventional EDA tools. Since PrefixRL is very computationally demanding, NVIDIA has developed Raptor, an in-house platform that distributes tasks across CPUs, GPUs, and Spot instances.
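Raptor is NVIDIA's internal platform and its API is not public, so the fragment below is only a generic sketch, using Python's standard concurrent.futures, of the general idea of farming independent evaluation jobs out to a pool of workers; the function name and scoring are hypothetical placeholders.

```python
# Conceptual sketch of distributing independent jobs across a worker pool.
# This is a generic illustration, not Raptor's actual API or architecture.
from concurrent.futures import ProcessPoolExecutor, as_completed

def evaluate_circuit(candidate_id: int) -> tuple[int, float]:
    # Placeholder for an expensive evaluation (e.g. synthesizing and scoring
    # one candidate circuit); here it just returns a dummy score.
    score = 1.0 / (1 + candidate_id)
    return candidate_id, score

if __name__ == "__main__":
    candidates = range(16)  # hypothetical batch of candidate designs
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(evaluate_circuit, c) for c in candidates]
        for fut in as_completed(futures):
            cid, score = fut.result()
            print(f"candidate {cid}: score {score:.3f}")
```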

Arithmetic circuits were once the craft of human experts, and are now designed by AI in NVIDIA GPUs. H100 chips have nearly 13,000 AI designed circuits! How is this possible? Blog https://t.co/PpKrAmV8vc + a thread pic.twitter.com/3RrZl2muJ3

- Rajarshi Roy (@rjrshr) July 8, 2022

NVIDIA will begin shipping Hopper H100 GPUs during the second half of this year and, although the price has not yet been specified, it will certainly not be a product within everyone's reach. Find more details about it in our previous dedicated news.