NVIDIA today announced the launch of its new Nvidia HGX AI supercomputing platform, powered by the new Nvidia A100 PCIe GPU with 80 GB of HBM2e memory. The card reaches a bandwidth of 2,039 GB/s (+25% vs. the Nvidia A100 40GB) with a TDP of 250 W, compared to the 400 W of the SXM4-format solutions (shown on the right in the image below).
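As a rough sanity check, the quoted 2,039 GB/s follows from the usual peak-bandwidth formula, assuming the A100 80GB's 5,120-bit HBM2e interface and a per-pin data rate of roughly 3.19 Gb/s (figures taken from NVIDIA's public spec sheet, not stated in this article):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective per-pin data rate)
# Assumed HBM2e figures for the A100 80GB (not stated in the article):
BUS_WIDTH_BITS = 5120      # five active HBM2e stacks x 1024 bits each
DATA_RATE_GBPS = 3.186     # effective data rate per pin, in Gb/s

bandwidth_gb_s = BUS_WIDTH_BITS / 8 * DATA_RATE_GBPS
print(f"{bandwidth_gb_s:.0f} GB/s")  # ~2039 GB/s, matching the quoted figure
```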

Nvidia HGX AI with 2x Nvidia A100 80GB PCIe vs 8x Nvidia A100 40GB in SXM4

Alongside the new GPU, the platform adds Nvidia NDR 400G InfiniBand networking (64 ports at 400 Gb/s per switch) and Nvidia Magnum IO GPUDirect Storage software, which enables a direct data path between storage and GPU memory.

According to NVIDIA:

To accelerate the new era of AI and industrial HPC, Nvidia has added three key technologies to its HGX platform: the Nvidia A100 80GB PCIe GPU, the Nvidia NDR 400G InfiniBand network, and the Nvidia Magnum IO GPUDirect Storage software. Together, they provide extreme performance to enable innovation in industrial HPC.

“The HPC revolution started in academia and is rapidly spreading to a wide range of industries,” said Jensen Huang, Founder and CEO of Nvidia.

“Key dynamics are driving super-exponential advances beyond Moore’s law that have made HPC a useful tool for industries. Nvidia’s HGX platform gives researchers unprecedented high-performance computing acceleration to tackle the toughest problems industries face.”
