Nvidia is preparing to launch a new version of its A100 GPU in PCI-Express 4.0 format that doubles the VRAM of the original variant. Built on the same 7 nm GA100 silicon, this A100 PCIe 4.0 card will be accompanied by no less than 80 GB of HBM2E memory on a 5120-bit memory interface, which gives us a bandwidth of 2039 GB/s (versus 1555 GB/s on the original model), while its maximum TDP will be up to 400 W.
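A quick sanity check on those bandwidth figures: memory bandwidth is the bus width in bytes multiplied by the effective per-pin data rate. The per-pin rates below are back-calculated from the quoted bandwidths, not taken from an official spec sheet, so treat them as an illustration:

```python
# Memory bandwidth = (bus width in bits / 8) * effective data rate per pin.
# Per-pin rates here are derived from the article's bandwidth figures.
bus_width_bits = 5120

# 80 GB HBM2E variant: 2039 GB/s implies roughly 3.19 Gbps per pin
pin_rate_hbm2e = 2039 * 8 / bus_width_bits
print(f"HBM2E per-pin rate: {pin_rate_hbm2e:.2f} Gbps")  # ~3.19 Gbps

# Original 40 GB HBM2 model: 1555 GB/s implies roughly 2.43 Gbps per pin
pin_rate_hbm2 = 1555 * 8 / bus_width_bits
print(f"HBM2 per-pin rate: {pin_rate_hbm2:.2f} Gbps")    # ~2.43 Gbps
```

In other words, the extra bandwidth comes from faster HBM2E stacks on the same 5120-bit interface, not from a wider bus.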
At the silicon level it is completely identical: in an 826 mm² die we find no fewer than 54 billion transistors, 6912 CUDA cores at 1410 MHz, and 432 Tensor Cores. Despite this great show of muscle, it is still not the full silicon, so an even bigger chip from NVIDIA could yet arrive. In theoretical terms, this translates to an FP32 throughput of 19.5 TFLOPs, an FP64 throughput of 9.7 TFLOPs, and an FP64 Tensor Core throughput of 19.5 TFLOPs.
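The FP32 figure follows directly from the core count and clock, assuming each CUDA core retires one fused multiply-add (two FLOPs) per cycle, which is the usual convention for these peak numbers:

```python
# Peak FP32 throughput = cores * clock * FLOPs per core per cycle.
# The 2 FLOPs/cycle figure assumes one FMA per CUDA core per clock.
cuda_cores = 6912
clock_hz = 1410e6        # 1410 MHz boost clock
flops_per_cycle = 2      # one fused multiply-add = 2 FLOPs

fp32_tflops = cuda_cores * clock_hz * flops_per_cycle / 1e12
print(f"Peak FP32: {fp32_tflops:.1f} TFLOPs")  # 19.5 TFLOPs
```

The FP64 rate of 9.7 TFLOPs is exactly half, consistent with GA100's 1:2 FP64-to-FP32 ratio on the regular CUDA cores.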
For now it is unknown when this GPU will reach the market, although the most interesting question will of course be the cost of doubling the VRAM. This model also makes it clear that we will not see Ampere's successor in the short term, in either the gaming or professional markets.