NVIDIA announced the Tesla P100 GPU yesterday during CEO Jen-Hsun Huang's keynote at GTC 2016. The new Pascal-based Tesla P100 is built on a 16nm FinFET process, has a GPU die size of 610 mm², and uses High Bandwidth Memory 2 (HBM2).

As reported, the company is targeting its primary markets first, namely supercomputers and HPC, with shipments starting in June 2016 in the U.S., while OEM availability of the Tesla P100 will begin in the first quarter of 2017.

It seems that NVIDIA's first priority is the HPC and supercomputer market: during yesterday's GTC presentation, the company's CEO also briefly discussed the NVIDIA DGX-1, which it describes as the "world's first deep learning supercomputer." That said, judging by the company's market growth, gaming remains its main source of revenue.

Pascal is the next-generation GPU architecture following the successful launch of Maxwell, one of the most power-efficient architectures available today. According to the company, Pascal is 70% faster than the current Maxwell in deep learning and, thanks to the 16nm FinFET process and HBM2 memory, will offer twice the performance per watt of Maxwell. The Tesla P100 features 16 GB of HBM2 memory on a 4096-bit memory bus, 3584 CUDA cores, 224 texture units, 5.30 TFLOPs of FP64 compute, and a base clock of 1328 MHz.

Here are the specs of the Tesla P100, compared with previous Tesla-series GPUs:

| NVIDIA Tesla Graphics Card | Tesla K40 | Tesla M40 | Tesla P100 |
|---|---|---|---|
| GPU | GK110 (Kepler) | GM200 (Maxwell) | GP100 (Pascal) |
| Process Node | 28nm | 28nm | 16nm |
| Transistors | 7.1 Billion | 8 Billion | 15.3 Billion |
| GPU Die Size | 551 mm² | 601 mm² | 610 mm² |
| SMs | 15 | 24 | 56 |
| CUDA Cores Per SM | 192 | 128 | 64 |
| CUDA Cores (Total) | 2880 | 3072 | 3584 |
| FP64 CUDA Cores / SM | 64 | 4 | 32 |
| FP64 CUDA Cores / GPU | 960 | 96 | 1792 |
| Base Clock | 745 MHz | 948 MHz | 1328 MHz |
| Boost Clock | 875 MHz | 1114 MHz | 1480 MHz |
| FP64 Compute | 1.68 TFLOPs | 0.2 TFLOPs | 5.30 TFLOPs |
| Texture Units | 240 | 192 | 224 |
| Memory Interface | 384-bit GDDR5 | 384-bit GDDR5 | 4096-bit HBM2 |
| Memory Size | 12 GB GDDR5 | 24 GB GDDR5 | 16 / 32 GB HBM2 |
| L2 Cache Size | 1536 KB | 3072 KB | 4096 KB |
| TDP | 235W | 250W | 300W |
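The FP64 compute figures above can be reproduced from the table's own numbers: each FP64 CUDA core retires two floating-point operations per cycle via a fused multiply-add (FMA), so peak throughput is FP64 cores × boost clock × 2. A minimal sketch of that arithmetic:

```python
def peak_fp64_tflops(fp64_cores, boost_clock_mhz):
    # Peak FP64 throughput = cores x clock (Hz) x 2 FLOPs per FMA per cycle,
    # expressed in TFLOPs.
    return fp64_cores * boost_clock_mhz * 1e6 * 2 / 1e12

# FP64 core counts and boost clocks from the spec table above.
gpus = {
    "Tesla K40":  (960, 875),
    "Tesla M40":  (96, 1114),
    "Tesla P100": (1792, 1480),
}

for name, (cores, clock_mhz) in gpus.items():
    print(f"{name}: {peak_fp64_tflops(cores, clock_mhz):.2f} TFLOPs")
# -> Tesla K40: 1.68, Tesla M40: 0.21, Tesla P100: 5.30 TFLOPs
```

This matches the table, including the M40's low figure: Maxwell kept only 4 FP64 cores per SM, since it targeted single-precision workloads.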

Via: WCCFTech