- Nvidia’s next-gen flagship GB202 GPU could use a 512-bit memory bus.
- According to leaks, these graphics cards will utilize GDDR7 memory.
- A reliable insider suggests that, aside from the flagship, the memory configurations are not far removed from the last generation.
Information about what to expect from Nvidia's GeForce RTX 50 series GPU launch is now slowly making its way to the public. A reliable source states that the GB202, expected to power the GeForce RTX 5090, will use a 512-bit memory bus.
According to the leak, this would be paired with 24GB of 28Gbps GDDR7 memory.
I think my persistence is correct. So the difference is that GB202 is 512-bit and AD102 is 384-bit.
— kopite7kimi (@kopite7kimi) March 11, 2024
GPU leaker kopite7kimi speculates on X that Nvidia's next-generation RTX 50-series "Blackwell" GB203 and new GB205 dies will have memory bus widths identical to those of Nvidia's existing RTX 40-series AD103 and AD104 GPU dies.
Those dies are found in some of the best graphics cards, such as the RTX 4080 Super and the RTX 4070 Super. According to kopite7kimi, the memory interfaces on Nvidia's Blackwell GPUs will also skip the 384-bit bus entirely.
The Blackwell GPUs will probably use a 192-bit and 256-bit bus for GB205 and GB203, respectively. The GB202, on the other hand, should come with a 512-bit wide memory bus. That die will likely power the successor to the GeForce RTX 4090.
This setup would be a step up from the GeForce RTX 4090, even if the memory capacity ends up identical. The use of GDDR7 should also enable the GeForce RTX 5090 to outperform its predecessor, thanks to the new memory standard's higher data rates.
Based on these leaks, the next generation could come with the following configurations:
- GB202 (RTX 5090): 512-bit bus, 24GB memory, ~1792 GB/s bandwidth at 28Gbps
- GB203: 256-bit bus, 16GB memory, ~896 GB/s bandwidth at 28Gbps
- GB205: 192-bit bus, 12GB memory, ~672 GB/s bandwidth at 28Gbps
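The bandwidth figures above follow from simple arithmetic: peak bandwidth is the bus width in bytes multiplied by the per-pin data rate. The short Python sketch below reproduces those numbers from the rumored bus widths and the 28Gbps GDDR7 rate cited in the leak; treat all of the inputs as rumored values rather than confirmed specifications.

```python
# Back-of-the-envelope peak-bandwidth math for the rumored Blackwell memory buses.
# Bus widths and the 28 Gbps GDDR7 data rate are leaked/assumed values, not confirmed specs.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth (GB/s) = bus width in bits / 8 bits-per-byte * per-pin data rate (Gbps)."""
    return bus_width_bits / 8 * data_rate_gbps

rumored_buses = {"GB202": 512, "GB203": 256, "GB205": 192}  # bits, per the leak

for die, bus in rumored_buses.items():
    print(f"{die}: {bus}-bit @ 28 Gbps -> ~{peak_bandwidth_gb_s(bus, 28.0):.0f} GB/s")
# GB202: 512-bit @ 28 Gbps -> ~1792 GB/s
# GB203: 256-bit @ 28 Gbps -> ~896 GB/s
# GB205: 192-bit @ 28 Gbps -> ~672 GB/s
```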
With the release of the RTX 50-series, Nvidia is expected to switch to the GDDR7 graphics memory standard. GDDR7's higher data rates would let Nvidia significantly increase bandwidth while reusing, or even narrowing, the memory interface widths of its outgoing GPUs.
Memory interface widths on both Nvidia and AMD GPUs have shrunk in recent generations because faster memory has been paired with larger L2/L3 caches. Newer memory technology, combined with larger caches, allows Nvidia to increase effective memory bandwidth while decreasing bus width.
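As a rough illustration of that trade-off, the sketch below models a cache as a filter in front of VRAM: if a fraction of memory requests hit the on-die cache, only the misses consume DRAM bandwidth, so the effective bandwidth a workload sees is roughly the raw figure divided by the miss rate. The hit rates and bandwidth numbers in the example are illustrative assumptions, not measurements of any real GPU.

```python
# Illustrative model: a larger L2/L3 cache filters memory requests so that only
# misses consume DRAM bandwidth. Effective bandwidth ~ dram_bw / (1 - hit_rate).
# Hit rates and DRAM bandwidths below are made-up examples, not real GPU data.

def effective_bandwidth_gb_s(dram_bw_gb_s: float, cache_hit_rate: float) -> float:
    """Approximate bandwidth the workload 'sees' when a fraction of requests hit cache."""
    assert 0.0 <= cache_hit_rate < 1.0
    return dram_bw_gb_s / (1.0 - cache_hit_rate)

# A narrower bus with faster memory and a bigger cache can match or beat
# a wider bus backed by a smaller cache:
wide_bus_small_cache = effective_bandwidth_gb_s(1008.0, 0.20)  # e.g. 384-bit GDDR6X, modest cache
narrow_bus_big_cache = effective_bandwidth_gb_s(896.0, 0.45)   # e.g. 256-bit GDDR7, large cache

print(f"wide bus, small cache : ~{wide_bus_small_cache:.0f} GB/s effective")
print(f"narrow bus, big cache : ~{narrow_bus_big_cache:.0f} GB/s effective")
```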
Industry insiders are already speculating about the unveiling of Blackwell GPUs targeted at high-performance computing (HPC) at the GTC 2024 event next week. Meanwhile, a formal gaming family announcement could arrive many months later.
[News Reporter]
Malik Usman is a student of Computer Science focused on using his knowledge to produce detailed and informative articles covering the latest findings from the tech industry. His expertise allows him to cover subjects like processors, graphics cards, and more. In addition to the latest hardware, Malik can be found writing about the gaming industry from time to time. He is fond of games like God of War, and his work has been mentioned on websites like Whatculture, VG247, IGN, and Eurogamer.