Modern computer applications rely heavily on graphics processing and rendering, which involve large numbers of simultaneous mathematical calculations. A typical CPU is not well suited to such massively parallel workloads, which is why the dedicated Graphics Processing Unit (GPU) was introduced. The GPU has since found a role not only in graphics processing but also in several emerging applications such as AI, machine learning, VR, autonomous driving, and network routing.
GPUs require memory that offers much higher throughput than conventional memories such as DDR, since they process massive chunks of data all at once. The memory must also provide minimal latency and support concurrent reads and writes. As a result, Graphics Double Data Rate (GDDR) memory, a type of SGRAM dedicated to the GPU, came into the picture.
The GDDR5 standard stayed relevant for around ten years, thanks to regular incremental performance enhancements. However, these updates were not sustainable, as each one delivered ever smaller power-to-performance gains. Consequently, GDDR6, a new memory standard with a revamped clocking and data-burst architecture, was introduced.
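As a rough illustration of what the architectural changes buy, consider peak per-device bandwidth. The sketch below uses representative figures from the public GDDR5 and GDDR6 specifications (pin rates, bus widths, burst lengths), not guaranteed numbers for any particular device.

```python
# Back-of-the-envelope peak bandwidth per device.
# Representative figures from public specs, not guarantees for any specific part.

def peak_bandwidth_gbps(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak per-device bandwidth in GB/s = pin rate (Gb/s) * width (bits) / 8."""
    return pin_rate_gbps * bus_width_bits / 8

# GDDR5: single 32-bit channel, up to ~8 Gb/s per pin, burst length 8 (8n prefetch)
gddr5 = peak_bandwidth_gbps(pin_rate_gbps=8.0, bus_width_bits=32)

# GDDR6: two independent 16-bit channels, up to ~16 Gb/s per pin,
# burst length 16 (16n prefetch), data strobed by a WCK clock running faster than CK
gddr6 = peak_bandwidth_gbps(pin_rate_gbps=16.0, bus_width_bits=2 * 16)

print(f"GDDR5 peak per device: ~{gddr5:.0f} GB/s")  # ~32 GB/s
print(f"GDDR6 peak per device: ~{gddr6:.0f} GB/s")  # ~64 GB/s
```

Roughly doubling the per-pin data rate, while splitting the interface into two independent 16-bit channels, is what lets GDDR6 deliver about twice the per-device bandwidth of GDDR5 at a better power-to-performance ratio.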
The performance of a memory largely governs the overall performance of the system it is used in. Every memory access imposes a cost in time, and as a result memory speed is often the real bottleneck in overall performance. As memories become denser and more tightly packed, the power budget is also a factor that cannot be ignored. GDDR6 has been optimized to deliver peak performance when needed and to draw minimal power when idle. The following illustration sheds light on a number of distinguishing features contributing to the overall dominance of GDDR6 memory in the high-performance segment.
Until now, GDDR has been associated primarily with the graphics domain. Although it started out as a standard to support graphics processing, it is now spreading into other domains as well. The steady boost in GDDR memory performance makes it suitable for applications demanding raw throughput.
Another high-performance memory standard, HBM, which achieves its bandwidth through massive parallelism, may be a direct competitor in such applications. However, the sheer amount of development and manufacturing effort required for HBM production leads to a higher price. Furthermore, HBM remains a niche standard, whereas GDDR is well established in the memory industry. In terms of system integration, GDDR has the upper hand over HBM.
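To make the trade-off concrete, here is a sketch extending the bandwidth calculation above to an HBM2 stack. The HBM2 figures (1024-bit interface per stack at roughly 2 Gb/s per pin) are representative public numbers, and the integration notes are broad generalizations rather than rules.

```python
# Rough GDDR6 vs. HBM2 comparison using the same peak-bandwidth formula.
# Representative public figures; actual parts and packages vary.
configs = {
    # name: (pin rate in Gb/s, interface width in bits, typical integration)
    "GDDR6 device": (16.0, 32, "discrete package on a standard PCB"),
    "HBM2 stack":   (2.0, 1024, "die stack on a silicon interposer (2.5D)"),
}

for name, (rate, width, integration) in configs.items():
    bandwidth = rate * width / 8  # GB/s
    print(f"{name}: ~{bandwidth:.0f} GB/s peak, {integration}")

# GDDR6 device: ~64 GB/s peak, discrete package on a standard PCB
# HBM2 stack:   ~256 GB/s peak, die stack on a silicon interposer (2.5D)
```

An HBM2 stack offers several times the bandwidth of a single GDDR6 device, but the interposer-based 2.5D packaging it requires is exactly the development and manufacturing overhead that drives up its cost, while multiple GDDR6 devices can be routed on a conventional board.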
Next-generation technologies such as automated vehicles, machine learning, artificial intelligence, and augmented reality need to process tremendous amounts of data at once. Autonomous driving is a crucial application where performance is always the priority: even a tiny delay or lag in computation could prove catastrophic. GDDR6 memory, being performance focused, could prove beneficial in such applications.
Recognizing the potential that GDDR6 carries, Synopsys has been at the forefront of GDDR6 memory verification. Early adopters of GDDR6 memory are already engaged with Synopsys and have been rapidly deploying its VIP for GDDR6.
Keep an eye out for more in-depth blogs on GDDR6 still to come. For more information on Synopsys memory VIP, please visit http://synopsys.com/vip
In the meantime, read our recent announcements and blogs on next generation memory technologies:
• Industry’s First DDR5 NVDIMM-P Verification IP for Next-generation Storage-class Memory Designs
• Industry’s First LPDDR5 IP & VIP Solution Extending Leadership in DDR5/LPDDR5
• LPDDR5: Meeting Power, Performance, Bandwidth, and Reliability Requirements of AI, IoT and Automotive
• DDR5/4/3/2: How Memory Density and Speed Increased with each Generation of DDR
• How DFI 5.0 Ensures Higher Performance in DDR5/LPDDR5 Systems?