No single GPU, XPU, or other AI accelerator can meet the computational demands of modern AI workloads. Tens of thousands — and, in the near future, hundreds of thousands — must work together to share the processing load.
Llama 3, for example, needs more than 700 TB of memory and 16,000 accelerators for pre-training alone. And, like other AI models, its parameter count is expected to double every four to six months.
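As a rough sanity check on those figures, the back-of-envelope math below divides the cited aggregate memory across the cited accelerator count. The 700 TB and 16,000 numbers come from the text above; the per-device result is an illustrative estimate, not a vendor specification.

```python
# Back-of-envelope: memory per accelerator for Llama 3 pre-training,
# using the aggregate figures cited in the text (assumption: memory is
# spread evenly across all accelerators, which real deployments vary).
total_memory_tb = 700      # aggregate memory cited for pre-training
accelerators = 16_000      # accelerators cited for pre-training

gb_per_accelerator = total_memory_tb * 1_000 / accelerators
print(f"{gb_per_accelerator:.2f} GB per accelerator")  # 43.75 GB
```

Even under this idealized even split, each device carries on the order of 40+ GB, which is why the interconnect fabric between them matters as much as the accelerators themselves.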
This mass-scale parallel processing and continuous growth put a tremendous strain on the network fabrics that bring together AI clusters, and, more specifically, the interconnects that transport data between all of the accelerators within them.
Emerging standards like Ultra Ethernet and Ultra Accelerator Link (UALink) are addressing the need for larger AI clusters with higher-bandwidth, lower-latency interconnects. And the industry’s first Ultra Ethernet and UALink IP solutions, which we recently announced, will enable massive AI clusters to be scaled both out and up.
To support the growing computational demand of modern workloads, AI clusters must be scaled both out (via network fabrics) and up (within the rack).
Ultra Ethernet addresses the former, providing high-performance, vendor-agnostic links for connecting up to a million nodes within a massive AI network. And UALink addresses the latter, providing high-speed, low-latency links for connecting more than a thousand AI accelerators.
These open, industry-standard protocols make it possible to expand processing performance and scale without vendor lock-in, giving flexibility and investment protection to those building and modernizing hyperscale data centers and high-performance computing (HPC) environments.
As an active member of the Ultra Ethernet Consortium (UEC) and UALink Consortium (UAC), we are helping shape and drive these emerging standards that will spur the next generation of AI and HPC architectures.
Synopsys is addressing the need for high-bandwidth, low-latency interconnects with the introduction of Ultra Ethernet IP and UALink IP solutions, which provide the interfaces needed to scale today’s and tomorrow’s AI and HPC architectures.
Built on silicon-proven technology, our Ultra Ethernet IP solution will enable a blistering bandwidth of 1.6 Tbps with ultra-low latency for scaling (out) massive AI networks. And our UALink IP solution will deliver up to 200 Gbps per lane and memory sharing capabilities to scale (up) accelerator connectivity.
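To put the per-lane figure in context, a quick sketch of how lane speed aggregates into link bandwidth follows. The 200 Gbps/lane rate comes from the text above; the four-lane link width is a hypothetical example for illustration, not a stated product configuration.

```python
# Illustrative aggregate-bandwidth math for a multi-lane link.
# 200 Gbps/lane is the per-lane rate cited in the text; the lane
# count below is an assumed example width, not a spec value.
gbps_per_lane = 200
lanes = 4                  # hypothetical link width

link_gbps = gbps_per_lane * lanes
print(f"{link_gbps} Gbps per link")  # 800 Gbps
```

Wider links scale this linearly, which is how per-lane signaling rates translate into the multi-hundred-gigabit connections needed between accelerators in a rack.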
Based on our industry-leading, best-in-class Ethernet and PCIe IP — which have helped deliver more than 5,000 successful tapeouts — both solutions reduce risk and accelerate time-to-market for those developing next-gen semiconductors, systems-on-chip (SoCs), and AI accelerators with Ultra Ethernet and UALink interconnects.
Synopsys is at the forefront of AI and HPC design innovation, offering the industry’s broadest portfolio of high-speed interface IP. With complete, secure IP solutions for PCIe 7.0, 1.6T Ethernet, CXL, HBM, UCIe, and now Ultra Ethernet and UALink, we are enabling new levels of AI and HPC performance, scalability, efficiency, and interoperability.