Beaverton, OR – April 8, 2025 – The UALink Consortium today announced the ratification of the UALink 200G 1.0 Specification, which defines a low-latency, high-bandwidth interconnect for communication between accelerators and switches in AI computing pods.
The UALink 1.0 Specification enables 200G per-lane scale-up connections for up to 1,024 accelerators within an AI computing pod, delivering the open standard interconnect for next-generation AI cluster performance.
“As the demand for AI compute grows, we are delighted to deliver an essential, open industry standard technology that brings next-generation AI/ML applications to market,” said Kurtis Bowman, UALink Consortium Board Chair. “UALink is the only memory-semantic solution for scale-up AI, optimized for lower power, latency, and cost while increasing effective bandwidth. The groundbreaking performance made possible with the UALink 200G 1.0 Specification will revolutionize how Cloud Service Providers, System OEMs, and IP/Silicon Providers approach AI workloads.”
UALink creates a switch ecosystem for accelerators – supporting critical performance for emerging AI and HPC workloads. It enables accelerator-to-accelerator communication across system nodes using read, write, and atomic transactions and defines a set of protocols and interfaces enabling the creation of multi-node systems for AI applications.
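To picture the memory-semantic model described above, the short Python sketch below is purely illustrative and hypothetical: UALink is a hardware interconnect specification, not a software API, and the class and method names here are invented. It simply models a pod of accelerators in which any accelerator's memory can be targeted with read, write, and atomic transactions.

```python
# Conceptual sketch only: the names below are hypothetical and merely
# illustrate the memory-semantic read/write/atomic model described above.

class AcceleratorMemory:
    """Toy model of one accelerator's memory within a scale-up pod."""

    def __init__(self, accel_id, size_words=1024):
        self.accel_id = accel_id
        self.words = [0] * size_words

    def read(self, addr):
        # Memory-semantic read: load a word from a (possibly remote) accelerator.
        return self.words[addr]

    def write(self, addr, value):
        # Memory-semantic write: store a word into a (possibly remote) accelerator.
        self.words[addr] = value

    def atomic_add(self, addr, delta):
        # Atomic transaction: the read-modify-write completes as one operation.
        old = self.words[addr]
        self.words[addr] = old + delta
        return old


# A pod is modeled as a collection of accelerators reachable by ID
# (a UALink 1.0 pod supports up to 1,024); any accelerator can target
# any other accelerator's memory.
pod = {i: AcceleratorMemory(i) for i in range(4)}
pod[0].write(addr=42, value=7)               # write into accelerator 0's memory
print(pod[1].atomic_add(addr=42, delta=3))   # atomic update on accelerator 1
print(pod[0].read(addr=42))                  # read back the written value
```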
Features:
- Performance
  - Low-latency, high-bandwidth interconnect for hundreds of accelerators in a pod.
  - Provides a simple load/store protocol with the raw speed of Ethernet and the latency of PCIe® switches.
  - Designed for deterministic performance, achieving 93% effective peak bandwidth (see the illustrative calculation after this list).
- Power
  - Enables a highly efficient switch design that reduces power and complexity.
- Cost
  - Uses significantly smaller die area for the link stack, lowering power and acquisition costs and reducing Total Cost of Ownership (TCO).
  - Increased bandwidth efficiency further lowers TCO.
- Open
  - Multiple vendors are developing UALink accelerators and switches.
  - Harnesses member company innovation to drive innovative solutions into the specification and interoperable products to the market.
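As a rough illustration of the performance figures above (this arithmetic is not taken from the specification text), applying the quoted 93% effective-bandwidth figure to the 200 Gb/s per-lane rate gives roughly 186 Gb/s of usable bandwidth per lane:

```python
# Illustrative arithmetic only: combine the two figures quoted in the feature list.
lane_rate_gbps = 200   # per-lane signaling rate defined by UALink 200G 1.0
efficiency = 0.93      # effective peak bandwidth cited above

effective_gbps = lane_rate_gbps * efficiency
print(f"Effective per-lane bandwidth: ~{effective_gbps:.0f} Gb/s")  # ~186 Gb/s
```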
“AI is advancing at an unprecedented pace, ushering in a new era of AI reasoning with new scaling laws. As the demand for compute surges and speed requirements continue to grow exponentially, scale-up interconnect solutions must evolve to keep pace with these rapidly changing AI workload requirements,” said Sameh Boujelbene, VP at Dell’Oro Group. “We are thrilled to see the release of the UALink 1.0 Specification, which rises to this challenge by enabling 200G per-lane scale-up connections for up to 1,024 accelerators within the same AI computing pod. This milestone marks a significant step forward in addressing the demands of next-generation AI infrastructure.”
“With the release of the UALink 200G 1.0 Specification, the UALink Consortium’s member companies are actively building an open ecosystem for scale-up accelerator connectivity,” said Peter Onufryk, UALink Consortium President. “We are excited to witness the variety of solutions that will soon be entering the market and enabling future AI applications.”
The UALink 200G 1.0 Specification is available for public download at https://ualinkconsortium.org/specification/.