NVSwitch GPU Interconnect with NVIDIA DGX-2
NVSwitch was introduced alongside NVIDIA's 16-GPU DGX-2 server, where it allows all 16 NVLink-connected V100 GPUs (each with only six NVLink ports) to communicate with one another. Each NVSwitch is an 18-port, fully connected crossbar switch ASIC; twelve of these ASICs together create the DGX-2's NVLink fabric. Each port delivers 50 GB/s, for a total of 900 GB/s of aggregate bidirectional NVLink bandwidth on one device, roughly 5x the bandwidth available through PCIe switches.
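The per-switch bandwidth figure follows directly from the port count and per-port rate quoted above; a quick sketch of the arithmetic (constant names are illustrative):

```python
# NVSwitch per-device bandwidth, from the figures in the text.
PORTS_PER_SWITCH = 18        # ports on one NVSwitch ASIC
PORT_BW_GBPS = 50            # bidirectional GB/s per NVLink port

aggregate_bw = PORTS_PER_SWITCH * PORT_BW_GBPS
print(aggregate_bw)          # 900 GB/s aggregate bidirectional per switch
```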
How It Works
NVSwitch is mounted on a baseboard as six ASICs, each an 18-port switch built around a fully connected 18×18 crossbar. Two such baseboards communicate with each other to form a single 16-GPU server node. Each of the eight GPUs on a baseboard is connected by a single NVLink to each of the six NVSwitch chips, while eight of the ports on each NVSwitch chip are used to communicate with the other baseboard. As a result, every one of the eight GPUs on a baseboard can communicate with any other GPU on that baseboard at the full 300 GB/s bidirectional bandwidth with a single NVSwitch traversal.
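To make the baseboard topology concrete, here is a minimal sketch of the link layout and the bandwidth it implies (the layout follows the description above; the variable names and port split are illustrative):

```python
# DGX-2 baseboard topology sketch: 8 GPUs and 6 NVSwitch chips per baseboard.
# Each GPU runs one NVLink to each of the six switches, so any two GPUs on
# the same baseboard are a single switch traversal apart on all six links.

GPUS_PER_BOARD = 8
SWITCHES_PER_BOARD = 6
LINK_BW_GBPS = 50  # bidirectional GB/s per NVLink

# One NVLink between every (GPU, switch) pair on the baseboard.
links = {(g, s): 1 for g in range(GPUS_PER_BOARD)
                   for s in range(SWITCHES_PER_BOARD)}

# Per-GPU bandwidth into the fabric: 6 links x 50 GB/s = 300 GB/s,
# matching the full GPU-to-GPU bandwidth quoted above.
per_gpu_bw = SWITCHES_PER_BOARD * LINK_BW_GBPS
print(per_gpu_bw)  # 300

# Port budget on each switch: 8 ports to local GPUs plus 8 ports to the
# other baseboard uses 16 of the 18 available ports.
ports_used = GPUS_PER_BOARD + 8
print(ports_used)  # 16
```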
[Figure: NVSwitch die shot]
Benefits of NVSwitch
To showcase performance, NVIDIA compared a single DGX-2 with 16x Volta V100 GPUs against a pair of DGX-1 servers, each with 8x Volta V100 GPUs and connected to each other by four 100 Gb/s InfiniBand ports.
The results for the Mixture of Experts (MoE) test show that the DGX-2 delivered 2.7x the performance of the pair of DGX-1s. The Integrated Forecast System (IFS) test shows the single DGX-2 running 2.4x faster than the pair of DGX-1s.