What is NVLink?: rawcompute.in Glossary

NVLink is NVIDIA's proprietary high-speed interconnect that enables direct GPU-to-GPU communication at bandwidths far exceeding PCIe, essential for multi-GPU training workloads.

NVLink is a point-to-point interconnect developed by NVIDIA that lets GPUs communicate with each other (and, in some generations, with the CPU) at speeds PCIe cannot match. Fourth-generation NVLink, used in the H100 SXM5, provides 900 GB/s of total bidirectional bandwidth per GPU, roughly 7x that of a PCIe Gen5 x16 link (128 GB/s bidirectional). Each H100 SXM5 exposes 18 NVLink 4.0 links at 50 GB/s each; combined with NVSwitch 3.0, all eight GPUs on an HGX baseboard get full all-to-all connectivity at the full 900 GB/s.
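The figures above follow from simple arithmetic; a quick sketch (the per-link and per-direction rates are the published NVLink 4.0 and PCIe Gen5 numbers):

```python
# Back-of-envelope check of the NVLink 4.0 vs PCIe Gen5 bandwidth figures.
NVLINK4_LINKS = 18         # NVLink 4.0 links per H100 SXM5
NVLINK4_PER_LINK = 50      # GB/s bidirectional per link (25 GB/s each way)
nvlink_total = NVLINK4_LINKS * NVLINK4_PER_LINK

PCIE_GEN5_X16 = 2 * 64     # GB/s bidirectional (64 GB/s each direction)

print(nvlink_total)                              # 900 GB/s
print(round(nvlink_total / PCIE_GEN5_X16, 1))    # ~7.0x
```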

NVLink is critical for training strategies that split a model across multiple GPUs, such as tensor parallelism and pipeline parallelism. Without a high-bandwidth inter-GPU fabric, gradient synchronisation and activation transfer become bottlenecks that leave the GPUs idle, reducing training efficiency. For inference, NVLink lets multiple GPUs shard a model and its KV cache across their combined memory, enabling serving of large-context models that exceed single-GPU VRAM capacity.

Why it matters when buying hardware

If you plan to run multi-GPU workloads, especially LLM training, NVLink-capable SKUs (SXM5 form factor) dramatically outperform their PCIe counterparts. PCIe-attached GPUs can still be useful for single-GPU inference or embarrassingly parallel tasks, but for anything requiring tight GPU-to-GPU communication, NVLink is non-negotiable. When purchasing from rawcompute.in, confirm whether the server chassis supports SXM5 baseboards with NVSwitch to get full NVLink bandwidth across all GPUs.

Need hardware advice?

Tell us your requirements and we'll recommend the right setup.

WhatsApp Us

Get a Quote

We respond within 4 business hours

Same-day response · No spam, ever · GST invoice