NVIDIA DGX A100 is a versatile 5 petaFLOPS AI system featuring eight NVIDIA A100 Tensor Core GPUs, consolidating training, inference, and analytics into a single AI infrastructure.
$149,000
✓ Networking:
8x NVIDIA ConnectX-7 200Gb/s InfiniBand or 8x NVIDIA ConnectX-6 VPI 200Gb/s InfiniBand
2x NVIDIA ConnectX-7 VPI 10/25/50/100/200 Gb/s Ethernet or 2x NVIDIA ConnectX-6 VPI 10/25/50/100/200 Gb/s Ethernet
✓ Storage:
OS: 2x 1.92TB M.2 NVMe drives
Internal: 30TB (8x 3.84TB) U.2 NVMe drives
The Universal System for Every AI Workload
DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor, replacing legacy compute infrastructure with a single, unified system. DGX A100 also offers the unprecedented ability to deliver fine-grained allocation of computing power, using the Multi-Instance GPU capability in the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads. This ensures that the largest and most complex jobs are supported, along with the simplest and smallest.
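As a rough sketch of how an administrator might inspect each GPU's memory and MIG state on a system like this, the example below uses the pynvml bindings from the nvidia-ml-py package. The package choice and the MIG-mode query are assumptions about the environment, not part of the DGX A100 documentation; actual MIG partitioning is done through NVIDIA's standard driver tooling.

# Minimal sketch: enumerate GPUs and report memory size and MIG mode.
# Assumes the nvidia-ml-py package (pynvml) and an NVML-capable driver
# are installed; nothing here is DGX-specific.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        try:
            # Returns (current, pending) MIG mode on MIG-capable GPUs.
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig = "not supported"
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB total, MIG {mig}")
finally:
    pynvml.nvmlShutdown()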
8X NVIDIA A100 GPUS WITH UP TO 640 GB TOTAL GPU MEMORY
12 NVLinks/GPU, 600 GB/s GPU-to-GPU Bi-directional Bandwidth.
6X NVIDIA NVSWITCHES
4.8 TB/s Bi-directional Bandwidth, 2X More than Previous Generation NVSwitch.
10x MELLANOX CONNECTX-6 200Gb/s NETWORK INTERFACES
500 GB/s Peak Bi-directional Bandwidth.
DUAL 64-CORE AMD CPUs AND UP TO 2 TB SYSTEM MEMORY
3.2X More Cores to Power the Most Intensive AI Jobs.
Up to 30 TB GEN4 NVME SSD
50 GB/s Peak Bandwidth, 2X Faster than Gen3 NVMe SSDs.