EXETON

Reference Architecture

Pre-validated infrastructure designs to accelerate your deployment and reduce risk.

Architecture Library

AI Training · NVIDIA · InfiniBand

NVIDIA DGX SuperPOD

Turnkey AI supercomputing infrastructure with DGX systems, InfiniBand networking, and high-performance storage — validated for large-scale AI training.

  • Up to 32 DGX nodes
  • 400Gb/s InfiniBand
  • Petabyte-scale storage
AI Inference · Multi-GPU · Low Latency

GPU Cluster for AI Inference

Optimized multi-node GPU cluster architecture for low-latency AI inference workloads with high-throughput networking and load balancing.

  • 8–64 GPU nodes
  • 100Gb/s Ethernet
  • Kubernetes orchestration
HPC · Scientific Computing · Parallel I/O

HPC Cluster with Parallel Storage

High-performance computing cluster designed for scientific workloads, with DDN or VAST parallel file systems.

  • AMD EPYC CPUs
  • HDR InfiniBand
  • Lustre / GPFS
Hybrid Cloud · AI Platform · Kubernetes

Hybrid Cloud AI Platform

Reference architecture for hybrid on-prem and cloud AI deployments, with seamless workload portability and unified management.

  • On-prem GPU nodes
  • Cloud burst capability
  • MLOps pipeline
Cooling · Data Center · High Density

Data Center Liquid Cooling

Complete liquid cooling reference design for high-density GPU racks, covering CDU placement, piping, and thermal management.

  • Direct-to-chip cooling
  • Rear-door heat exchangers
  • 100kW+ per rack
Storage · NVMe · Data Pipeline

Enterprise Storage Tier Architecture

Multi-tier storage architecture combining NVMe flash and HDD for AI data pipelines, with automated tiering and data lifecycle management.

  • All-flash primary
  • NVMe-oF fabric
  • Object storage archive

Need a Custom Architecture?

Our solution architects can design a bespoke reference architecture tailored to your workload and scale requirements.