NVIDIA HGX H100 640GB 8-GPU Baseboard
$220,000.00
The Ultimate 8-GPU AI Supercomputing Baseboard
The NVIDIA HGX H100 combines eight H100 Tensor Core GPUs into a single, massive accelerator for the world’s most demanding AI and HPC workloads.
- Configuration: 8x NVIDIA H100 SXM5 GPUs
- Memory: 640GB HBM3 Total (80GB per GPU)
- Interconnect: NVSwitch & NVLink (900GB/s per GPU)
- Part Number: 935-24287-0001-000
- Ideal For: Training Massive LLMs, Generative AI, and Exascale HPC
12 in stock
The World’s Premier AI Supercomputing Platform
Transform your data center into an AI factory with the NVIDIA HGX H100. This 8-GPU baseboard is the engine behind the world’s most powerful supercomputers, designed to solve the most complex problems in AI, data analytics, and high-performance computing (HPC).
Featuring eight NVIDIA H100 Tensor Core GPUs interconnected via high-speed NVSwitch, this board delivers a massive 640GB of HBM3 memory and unified compute performance. It acts as a single, giant accelerator, enabling you to train massive Large Language Models (LLMs) like GPT-4 and run trillion-parameter inference with real-time responsiveness.
Key Features
- 8x H100 SXM5 GPUs: Integrates eight H100 Tensor Core GPUs on a single baseboard, delivering unprecedented density and compute power.
- 640GB HBM3 Memory: With 80GB of ultra-fast HBM3 memory per GPU, the system offers a combined 640GB capacity with unmatched bandwidth, eliminating data bottlenecks for the largest datasets.
- NVSwitch & NVLink Interconnect: The 4th Gen NVLink and NVSwitch technology connect all eight GPUs with 900 GB/s bandwidth per GPU, allowing them to function as a single, unified accelerator for massive model parallelism.
- Transformer Engine: Supercharges AI performance with FP8 precision, speeding up training by up to 9x and inference by up to 30x compared to the previous generation.
Technical Specifications
| Feature | Specification |
| --- | --- |
| Part Number | 935-24287-0001-000 |
| GPU Configuration | 8x NVIDIA H100 SXM5 Tensor Core GPUs |
| Total Memory | 640GB HBM3 (8x 80GB) |
| Memory Bandwidth | ~27 TB/s Aggregate |
| Interconnect | 4th Gen NVLink + NVSwitch (900GB/s per GPU) |
| Performance (FP8) | 32 PFLOPS |
| Form Factor | HGX Baseboard (SXM5) |
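The aggregate figures above follow directly from the per-GPU numbers. As a minimal sanity check, the sketch below derives them from NVIDIA's published per-GPU H100 SXM5 figures (3.35 TB/s HBM3 bandwidth; 3,958 TFLOPS FP8 with sparsity); those per-GPU constants are assumptions drawn from the public datasheet, not from this listing.

```python
# Derive the baseboard's aggregate specs from per-GPU figures.
# Per-GPU constants are assumed from NVIDIA's public H100 SXM5 datasheet.
NUM_GPUS = 8
HBM3_CAPACITY_GB = 80        # per-GPU HBM3 capacity
HBM3_BANDWIDTH_TBPS = 3.35   # per-GPU HBM3 bandwidth, TB/s (assumed)
FP8_TFLOPS_SPARSE = 3958     # per-GPU FP8 throughput with sparsity (assumed)

total_memory_gb = NUM_GPUS * HBM3_CAPACITY_GB              # 640 GB total
aggregate_bw_tbps = NUM_GPUS * HBM3_BANDWIDTH_TBPS         # ~26.8 TB/s (~27 TB/s)
aggregate_fp8_pflops = NUM_GPUS * FP8_TFLOPS_SPARSE / 1000 # ~31.7 PFLOPS (~32)

print(total_memory_gb, round(aggregate_bw_tbps, 1), round(aggregate_fp8_pflops, 1))
```

This is why the table's "~27 TB/s" and "32 PFLOPS" entries are rounded aggregates rather than exact sums.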
Ideal For:
- Massive Model Training: The standard for training foundation models (LLMs, Generative AI).
- Real-Time Inference: Deploying trillion-parameter models with low latency.
- Exascale HPC: Climate modeling, genomics, and drug discovery.