AMD Radeon Instinct MI100 ‘CDNA GPU’ Alleged Performance Numbers Show It’s Faster Than NVIDIA’s A100 in FP32 Compute, Impressive Perf/Value

Alleged performance numbers and details of AMD’s next-generation CDNA-based Radeon Instinct MI100 accelerator have been leaked by AdoredTV. In an exclusive post, AdoredTV pits the upcoming HPC GPU against NVIDIA’s Volta and Ampere GPUs in performance benchmarks.

AMD Radeon Instinct MI100 ‘CDNA’ GPU Performance Benchmarks Leak Out, Allegedly Faster Than NVIDIA’s Ampere A100 In FP32 Compute With Better Perf/Value

AdoredTV claims that the slides it has received are from an official AMD Radeon Instinct MI100 presentation. The versions posted at the source appear to be modified, but the details are kept intact. In our previous post, we confirmed that the Radeon Instinct MI100 GPU was on its way to market in 2H 2020. The slides from AdoredTV shed more light on the launch plans and server configurations that we can expect from AMD and its partners in 2020 and beyond.

AMD Radeon Instinct MI100 1U Server Specs

First up, AMD is planning to unveil an HPC-specific 1U server featuring a 2P design with dual AMD EPYC CPUs, based on either the Rome or Milan generation. Each EPYC CPU will be connected to two Radeon Instinct MI100 accelerators through the 2nd Generation Infinity Fabric interconnect. The four GPUs will deliver a sustained 136 TFLOPs of FP32 (SGEMM) output, which works out to around 34 TFLOPs of FP32 compute per GPU. Each Radeon Instinct MI100 GPU will have a TDP of 300W.

Additional specifications include a total GPU PCIe bandwidth of 256 GB/s, made possible by the PCIe Gen 4 protocol. The combined memory bandwidth of the four GPUs is 4.9 TB/s, which suggests that AMD is using HBM2e DRAM dies (each GPU pumps out 1.225 TB/s of bandwidth). The combined memory pool is 128 GB, or 32 GB per GPU. This suggests that AMD is still using four HBM2 stacks per GPU, with each stack housing 8-Hi DRAM dies. It looks like XGMI won’t be offered on standard configurations and will be kept limited to specialized 1U racks.
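The per-GPU figures above follow directly from dividing the quoted rack totals across the four accelerators; a quick sanity check (all numbers are from the leaked slides, not official AMD specs):

```python
# Sanity-check the per-GPU figures quoted for the 1U server
# (leaked figures, not official AMD specifications).
num_gpus = 4
total_fp32_tflops = 136   # sustained FP32 (SGEMM) across the rack
total_hbm_bw_tbs = 4.9    # combined HBM bandwidth, TB/s
total_memory_gb = 128     # combined HBM capacity, GB

print(total_fp32_tflops / num_gpus)  # 34.0 TFLOPs per GPU
print(total_hbm_bw_tbs / num_gpus)   # 1.225 TB/s per GPU
print(total_memory_gb / num_gpus)    # 32.0 GB per GPU
```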

As far as availability is concerned, the 1U server with AMD EPYC (Rome / Milan) HPC CPUs is said to launch by December 2020 while an Intel Xeon variant is also expected to launch in February 2021.

AMD Radeon Instinct MI100 3U Server Specs

The second, 3U server is expected to launch in March 2021 and will offer even beefier specifications: 8 Radeon Instinct MI100 GPUs connected to two EPYC CPUs. Each group of four Instinct MI100s will be connected together through XGMI links (100 GB/s bi-directional) for a quad bandwidth of 1.2 TB/s. The eight Instinct accelerators would deliver a total of 272 TFLOPs of FP32 compute, 512 GB/s of PCIe bandwidth, 9.8 TB/s of HBM bandwidth, and 256 GB of DRAM capacity. The rack will have a rated power draw of 3kW.
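The 3U totals are simply the 1U per-GPU figures scaled to eight accelerators; a minimal check, again using the leaked numbers (the 64 GB/s per-GPU PCIe figure is implied by the 256 GB/s total quoted for the four-GPU 1U server):

```python
# Scale the per-GPU figures from the 1U server up to the 8-GPU 3U config
# (leaked figures, not official AMD specifications).
num_gpus = 8
fp32_per_gpu = 34        # TFLOPs (SGEMM)
hbm_bw_per_gpu = 1.225   # TB/s
memory_per_gpu = 32      # GB
pcie_bw_per_gpu = 64     # GB/s (implied by 256 GB/s across 4 GPUs)

print(num_gpus * fp32_per_gpu)     # 272 TFLOPs
print(num_gpus * hbm_bw_per_gpu)   # 9.8 TB/s
print(num_gpus * memory_per_gpu)   # 256 GB
print(num_gpus * pcie_bw_per_gpu)  # 512 GB/s
```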

AMD Radeon Instinct Accelerators 2020

| Accelerator Name | MI6 | MI8 | MI25 | MI50 | MI60 | MI100 |
|---|---|---|---|---|---|---|
| GPU Architecture | Polaris 10 | Fiji XT | Vega 10 | Vega 20 | Vega 20 | Arcturus |
| GPU Process Node | 14nm FinFET | 28nm | 14nm FinFET | 7nm FinFET | 7nm FinFET | 7nm FinFET |
| GPU Cores | 2304 | 4096 | 4096 | 3840 | 4096 | 7680 |
| GPU Clock Speed | 1237 MHz | 1000 MHz | 1500 MHz | 1725 MHz | 1800 MHz | ~1500 MHz |
| FP16 Compute | 5.7 TFLOPs | 8.2 TFLOPs | 24.6 TFLOPs | 26.5 TFLOPs | 29.5 TFLOPs | 185 TFLOPs |
| FP32 Compute | 5.7 TFLOPs | 8.2 TFLOPs | 12.3 TFLOPs | 13.3 TFLOPs | 14.7 TFLOPs | 23.1 TFLOPs |
| FP64 Compute | 384 GFLOPs | 512 GFLOPs | 768 GFLOPs | 6.6 TFLOPs | 7.4 TFLOPs | 11.5 TFLOPs |
| VRAM | 16 GB GDDR5 | 4 GB HBM1 | 16 GB HBM2 | 16 GB HBM2 | 32 GB HBM2 | 32 GB HBM2 |
| Memory Clock | 1750 MHz | 500 MHz | 945 MHz | 1000 MHz | 1000 MHz | 1200 MHz |
| Memory Bus | 256-bit | 4096-bit | 2048-bit | 4096-bit | 4096-bit | 4096-bit |
| Memory Bandwidth | 224 GB/s | 512 GB/s | 484 GB/s | 1 TB/s | 1 TB/s | 1.23 TB/s |
| Form Factor | Single Slot, Full Length | Dual Slot, Half Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length |
| Cooling | Passive | Passive | Passive | Passive | Passive | Passive |
| TDP | 150W | 175W | 300W | 300W | 300W | 300W |
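The FP32 figures in the table line up with the usual peak-throughput formula (2 FMA FLOPs per core per clock); plugging in the rumored Arcturus numbers as a check:

```python
# Peak FP32 TFLOPs = cores * 2 (one FMA = 2 FLOPs) * clock in MHz / 1e6
def peak_fp32_tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1e6

print(peak_fp32_tflops(7680, 1500))  # 23.04 -> matches the 23.1 TFLOPs listed for MI100 at ~1500 MHz
print(peak_fp32_tflops(4096, 1800))  # ~14.75 -> matches the 14.7 TFLOPs listed for MI60
```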

AMD’s Radeon Instinct MI100 ‘CDNA GPU’ Performance Numbers, An FP32 Powerhouse In The Making?

In terms of performance, the AMD Radeon Instinct MI100 is compared to NVIDIA’s Volta V100 and Ampere A100 GPU accelerators. Interestingly, the slides mention a 300W Ampere A100 accelerator even though no such configuration exists. This suggests the slides are based on a hypothesized A100 configuration rather than an actual variant; the A100 ships in two flavors, a 400W config in the SXM form factor and a 250W config in the PCIe form factor.

As per the benchmarks, the Radeon Instinct MI100 delivers around 13% better FP32 performance than the Ampere A100 and over a 2x performance increase versus the Volta V100. Perf/value is also compared, with the MI100 offering around 2.4x better value than the V100S and 50% better value than the Ampere A100. Performance scaling is also shown to be near-linear in ResNet with up to 32 GPUs, which is quite impressive.

AMD Radeon Instinct MI100 vs NVIDIA’s Ampere A100 HPC Accelerator (Image Credits: AdoredTV):

With that said, the slides also mention that AMD will offer much better performance and value in three specific segments: Oil & Gas, Academia, and HPC & Machine Learning. In the rest of the HPC workloads, such as FP64 compute, AI, and data analytics, NVIDIA’s A100 accelerator will offer superior performance. NVIDIA also holds the advantage of its Multi-Instance GPU architecture over AMD. The performance metrics show 2.5x better FP64 performance, 2x better FP16 performance, and twice the tensor performance, thanks to the latest-gen Tensor cores on the Ampere A100 GPU.

One thing that needs to be highlighted is that AMD hasn’t mentioned NVIDIA’s sparsity numbers anywhere in the benchmarks. With sparsity, NVIDIA’s Ampere A100 boasts up to 156 TFLOPs of horsepower, though it seems AMD wanted a like-for-like benchmark comparison against the Ampere A100. From the looks of it, the Radeon Instinct MI100 does seem to be a decent HPC offering, if the performance and value numbers hold up at launch.
