Powering Discovery at Exascale

LEADERSHIP PERFORMANCE AT ANY SCALE

From single-server solutions to the world’s largest exascale-class supercomputers¹, AMD Instinct™ accelerators are uniquely suited to power even the most demanding AI and HPC workloads in your data center.

Get more done, more efficiently, with exceptional compute performance, large memory density, high bandwidth memory, and support for specialized data formats.



UNDER THE HOOD

AMD Instinct™ accelerators are built on AMD CDNA™ architecture, which offers Matrix Core Technologies and supports a broad range of precisions – from highly efficient INT8 and FP8 (including sparsity support for AI with AMD CDNA™ 3) to the FP64 demanded by HPC.
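To illustrate why low-precision formats such as INT8 matter for AI, here is a minimal pure-Python sketch of symmetric INT8 quantization – the kind of reduced-precision representation Matrix Core Technologies accelerate. The function names are illustrative, not AMD library APIs.

```python
# Illustrative sketch (not AMD code): symmetric per-tensor INT8 quantization.
def quantize_int8(values):
    """Map float values onto the int8 range [-127, 127] with one shared scale."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from int8 codes."""
    return [q * scale for q in quantized]

weights = [0.5, -1.2, 0.03, 0.9]
q, s = quantize_int8(weights)
approx = dequantize(q, s)
```

Each value is stored in 8 bits instead of 32, trading a small, bounded rounding error (at most half the scale) for a 4x reduction in memory and bandwidth – which is why hardware support for these formats pays off in inference workloads.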


AMD INSTINCT™ PORTFOLIO

MI300 SERIES

AMD INSTINCT™ MI300X ACCELERATOR

Designed to deliver leadership memory capacity and bandwidth, AMD Instinct™ MI300X accelerators are ideal for generative AI and HPC applications. They deliver nearly 40% more compute units², 1.5x more memory capacity, and 1.7x more peak theoretical memory bandwidth³ than previous-generation AMD Instinct™ MI250X accelerators.

  • 5.3 TB/s peak theoretical memory bandwidth
  • Up to 896 GB/s AMD Infinity Fabric™ Bandwidth
  • 304 GPU compute units
  • 192 GB HBM3 memory

AMD INSTINCT™ MI300A APU

The world’s first data center accelerated processing units (APUs) for HPC and AI leverage 3D packaging and the 4th Gen AMD Infinity Architecture to deliver leadership performance on critical workloads at the convergence of HPC and AI. They combine the power of AMD Instinct™ accelerators and AMD EPYC™ processors with shared memory to enable enhanced efficiency, flexibility, and programmability.

  • 228 GPU compute units
  • 24 “Zen 4” x86 CPU cores
  • 128 GB unified HBM3 memory
  • 5.3 TB/s peak theoretical memory bandwidth
  • 5nm and 6nm process technology

AMD INSTINCT™ MI300X PLATFORM

This leadership generative AI platform integrates 8 fully connected MI300X GPU OAM modules onto an industry-standard OCP design via 4th-Gen AMD Infinity Fabric™ links, delivering up to 1.5 TB of HBM3 capacity for low-latency AI processing. Ready to deploy, it is designed to accelerate time-to-market and reduce development costs when adding AMD Instinct™ MI300X accelerators to existing AI rack and server infrastructure.

  • 42.4 TB/s peak theoretical aggregate memory bandwidth
  • 8 MI300X GPU OAM modules
  • 1.5 TB total HBM3 memory
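The platform-level figures above follow directly from the per-GPU MI300X numbers listed earlier on this page. A quick sanity check, using only values stated here:

```python
# Per-GPU MI300X figures from this page
gpus = 8
hbm3_per_gpu_gb = 192          # GB of HBM3 per accelerator
bandwidth_per_gpu_tbs = 5.3    # TB/s peak theoretical memory bandwidth

# Platform totals for 8 fully connected OAM modules
total_memory_tb = gpus * hbm3_per_gpu_gb / 1000          # ~1.5 TB total HBM3
aggregate_bandwidth_tbs = gpus * bandwidth_per_gpu_tbs   # 42.4 TB/s aggregate
```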

MI200 SERIES

AMD INSTINCT™ MI250 AND MI250X ACCELERATOR

Delivers outstanding performance to power some of the world’s top supercomputers for HPC and AI.

View specs

AMD INSTINCT™ MI210 ACCELERATOR

Powers enterprise, research, and academic HPC and AI workloads for single-server solutions and more.

View specs

AMD ROCm™ Software

This open software stack includes a broad set of programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development – targeting AMD Instinct™ accelerators.

Accelerate AI workloads with AMD

Download our White Paper to find out how the AMD CDNA™ 3 Architecture powers the AMD Instinct™ MI300 accelerators to deliver the highest performance, efficiency, and programmability.


Case Studies

KT Cloud

Korean cloud computing company KT Cloud is unleashing the possibilities of AI with AMD Instinct™ MI250 accelerators to build a cost-effective and scalable AI cloud service.

Read case study

LUMI Supercomputer

Learn how AMD Instinct™ MI250 accelerators are powering LUMI to advance groundbreaking research in climate change modeling, early cancer detection, and more.

Watch video

Oak Ridge National Laboratory - CHOLLA

Step into an alternate universe! Using AMD Instinct™ accelerators, the Cholla CAAR team is simulating the entire Milky Way.

Read case study

Oak Ridge National Laboratory - PIConGPU

Leveraging the ROCm platform, ORNL’s PIConGPU taps the immense compute capacity of AMD Instinct™ MI200 accelerators to advance radiation therapy, high-energy physics, and photon science.

Read case study
  1. Top 500 list, June 2023
  2. MI300-15: The AMD Instinct™ MI300X (750W) accelerator has 304 compute units (CUs), 19,456 stream cores, and 1,216 Matrix cores. The AMD Instinct™ MI250 (560W) accelerators have 208 compute units (CUs), 13,312 stream cores, and 832 Matrix cores. The AMD Instinct™ MI250X (500W/560W) accelerators have 220 compute units (CUs), 14,080 stream cores, and 880 Matrix cores.
  3. MI300-13: Calculations conducted by AMD Performance Labs as of November 7, 2023, for the AMD Instinct™ MI300X OAM accelerator 750W (192 GB HBM3) designed with AMD CDNA™ 3 5nm FinFET process technology resulted in 192 GB HBM3 memory capacity and 5.325 TB/s peak theoretical memory bandwidth performance. The MI300X memory bus interface is 8,192 bits (1,024 bits x 8 die) and the memory data rate is 5.2 Gbps, for a total peak memory bandwidth of 5.325 TB/s (8,192-bit memory bus interface * 5.2 Gbps memory data rate / 8).
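The footnoted figures can be reproduced directly from the stated bus width, data rate, and compute-unit counts. A short check, using only numbers given in the footnotes above:

```python
# MI300X peak memory bandwidth (footnote MI300-13):
bus_width_bits = 8192      # 1,024 bits x 8 HBM3 die
data_rate_gbps = 5.2       # Gbps per pin
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8   # bits -> bytes: ~5,324.8 GB/s
bandwidth_tbs = bandwidth_gbs / 1000                  # ~5.325 TB/s

# "Nearly 40% more compute units" (footnote MI300-15):
mi300x_cus, mi250x_cus = 304, 220
cu_uplift = mi300x_cus / mi250x_cus - 1               # ~0.38, i.e. ~38% more CUs
```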