Meet the New AMD Instinct™ MI325X Accelerators

The AMD Instinct™ MI325X GPU accelerator sets new standards in AI performance with 3rd Gen AMD CDNA™ architecture, delivering incredible performance and efficiency for training and inference. With industry-leading 256 GB HBM3E memory and 6 TB/s bandwidth, it optimizes performance and helps reduce TCO.1

AMD Instinct™ MI325X Accelerators

Leadership Performance at Any Scale

AMD Instinct™ accelerators enable leadership performance for the data center, at any scale—from single-server solutions up to the world’s largest, Exascale-class supercomputers.2

They are uniquely well-suited to power even the most demanding AI and HPC workloads, offering exceptional compute performance, large memory capacity, high memory bandwidth, and support for specialized data formats.


Under the Hood

AMD Instinct accelerators are built on AMD CDNA™ architecture, which offers Matrix Core Technologies and support for a broad range of precision capabilities—from the highly efficient INT8 and FP8 (including sparsity support for AI with AMD CDNA 3), to the most demanding FP64 for HPC.
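To see why reduced-precision support matters, consider what a low-precision format does in principle. The sketch below is a vendor-neutral illustration (not the CDNA hardware path, which performs these operations in Matrix Core units): symmetric INT8 quantization maps 32-bit floats into 8-bit integers, cutting memory traffic roughly 4x in exchange for a small rounding error.

```python
def quantize_int8(values):
    """Map floats into [-127, 127] using a single shared scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate float values from the INT8 representation."""
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)  # close to the originals, within rounding error
```

FP8 follows the same trade-off with a floating-point rather than integer encoding, and sparsity support lets the hardware skip zero-valued operands entirely.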

Portfolio

MI300 Series


AMD Instinct APU

Combines AMD Instinct accelerators and AMD EPYC processors with shared memory for enhanced flexibility, efficiency, and programmability

MI200 Series


AMD Instinct MI250X Accelerator

Powers some of the world’s top supercomputers for HPC and AI

AMD Instinct MI250 Accelerator

Delivers outstanding performance for enterprise, research, and academic HPC and AI workloads

AMD Instinct MI210 Accelerator

Powers enterprise, research, and academic HPC and AI workloads for single-server solutions and more

AMD ROCm™ Software

AMD ROCm™ software includes a broad set of programming models, tools, compilers, libraries, and runtimes for AI models and HPC workloads targeting AMD Instinct accelerators.

Accessible in the Cloud

AMD Instinct accelerators are available in the cloud to meet the scalability, flexibility, and performance demands of complex compute workloads like AI.

Microsoft Azure

The Azure ND MI300X v5, powered by 8x AMD Instinct™ MI300X accelerators, is optimized on Azure for AI training and inference.

IBM Cloud

An enterprise cloud platform designed for even the most regulated industries, delivering a highly resilient, performant, secure, and compliant cloud.

Vultr

Independent cloud platform with 32 global data centers and AMD Instinct™ MI300X GPUs.

Cirrascale Cloud Services

Supporting the latest AMD Instinct™ accelerators, the Cirrascale AI Innovation Cloud is purpose-built to power the most demanding AI and HPC workloads. 

TensorWave

Offering AMD Instinct™ Series GPUs and a best-in-class inference engine, TensorWave's AI cloud platform is a top choice for training, fine-tuning, and inference.

Nscale

Access AMD Instinct™ MI300X & MI250X GPUs with Nscale's AI cloud, built for compute-intensive workloads.

Hot Aisle

On-demand high-performance computing with top-tier bare metal compute solutions based on AMD Instinct accelerators.

Aligned

Scale your custom AI workloads with ease using Aligned’s cloud, featuring AMD Instinct™ accelerators, for unmatched performance and efficiency.

Baionity

Baionity leverages AMD Instinct GPUs and an ISO-certified data center to deliver efficient, secure, and scalable AI workloads while ensuring data integrity and compliance with industry standards.

Evaluate AMD Instinct™ in the Cloud

The AMD Instinct Evaluation Program allows startups and enterprises to evaluate their AI and HPC workload requirements on AMD Instinct™ GPUs and the AMD ROCm™ software stack through AMD partners.

Find Solutions

Find a partner offering AMD Instinct accelerator-based solutions.

Case Studies

Resources

Documentation

Find solution briefs, white papers, programmer references, and more documentation for AMD Instinct accelerators. 

Stay Informed

Sign up to receive the latest data center news and server content.

Footnotes
  1. MI325-001A - Calculations conducted by AMD Performance Labs as of September 26th, 2024, based on current specifications and/or estimation. The AMD Instinct™ MI325X OAM accelerator will have 256GB HBM3E memory capacity and 6 TB/s GPU peak theoretical memory bandwidth performance. Actual results based on production silicon may vary.
    The highest published results on the NVIDIA Hopper H200 (141GB) SXM GPU accelerator resulted in 141GB HBM3E memory capacity and 4.8 TB/s GPU memory bandwidth performance. https://nvdam.widen.net/s/nb5zzzsjdf/hpc-datasheet-sc23-h200-datasheet-3002446
    The highest published results on the NVIDIA Blackwell HGX B100 (192GB) 700W GPU accelerator resulted in 192GB HBM3E memory capacity and 8 TB/s GPU memory bandwidth performance.
    The highest published results on the NVIDIA Blackwell HGX B200 (192GB) GPU accelerator resulted in 192GB HBM3E memory capacity and 8 TB/s GPU memory bandwidth performance.
    NVIDIA Blackwell specifications at https://resources.nvidia.com/en-us-blackwell-architecture
  2. TOP500 list, June 2024