Supporting the Future of AI and HPC Computing
AI and high-performance computing (HPC) continue to transform industries everywhere, enabling automation, improving decision making, and making new products and services possible. Across nearly every field and every industry, new opportunities point toward better productivity, stronger business outcomes, and a brighter future.
However, the insatiable demand for compute is driving requirements for more powerful hardware. As more businesses build systems large enough to deliver generative AI and machine learning at a scale that supports millions of users, concern shifts to efficiency and space: how do customers configure solutions that require many components, many systems, and considerable energy?
For customers looking to take advantage of AI and HPC in their business, AMD Instinct™ accelerators are here to make it happen, offering incredible performance without leaps in server size or energy use, enabled by architectural revolutions and product innovations. AMD is pleased to announce availability of the AMD Instinct MI300X accelerator – a GPU with the power customers need to meet the most demanding AI and HPC workloads, whatever industry they serve.
Accelerating AI and HPC Into the Exascale Era
Based on the AMD CDNA™ architecture, AMD Instinct accelerators redefine computing for the modern business. Engineered from the ground up for the exascale era, the architecture delivers huge gains in performance compared to previous-generation GCN-based products such as AMD Radeon™ Instinct accelerators.1 Now, with a new generation of AMD Instinct products, that leap in performance continues to grow.
AMD Instinct MI300 Series accelerators, designed with the 3rd Gen AMD CDNA architecture, were revealed earlier this year with the introduction of the AMD Instinct™ MI300A APU accelerator, the world’s first APU built specifically for AI and HPC workloads. Engineered for breakthrough density and efficiency, the AMD Instinct MI300A combines CPU, GPU, and high-bandwidth memory (HBM3) in one APU for versatile performance across a range of cutting-edge workloads that benefit from hardware acceleration.
Now, as AI continues to grow in both demand and possible applications, the AMD Instinct MI300X GPU accelerator joins existing AMD products in the market to deliver the level of performance customers seek. Forgoing CPU cores, the AMD Instinct MI300X accelerator focuses solely on delivering raw GPU power, with the ability to pack up to eight GPUs into one node. The result? Incredible performance and HBM3 memory capacity, supported by leading memory bandwidth, to help take longstanding HPC workloads and the recent explosion of generative AI compute demands to new heights.2
AMD Instinct MI300X accelerators offer 192 GB of HBM3 memory – roughly 2.4x the capacity of competitor products – supported by up to 5.3 TB/s of peak memory bandwidth, roughly 1.6x the bandwidth of available competitor products.2
Efficiency at Scale
As AI workloads scale, the hardware powering them must scale as well. Space quickly comes at a premium as companies grow to meet demand, and it becomes the bottleneck customers must overcome.
With this in mind, AMD Instinct MI300X accelerators are offered on an industry-standard OCP Universal Baseboard (UBB) platform design – a drop-in solution that lets customers combine eight GPUs in a single, performance-driven node with a fully connected peer-to-peer ring design and a total of 1.5 TB of HBM3 memory in one platform: a performance-dense solution for any AI or HPC workload deployment.
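As a quick sanity check (a sketch, not part of the original announcement), the platform's quoted 1.5 TB figure follows directly from eight MI300X accelerators at 192 GB of HBM3 each:

```python
# Back-of-the-envelope check of the eight-GPU UBB platform memory figure.
GPUS_PER_NODE = 8        # eight MI300X OAMs on the UBB platform
HBM3_PER_GPU_GB = 192    # HBM3 capacity per accelerator, in GB

total_gb = GPUS_PER_NODE * HBM3_PER_GPU_GB
print(f"{total_gb} GB ≈ {total_gb / 1000:.1f} TB")  # 1536 GB ≈ 1.5 TB
```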
AMD ROCm™: The Open Software Platform Full of AI Possibility
AMD ROCm™ is the industry’s only open software platform for GPU computing, freeing customers from being locked into any one vendor’s limited options. With an open and portable software platform, customers gain architectural flexibility and the freedom to do more with their hardware.
AMD is helping drive possibility forward in the AI space with innovative products like the AMD Instinct MI300X accelerator. It’s time to learn more about AMD Instinct accelerators: an open ecosystem, an increasingly broad range of powerful products for server users, and tailored solutions built on dedicated and adaptable architectures. Unleash the possibilities in AI and high-performance computing today.
Reach out to your AMD representative today for more information, or to learn more visit the AMD Instinct Hub.
Footnotes
- MI100-04: Calculations performed by AMD Performance Labs as of Sep 18, 2020 for the AMD Instinct™ MI100 accelerator at 1,502 MHz peak boost engine clock resulted in 184.57 TFLOPS peak theoretical half precision (FP16) and 46.14 TFLOPS peak theoretical single precision (FP32) matrix floating-point performance. The results calculated for Radeon Instinct™ MI50 GPU at 1,725 MHz peak engine clock resulted in 26.5 TFLOPS peak theoretical half precision (FP16) and 13.25 TFLOPS peak theoretical single precision (FP32) matrix floating-point performance. Server manufacturers may vary configuration offerings yielding different results.
- MI300-05A: Calculations conducted by AMD Performance Labs as of May 17, 2023, for the AMD Instinct™ MI300X OAM accelerator 750W (192 GB HBM3) designed with AMD CDNA™ 3 5nm FinFET process technology resulted in 192 GB HBM3 memory capacity and 5.218 TB/s sustained peak memory bandwidth performance. The MI300X memory bus interface is 8,192 bits and the memory data rate is 5.6 Gbps, for total sustained peak memory bandwidth of 5.218 TB/s (8,192 bits memory bus interface * 5.6 Gbps memory data rate / 8) * 0.91 delivered adjustment. The highest published results on the NVIDIA Hopper H100 (80GB) SXM GPU accelerator resulted in 80GB HBM3 memory capacity and 3.35 TB/s GPU memory bandwidth performance.
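The sustained-bandwidth figure in the footnote above can be reproduced from the stated formula; the sketch below (not part of the original article) simply restates that arithmetic:

```python
# Reproduce the footnote's sustained-bandwidth arithmetic for the MI300X.
BUS_WIDTH_BITS = 8192    # memory bus interface, in bits
DATA_RATE_GBPS = 5.6     # memory data rate, Gbps per pin
DELIVERED_ADJ = 0.91     # delivered-bandwidth adjustment factor

peak_gbs = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8    # bits -> bytes, GB/s
sustained_tbs = peak_gbs * DELIVERED_ADJ / 1000   # GB/s -> TB/s
print(f"{sustained_tbs:.3f} TB/s")  # 5.218 TB/s
```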