A New Standard in AI Performance
Customer spending on AI accelerators is projected to reach $500 billion by 2028, making accelerators a half-trillion-dollar market in just four years. The productivity gains and transformation AI is bringing to businesses are unparalleled, and business leaders know it; it’s why they’ve already invested billions in changing the way they work. Millions of people already rely on AMD Instinct™ accelerators every day through applications powered by popular AI models such as GPT-4, Llama 3.1 405B, and many of the one million+ open-source models on the Hugging Face platform.
That level of productivity is only going to skyrocket. In fact, with the launch of the new AMD Instinct™ MI325X accelerators, AMD is ensuring it happens sooner rather than later.
AMD Instinct™ MI325X Accelerators
Where Vast Memory Meets Leadership Performance
AMD Instinct™ MI325X accelerators set a new standard for generative AI and data center performance. Built on 3rd-generation AMD CDNA™ architecture, they’re designed to deliver exceptional performance and efficiency across a range of demanding AI tasks, including model training and inference.
Such intensive AI applications require a lot of memory, which is why each accelerator offers an industry-leading 256GB of next-gen HBM3e memory capacity and 6TB/s of bandwidth. Combined with the required processing power and broad datatype support, AMD Instinct MI325X accelerators deliver the levels of performance businesses need for virtually any AI solution.1
Compared to competing products, AMD Instinct MI325X accelerators deliver up to 1.4x the inference performance on models such as Mixtral 8x7B, Mistral 7B, and Meta Llama 3.1 70B.2,3,4
Alongside those performance gains, customers benefit from industry-leading memory capacity. Because each accelerator can hold more of a large language model, fewer GPUs and smaller clusters can achieve the same or better results than previous-generation products.5 The result: smaller deployment footprints, streamlined deployments, and contributions to energy savings. AMD Instinct MI325X accelerators are the clear choice for businesses that want extreme performance without an extreme TCO.
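As a rough illustration of the sizing math behind that claim, here is a minimal sketch following the methodology described in footnote 5 (model memory at the stated parameter count plus 10% overhead, compared against GPU-only memory); the 16-bit weights assumption (2 bytes per parameter) is added here purely for illustration:

```python
import math

def gpus_required(params_billion: float, gpu_mem_gb: float,
                  bytes_per_param: float = 2.0, overhead: float = 0.10) -> int:
    """Estimate how many GPUs are needed to hold a model's weights in GPU memory.

    Follows the footnote 5 approach: model memory at the stated parameter
    count plus 10% overhead, divided by per-GPU memory. The 2-bytes-per-
    parameter (16-bit weights) figure is an illustrative assumption.
    """
    model_mem_gb = params_billion * bytes_per_param        # weight memory in GB
    return math.ceil(model_mem_gb * (1.0 + overhead) / gpu_mem_gb)

# Llama 3.1 405B with 16-bit weights: 4x MI325X (256GB) vs. 7x H200 (141GB)
print(gpus_required(405, 256), gpus_required(405, 141))    # -> 4 7
```

Real deployments also need memory for the KV cache and activations, so actual GPU counts depend on batch size and context length; the point is simply that larger per-GPU memory capacity reduces the number of accelerators a given model requires.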
The AMD Instinct™ MI325X Platform
An Uncompromising Compute Leadership Foundation
Large language models and generative AI today require three things to deliver fast results: fast acceleration across multiple data types, large memory capacity and bandwidth to handle huge data sets, and high I/O bandwidth.
The platform built around these new accelerators delivers all three. The new industry-standard baseboard (UBB 2.0) hosts up to eight AMD Instinct™ MI325X accelerators with a combined 2TB of HBM3e memory to process even the most demanding AI models. With eight x16 PCIe® Gen 5 host I/O connections and AMD Infinity Fabric™ mesh technology providing direct connectivity between each accelerator, data bottlenecks are a thing of the past.
Compared to similar competitor platforms, the MI325X platform delivers 1.8x the memory capacity, 1.3x the memory bandwidth, and up to 1.4x higher inference performance.6,7,8
For customers looking to upgrade from existing AMD Instinct infrastructure, AMD Instinct MI325X accelerators offer drop-in compatibility with the AMD Instinct™ MI300X platform, keeping time to market swift and minimizing costly infrastructure changes.
Accelerator | Architecture | Memory | Memory Bandwidth | FP8 Performance | FP16 Performance
AMD Instinct™ MI325X | AMD CDNA™ 3 | 256GB HBM3e | 6 TB/s | 2.6 PF | 1.3 PF
AMD ROCm™ Platform
Accelerating AI Inferencing and Training with Open Software
AMD Instinct™ MI325X accelerators leverage the power of AMD ROCm™ software, the foundation of AMD accelerated computing, delivering incredible capabilities to users, whether they’re building next-gen AI applications, working with cutting-edge AI models, or optimizing complex simulations.
Customers opting for AMD accelerators can enjoy day-zero support for industry-standard frameworks including PyTorch and TensorFlow, simplifying AI model migration and deployment with minimal code changes. Additionally, the latest AMD ROCm release improves training performance by 1.8x and GPU inference performance by 2.4x on AMD Instinct accelerators, with optimized compilers, libraries, and runtime support, helping to ensure fast model convergence, accurate model predictions, and incredibly efficient GPU utilization.9,10
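As an illustration of what “minimal code changes” means in practice: ROCm builds of PyTorch expose AMD Instinct accelerators through PyTorch’s standard `torch.cuda` device interface (via HIP), so typical device-selection code runs as-is. A minimal sketch (the toy model below is purely illustrative):

```python
import torch

# On a ROCm build of PyTorch, AMD Instinct GPUs are reported through the
# standard torch.cuda interface, so existing device-selection code needs
# no changes.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4096, 4096).to(device)    # illustrative toy model
x = torch.randn(8, 4096, device=device)
with torch.no_grad():
    y = model(x)

# torch.version.hip is set on ROCm builds (None on CUDA builds).
print(f"device={device}, ROCm/HIP build: {torch.version.hip is not None}")
```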
Want to learn more about AMD Instinct™ MI325X accelerators? Head to AMD.com, or speak to your AMD representative for more information and availability.

Footnotes
- Calculations conducted by AMD Performance Labs as of September 26th, 2024, based on current specifications and/or estimation. The AMD Instinct™ MI325X OAM accelerator will have 256GB HBM3e memory capacity and 6 TB/s GPU peak theoretical memory bandwidth performance. Actual results based on production silicon may vary. The highest published results on the Nvidia Hopper H200 (141GB) SXM GPU accelerator resulted in 141GB HBM3e memory capacity and 4.8 TB/s GPU memory bandwidth performance. https://nvdam.widen.net/s/nb5zzzsjdf/hpc-datasheet-sc23-h200-datasheet-3002446. The highest published results on the Nvidia Blackwell HGX B100 (192GB) 700W GPU accelerator resulted in 192GB HBM3e memory capacity and 8 TB/s GPU memory bandwidth performance. The highest published results on the Nvidia Blackwell HGX B200 (192GB) GPU accelerator resulted in 192GB HBM3e memory capacity and 8 TB/s GPU memory bandwidth performance. Nvidia Blackwell specifications at https://resources.nvidia.com/en-us-blackwell-architecture. MI325-001A
- MI325-004: Based on testing completed on 9/28/2024 by AMD performance lab measuring text generated throughput for Mixtral-8x7B model using FP16 datatype. Test was performed using input length of 128 tokens and an output length of 4096 tokens for the AMD Instinct™ MI325X GPU accelerator and NVIDIA H200 SXM GPU accelerator. 1x MI325X at 1000W with vLLM performance Vs. 1x H200 at 700W with TensorRT-LLM v0.13. Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations. MI325-004
- MI325-005: Based on testing completed on 9/28/2024 by AMD performance lab measuring overall latency for Mistral-7B model using FP16 datatype. Test was performed using input length of 128 tokens and an output length of 128 tokens for the AMD Instinct™ MI325X GPU accelerator and NVIDIA H200 SXM GPU accelerator. Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations. MI325-005
- MI325-006: Based on testing completed on 9/28/2024 by AMD performance lab measuring overall latency for LLaMA 3.1-70B model using FP8 datatype. Test was performed using input length of 2048 tokens and an output length of 2048 tokens for the following configurations of AMD Instinct™ MI325X GPU accelerator and NVIDIA H200 SXM GPU accelerator. Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations. MI325-006
- MI325-003A: Calculated estimates based on GPU-only memory size versus memory required by the model at defined parameters plus 10% overhead. Calculations rely on published and sometimes preliminary model memory sizes. PaLM 1, Llama 3.1 405B, Mixtral 8x22B and Samba-1 results estimated on MI325X and H200 due to system/part availability.
Results (calculated), required GPUs (MI325X vs. H200):
PaLM-1 (540B): 5 vs. 9
Llama 3.1 (405B): 4 vs. 7
Mixtral 8x22B (141B): 2 vs. 3
Samba-1 (1T): 9 vs. 16
Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations.
- MI325-001A: Calculations conducted by AMD Performance Labs as of September 26th, 2024, based on current specifications and/or estimation. The AMD Instinct™ MI325X OAM accelerator will have 256GB HBM3e memory capacity and 6 TB/s GPU peak theoretical memory bandwidth performance. Actual results based on production silicon may vary. The highest published results on the Nvidia Hopper H200 (141GB) SXM GPU accelerator resulted in 141GB HBM3e memory capacity and 4.8 TB/s GPU memory bandwidth performance: https://nvdam.widen.net/s/nb5zzzsjdf/hpc-datasheet-sc23-h200-datasheet-3002446. The highest published results on the Nvidia Blackwell HGX B100 (192GB) 700W GPU accelerator resulted in 192GB HBM3e memory capacity and 8 TB/s GPU memory bandwidth performance. The highest published results on the Nvidia Blackwell HGX B200 (192GB) GPU accelerator resulted in 192GB HBM3e memory capacity and 8 TB/s GPU memory bandwidth performance. Nvidia Blackwell specifications at https://resources.nvidia.com/en-us-blackwell-architecture.
- MI325-002: Calculations conducted by AMD Performance Labs as of May 28th, 2024 for the AMD Instinct™ MI325X GPU resulted in 1307.4 TFLOPS peak theoretical half precision (FP16), 1307.4 TFLOPS peak theoretical Bfloat16 format precision (BF16), 2614.9 TFLOPS peak theoretical 8-bit precision (FP8), 2614.9 TOPs INT8 floating-point performance. Actual performance will vary based on final specifications and system configuration.
Published results on Nvidia H200 SXM (141GB) GPU: 989.4 TFLOPS peak theoretical half precision tensor (FP16 Tensor), 989.4 TFLOPS peak theoretical Bfloat16 tensor format precision (BF16 Tensor), 1,978.9 TFLOPS peak theoretical 8-bit precision (FP8), 1,978.9 TOPs peak theoretical INT8 floating-point performance. BFLOAT16 Tensor Core, FP16 Tensor Core, FP8 Tensor Core and INT8 Tensor Core performance were published by Nvidia using sparsity; for the purposes of comparison, AMD converted these numbers to non-sparsity/dense by dividing by 2, and these numbers appear above.
Nvidia H200 source: https://nvdam.widen.net/s/nb5zzzsjdf/hpc-datasheet-sc23-h200-datasheet-3002446 and https://www.anandtech.com/show/21136/nvidia-at-sc23-h200-accelerator-with-hbm3e-and-jupiter-supercomputer-for-2024
Note: Nvidia H200 GPUs have the same published FLOPs performance as H100 products https://resources.nvidia.com/en-us-tensor-core. MI325-002
- MI325-014: Based on testing completed on 10/08/2024 by AMD performance lab measuring text generated throughput for LLaMA 3.1-405B model using FP8 datatype. Test was performed using input length of 128 tokens and an output length of 2048 tokens for the following configurations of AMD Instinct™ MI325X 8xGPU platform and NVIDIA H200 HGX GPU platform: 8xGPU MI325X platform with vLLM performance vs. NVIDIA published results. MI325X 8xGPU platform configuration: Dell PowerEdge XE9680 with 2x Intel Xeon Platinum 8480+ Processors, 8x AMD Instinct MI325X (256GiB, 1000W) GPUs, Ubuntu 22.04, and a pre-release build of ROCm 6.3. Nvidia published results for TensorRT-LLM v0.13 (3039.7 output tokens/s) were captured from: https://github.com/NVIDIA/TensorRT-LLM/blob/v0.13.0/docs/source/performance/perf-overview.md. Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations. MI325-014
- MI300-61: Measurements conducted by AMD AI Product Management team on AMD Instinct™ MI300X GPU for comparing large language model (LLM) performance with optimization methodologies enabled and disabled as of 9/28/2024 on Llama 3.1-70B and Llama 3.1-405B and vLLM 0.5.5.
System Configurations:
AMD EPYC 9654 96-Core Processor, 8 x AMD MI300X, ROCm™ 6.1, Linux® 7ee7e017abe3 5.15.0-116-generic #126-Ubuntu® SMP Mon Jul 1 10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux, Frequency boost: enabled. Performance may vary based on factors including but not limited to different versions of configurations, vLLM, and drivers.
- MI300-62: Testing conducted by internal AMD Performance Labs as of September 29, 2024, comparing inference performance between ROCm 6.2 software and ROCm 6.0 software on systems with 8 AMD Instinct™ MI300X GPUs coupled with Llama 3.1-8B, Llama 3.1-70B, Mixtral-8x7B, Mixtral-8x22B, and Qwen 72B models. ROCm 6.2 with vLLM 0.5.5 performance was measured against ROCm 6.0 with vLLM 0.3.3, and tests were performed across batch sizes of 1 to 256 and sequence lengths of 128 to 2048.
Configurations:
1P AMD EPYC™ 9534 CPU server with 8x AMD Instinct™ MI300X (192GB, 750W) GPUs, Supermicro AS-8125GS-TNMR2, NPS1 (1 NUMA per socket), 1.5 TiB (24 DIMMs, 4800 mts memory, 64 GiB/DIMM), 4x 3.49TB Micron 7450 storage, BIOS version: 1.8, ROCm 6.2.0-00, vLLM 0.5.5, PyTorch 2.4.0, Ubuntu® 22.04 LTS with Linux kernel 5.15.0-119-generic.
vs.
1P AMD EPYC 9534 CPU server with 8x AMD Instinct™ MI300X (192GB, 750W) GPUs, Supermicro AS-8125GS-TNMR2, NPS1 (1 NUMA per socket), 1.5 TiB (24 DIMMs, 4800 mts memory, 64 GiB/DIMM), 4x 3.49TB Micron 7450 storage, BIOS version: 1.8, ROCm 6.0.0-00, vLLM 0.3.3, PyTorch 2.1.1, Ubuntu 22.04 LTS with Linux kernel 5.15.0-119-generic.
Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including but not limited to different versions of configurations, vLLM, and drivers.
DISCLAIMER: The information contained herein is for informational purposes only and is subject to change without notice. While every precaution has been taken in the preparation of this document, it may contain technical inaccuracies, omissions and typographical errors, and AMD is under no obligation to update or otherwise correct this information. Advanced Micro Devices, Inc. makes no representations or warranties with respect to the accuracy or completeness of the contents of this document, and assumes no liability of any kind, including the implied warranties of noninfringement, merchantability or fitness for particular purposes, with respect to the operation or use of AMD hardware, software or other products described herein. No license, including implied or arising by estoppel, to any intellectual property rights is granted by this document. Terms and limitations applicable to the purchase or use of AMD products are as set forth in a signed agreement between the parties or in AMD's Standard Terms and Conditions of Sale. GD-18u.
© 2024 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, EPYC, Instinct, ROCm, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective owners. Certain AMD technologies may require third-party enablement or activation. Supported features may vary by operating system. Please confirm with the system manufacturer for specific features. No technology or product can be completely secure.