Find the Right AMD EPYC Server CPU

Breakthrough designs manufactured in the world’s most advanced fabs deliver the x86 performance, energy efficiency, and cost efficiency required for today’s AI-driven data center demands. Choose from high-density CPUs for AI, high-frequency CPUs for low-latency workloads, and memory-optimized CPUs for database and simulation workloads.

AMD EPYC™ 4005 Series

Small Business / Hosted Service Providers

Designed for small business and hosted service providers, with up to 16 “Zen 5” cores for small servers.

AMD EPYC™ 7003 Series

All EPYC Server CPU Specs

Compare AMD EPYC Server CPUs, sort by features, and access complete details for every SKU.



Tools to Help You Choose the Right AMD EPYC Server CPU

AMD EPYC Server CPU Tools

Compare AMD EPYC Server CPUs and Intel Xeon CPUs, calculate potential greenhouse gas emissions, and estimate total cost of ownership.

AMD EPYC Server CPU Cloud Advisory Tools

Get advice on instance types, cost comparisons, and performance data with AMD EPYC Server CPU-based offerings from leading cloud providers.

The Competition Can’t Touch AMD EPYC Server CPU Performance

Shrink Data Centers

Up to 7x more than Intel® Xeon®

A single AMD EPYC™ 9005 CPU-based server can do the work of more than seven 2019-era Intel® Xeon® Platinum servers.1
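For readers who want to see how the “more than seven” figure falls out of the SPECrate® scores cited in footnote 1, here is a minimal sketch of the consolidation arithmetic. The variable names are illustrative only; the scores of 391 and 3100 and the 391,000-unit performance target are taken from footnote 1.

```python
import math

# SPECrate 2017_int_base scores and performance target cited in footnote 1
legacy_xeon_8280_score = 391     # 2P Intel Xeon Platinum 8280 (2019-era) server
epyc_9965_score = 3100           # 2P AMD EPYC 9965 (192-core) server
performance_target = 391_000     # total SPECrate units the fleet must deliver

# Servers required to reach the same aggregate performance target
legacy_servers_needed = math.ceil(performance_target / legacy_xeon_8280_score)  # 1000
epyc_servers_needed = math.ceil(performance_target / epyc_9965_score)           # 127

consolidation_ratio = legacy_servers_needed / epyc_servers_needed
print(f"Legacy Xeon servers: {legacy_servers_needed}, EPYC 9965 servers: {epyc_servers_needed}")
print(f"Consolidation ratio: {consolidation_ratio:.1f}x")  # ~7.9x, i.e. "more than seven"
```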

AMD EPYC Server CPUs Outperform Intel® Xeon® Processors

From data center consolidation to raw integer performance, 5th Generation AMD EPYC Server CPUs outmatch Intel® Xeon® processors, making them the superior choice for refreshes and new build-outs.

For Hybrid Cloud, Stick With x86 on AMD EPYC Server CPU Instances

Cloud instances powered by Arm® processors may look good on paper, but the actual cost and performance numbers may not add up. The expense of porting applications and managing multiple code bases — plus underwhelming real-world performance — can make adding Arm to your cloud an inadvisable risk. 

Up to 75% more per dollar than Arm-based AWS Graviton2

Get up to 75% better performance per dollar than Arm-based AWS Graviton with AMD EPYC Server CPUs.2

AMD EPYC Server CPUs Are the Best CPUs for Enterprise AI3

In the cloud and on-premises, in large and small deployments, AMD EPYC Server CPUs offer power-competitive, cost-efficient, flexible solutions for every step of your AI journey. Compared to the 6th Gen Intel Xeon 6980P, the 5th Gen AMD EPYC 9965 delivers significantly more processing power.

Up to 1.33X more throughput for language models

5th Gen AMD EPYC 9965 delivers up to 1.33x more inference throughput than the 6th Gen Intel Xeon 6980P in the Llama3.1-8B translation use case.4

Up to 1.93X more throughput for machine learning workloads

5th Gen AMD EPYC 9965 delivers up to 1.93x more throughput than the 6th Gen Intel Xeon 6980P on XGBoost.5

Up to 1.7X more throughput for end-to-end AI benchmarks

5th Gen AMD EPYC 9965 delivers up to 1.7x more general AI (TPCx-AI) throughput than the 6th Gen Intel Xeon 6980P.6

AMD EPYC Server CPU FAQs

Do AMD EPYC Server CPUs have the highest core count available in x86 server processors?

Yes. 5th Gen AMD EPYC Server CPUs currently have the highest core count available in x86 server processors. With 192 cores, the AMD EPYC 9965 Server CPU can support 33% more virtual CPUs (vCPUs) than the highest-core-count Intel® Xeon® 6 “Sierra Forest” processor, which offers 144 cores (1 core per vCPU).

How do 5th Gen AMD EPYC Server CPUs compare to Intel Xeon 6 CPUs?

When comparing 5th Gen AMD EPYC Server CPUs to Intel Xeon 6 CPUs, AMD EPYC Server CPUs offer better general-purpose computing performance, end-to-end AI performance, and power efficiency. In general-purpose computing, AMD EPYC Server CPUs beat Xeon by up to 35%.7 In energy consumption, AMD EPYC Server CPUs are up to 66% more power efficient than Xeon.8 In end-to-end AI performance, AMD EPYC Server CPUs beat Xeon by up to 70%.9
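For readers who want to see where those percentages come from, here is a minimal sketch that reproduces them from the scores published in footnotes 7, 8, and 9. The variable names and workload labels are illustrative, not from AMD’s tooling; the score pairs are taken from the footnotes.

```python
# Scores for 2P AMD EPYC 9965 vs. 2P Intel Xeon 6980P, as cited in footnotes 7-9
comparisons = {
    "General-purpose compute (SPECrate 2017_int_base, est.)": (2160, 1600),    # footnote 7
    "Power efficiency (SPECpower_ssj 2008, ssj_ops/watt)": (35920, 21679),     # footnote 8
    "End-to-end AI (TPCx-AI derived throughput, AIUCpm)": (6067.53, 3550.50),  # footnote 9
}

for workload, (epyc_score, xeon_score) in comparisons.items():
    uplift_pct = (epyc_score / xeon_score - 1) * 100
    print(f"{workload}: EPYC 9965 ahead by ~{uplift_pct:.0f}%")
# Prints roughly 35%, 66%, and 71%, in line with the "up to" figures in the answer above.
```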

Is it difficult to migrate from Intel Xeon to AMD EPYC?

No, it’s not difficult to migrate from Intel to AMD. Both AMD EPYC Server CPUs and Intel Xeon CPUs are built on the x86 architecture, which makes it easy to migrate at the application level. To migrate virtual machines (VMs), AMD offers the VMware Architecture Migration Tool to help automate the process.

Are AMD EPYC Server CPUs a good fit for virtual machines, databases, and enterprise applications?

Yes, AMD EPYC Server CPUs offer excellent performance for virtual machines (VMs), databases, and other enterprise applications. 5th Gen AMD EPYC Server CPUs are a great fit for VMs due to their high core density and high performance for cloud workloads. AMD EPYC Server CPUs bring fast speeds to database workloads to support agentic AI and analytics. And with high performance for general computing and x86 compatibility, enterprise applications excel on EPYC.

How do I choose the right AMD EPYC Server CPU?

Choosing an AMD EPYC Server CPU depends on your needs for performance, power efficiency, and price. The latest EPYC family offers a wide range of choices, from 8 to 192 cores and TDPs from 155W to 500W. Find the best CPU for your needs using the AMD EPYC Server CPU Tools.

Which server manufacturers offer systems with AMD EPYC Server CPUs?

The world’s leading data center hardware manufacturers, including Cisco, Dell, HPE, Lenovo, Oracle, Supermicro, and others, build systems featuring AMD EPYC Server CPUs. AMD works closely with our OEM partners, software vendors, sellers, and the open-source community to deliver cutting-edge solutions.

Which cloud providers offer instances based on AMD EPYC Server CPUs?

You can choose cloud instances based on AMD EPYC Server CPUs through your preferred cloud provider, including AWS, Microsoft Azure, Google Cloud, Oracle Cloud, and others. AMD collaborates with cloud providers to ensure that AMD EPYC Server CPU-based virtual machines (VMs) deliver excellent performance and cost efficiency.

AMD EPYC Solutions for Core Data Center Workloads

AMD EPYC Server CPU Solutions for Key Industries

AMD EPYC Server CPUs in Action

AMD EPYC Deployment Options


Broad Ecosystem Support for On-Premises Deployments

Find leading-edge enterprise hardware from our OEM partners. Count on seamless integration with a full, mature portfolio of hardware and software, including all-purpose CPUs, a premier line of GPUs for AI, and interoperable networking solutions.


Maximize the Value of Your Cloud

Choose AMD-based virtual machines (VMs) to achieve performance and OpEx advantages in your preferred cloud.

Resources

Connect with AMD

Sign up for AMD news and announcements including upcoming events and webinars.

Ask for an AMD EPYC Server CPU sales expert to contact you.

Footnotes
  1. This scenario contains many assumptions and estimates and, while based on AMD internal research and best approximations, should be considered an example for information purposes only, and not used as a basis for decision making over actual testing. The AMD Server & Greenhouse Gas Emissions TCO (total cost of ownership) Estimator Tool - version 1.3, compares the selected AMD EPYC™ and Intel® Xeon® CPU-based server solutions required to deliver a total of 391,000 units of SPECrate®2017_int_base performance as of November 21, 2024. This estimation compares upgrading from a legacy 2P Intel Xeon 28-core Platinum 8280-based server with a score of 391 (https://spec.org/cpu2017/results/res2020q3/cpu2017-20200915-23984.pdf) versus a 2P EPYC 9965 (192C)-powered server with a score of 3100 (https://spec.org/cpu2017/results/res2024q4/cpu2017-20241004-44979.pdf). Environmental impact estimates are made leveraging this data, using the country/region-specific electricity factors from Country Specific Electricity Factors - 2024, and the United States Environmental Protection Agency Greenhouse Gas Equivalencies Calculator. For additional details, see https://www.amd.com/en/legal/claims/epyc.html#q=9xx5TCO-005. (9xx5TCO-005)
  2. Phoronix, “AWS Graviton4 Benchmarks Prove to Deliver the Best ARM Cloud Server Performance,” page 7, July 12, 2024. Performance per dollar calculated as geometric mean performance divided by total cost to complete workloads. On-demand pricing shown is for general-purpose cloud compute instances in the US East region, based on rates from July 2024 and last checked in June 2025. No changes were observed during this period. Pricing may change at any time.
  3. Comparison based on thread density, performance, features, process technology, and built-in security features of currently shipping servers as of 10/10/2024. EPYC 9005 series CPUs offer the highest thread density, lead the industry with 500+ performance world records including world-record enterprise leadership Java® ops/sec performance, top HPC leadership with floating-point throughput performance, AI end-to-end performance with TPCx-AI performance, and the highest energy efficiency scores. Compared to 5th Gen Xeon, the 5th Gen EPYC series also has more DDR5 memory channels with more memory bandwidth, supports more PCIe® Gen5 lanes for I/O throughput, and has up to 5x the L3 cache/core for faster data access. The EPYC 9005 series uses advanced 3-4nm technology, and offers Secure Memory Encryption + Secure Encrypted Virtualization (SEV) + SEV Encrypted State + SEV-Secure Nested Paging security features. (EPYC-029D)
  4. 9xx5-156: Llama3.1-8B throughput results based on AMD internal testing as of 04/08/2025. Llama3.1-8B configurations: BF16, batch size 32, 32C Instances, Use Case Input/Output token configurations: [Summary = 1024/128, Chatbot = 128/128, Translate = 1024/1024, Essay = 128/1024]. 2P AMD EPYC 9965 (384 Total Cores), 1.5TB 24x64GB DDR5-6400, 1.0 Gbps NIC, 3.84 TB Samsung MZWLO3T8HCLS-00A07, Ubuntu® 22.04.5 LTS, Linux 6.9.0-060900-generic, BIOS RVOT1004A, (SMT=off, mitigations=on, Determinism=Power), NPS=1, ZenDNN 5.0.1. 2P AMD EPYC 9755 (256 Total Cores), 1.5TB 24x64GB DDR5-6400, 1.0 Gbps NIC, 3.84 TB Samsung MZWLO3T8HCLS-00A07, Ubuntu® 22.04.4 LTS, Linux 6.8.0-52-generic, BIOS RVOT1004A, (SMT=off, mitigations=on, Determinism=Power), NPS=1, ZenDNN 5.0.1. 2P Xeon 6980P (256 Total Cores), AMX On, 1.5TB 24x64GB DDR5-8800 MRDIMM, 1.0 Gbps Ethernet Controller X710 for 10GBASE-T, Micron_7450_MTFDKBG1T9TFR 2TB, Ubuntu 22.04.1 LTS, Linux 6.8.0-52-generic, BIOS 1.0 (SMT=off, mitigations=on, Performance Bias), IPEX 2.6.0. Results (relative throughput, 2P Xeon 6980P = 1): Summary: 9755 n/a, 9965 1.093; Translate: 9755 1.062, 9965 1.334; Essay: 9755 n/a, 9965 1.14. Results may vary due to factors including system configurations, software versions, and BIOS settings.
  5. 9xx5-162: XGBoost (Runs/Hour) throughput results based on AMD internal testing as of 04/08/2025. XGBoost Configurations: v1.7.2, Higgs Data Set, 32 Core Instances, FP32. 2P AMD EPYC 9965 (384 Total Cores), 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1.0 Gbps NIC, 3.84 TB Samsung MZWLO3T8HCLS-00A07, Ubuntu® 22.04.5 LTS, Linux 5.15 kernel, BIOS RVOT1004A, (SMT=off, mitigations=on, Determinism=Power), NPS=1. 2P AMD EPYC 9755 (256 Total Cores), 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1.0 Gbps NIC, 3.84 TB Samsung MZWLO3T8HCLS-00A07, Ubuntu® 22.04.4 LTS, Linux 5.15 kernel, BIOS RVOT1004A, (SMT=off, mitigations=on, Determinism=Power), NPS=1. 2P Xeon 6980P (256 Total Cores), 1.5TB 24x64GB DDR5-8800 MRDIMM, 1.0 Gbps Ethernet Controller X710 for 10GBASE-T, Micron_7450_MTFDKBG1T9TFR 2TB, Ubuntu 22.04.1 LTS, Linux 6.8.0-52-generic, BIOS 1.0 (SMT=off, mitigations=on, Performance Bias). Results (throughput, relative): 2P 6980P 400 (1.000); 2P 9755 436 (1.090); 2P 9965 771 (1.928). Results may vary due to factors including system configurations, software versions and BIOS settings.
  6.  9xx5-151: TPCxAI @SF30 Multi-Instance, 32C Instance Size throughput results based on AMD internal testing as of 04/01/2025 running multiple VM instances. The aggregate end-to-end AI throughput test is derived from the TPCx-AI benchmark and as such is not comparable to published TPCx-AI results, as the end-to-end AI throughput test results do not comply with the TPCx-AI Specification. 2P   AMD EPYC 9965 (6067.53 Total AIUCpm, 384 Total Cores, 500W TDP, AMD reference system, 1.5TB 24x64GB DDR5-6400, 2 x 40 GbE Mellanox CX-7 (MT2910), 3.84TB Samsung MZWLO3T8HCLS-00A07 NVMe, Ubuntu® 24.04 LTS kernel 6.13, SMT=ON, Determinism=power, Mitigations=on) 2P AMD EPYC 9755 (4073.42 Total AIUCpm, 256 Total Cores, 500W TDP, AMD reference system, 1.5TB 24x64GB DDR5-6400, 2 x 40 GbE Mellanox CX-7 (MT2910) 3.84TB Samsung MZWLO3T8HCLS-00A07 NVMe, Ubuntu 24.04 LTS kernel 6.13, SMT=ON, Determinism=power, Mitigations=on) 2P Intel Xeon 6980P (3550.50 Total AIUCpm, 256 Total Cores, 500W TDP, Production system, 1.5TB 24x64GB DDR5-6400, 4 x 1GbE Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe 3.84TB SAMSUNG MZWLO3T8HCLS-00A07 NVMe, Ubuntu 24.04 LTS kernel 6.13, SMT=ON, Performance Bias, Mitigations=on) Results may vary based on factors including but not limited to system configurations, software versions, and BIOS settings. TPC, TPC Benchmark, and TPC-H are trademarks of the Transaction Processing Performance Council.
  7. 9xx5-129: SPECrate®2017_int_base with GCC13 comparison based on AMD internal testing as of 4/1/2025. 2P AMD EPYC 9965 (est. 2160 SPECrate®2017_int_base, 384 Total Cores, 500W TDP, AMD reference system, 1.5TB 24x64GB DDR5-6400, 2 x 40 GbE Mellanox CX-7 (MT2910), 3.84TB Samsung MZWLO3T8HCLS-00A07 NVMe, Ubuntu® 22.04.3 LTS | 5.15.0-105-generic, SMT=ON, Determinism=power, Mitigations=on). 2P AMD EPYC 9755 (est. 1850 SPECrate®2017_int_base, 256 Total Cores, 500W TDP, AMD reference system, 1.5TB 24x64GB DDR5-6400, 2 x 40 GbE Mellanox CX-7 (MT2910), 3.84TB Samsung MZWLO3T8HCLS-00A07 NVMe, Ubuntu 22.04.3 LTS | 5.15.0-105-generic, SMT=ON, Determinism=power, Mitigations=on). 2P Intel Xeon 6980P (est. 1600 SPECrate®2017_int_base, 256 Total Cores, 500W TDP, Production system, 1.5TB 24x64GB DDR5-6400, 4 x 1GbE Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe, 3.84TB SAMSUNG MZWLO3T8HCLS-00A07 NVMe, SUSE Linux Enterprise Server 15 SP6 kernel 6.4.0-150600.23.33-default, SMT=ON, Performance Bias, Mitigations=on). The same Intel Xeon 6980P with 1.5TB 24x64GB MRDIMM at 8800MT/s scores 1650 SPECrate®2017_int_base. SPEC®, SPEC CPU®, and SPECrate® are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org for more information. Intel CPU TDP at https://ark.intel.com/ as of 4/17/2025.
  8. 9xx5-134: SPECpower_ssj® 2008 comparison based on published scores from www.spec.org as of 4/30/2025. 2P AMD EPYC 9965 (35920 ssj_ops/watt, 384 Total Cores, https://spec.org/power_ssj2008/results/res2024q4/power_ssj2008-20241007-01464.html) 2P AMD EPYC 9755 (29950 ssj_ops/watt, 256 Total Cores, https://spec.org/power_ssj2008/results/res2024q4/power_ssj2008-20240924-01460.html) 2P Intel Xeon 6980P (21679 ssj_ops/watt, 256 Total Cores,  https://spec.org/power_ssj2008/results/res2025q2/power_ssj2008-20250324-01511.html) SPEC®, SPEC CPU®, and SPECpower® are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org for more information.
  9. 9xx5-151: TPCxAI @SF30 Multi-Instance, 32C Instance Size throughput results based on AMD internal testing as of 04/01/2025 running multiple VM instances. The aggregate end-to-end AI throughput test is derived from the TPCx-AI benchmark and as such is not comparable to published TPCx-AI results, as the end-to-end AI throughput test results do not comply with the TPCx-AI Specification. 2P  AMD EPYC 9965 (6067.53 Total AIUCpm, 384 Total Cores, 500W TDP, AMD reference system, 1.5TB 24x64GB DDR5-6400, 2 x 40 GbE Mellanox CX-7 (MT2910), 3.84TB Samsung MZWLO3T8HCLS-00A07 NVMe, Ubuntu® 24.04 LTS kernel 6.13, SMT=ON, Determinism=power, Mitigations=on) 2P AMD EPYC 9755 (4073.42 Total AIUCpm, 256 Total Cores, 500W TDP, AMD reference system, 1.5TB 24x64GB DDR5-6400, 2 x 40 GbE Mellanox CX-7 (MT2910) 3.84TB Samsung MZWLO3T8HCLS-00A07 NVMe, Ubuntu 24.04 LTS kernel 6.13, SMT=ON, Determinism=power, Mitigations=on) 2P Intel Xeon 6980P (3550.50 Total AIUCpm, 256 Total Cores, 500W TDP, Production system, 1.5TB 24x64GB DDR5-6400, 4 x 1GbE Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe 3.84TB SAMSUNG MZWLO3T8HCLS-00A07 NVMe, Ubuntu 24.04 LTS kernel 6.13, SMT=ON, Performance Bias, Mitigations=on) Results may vary based on factors including but not limited to system configurations, software versions, and BIOS settings. TPC, TPC Benchmark, and TPC-H are trademarks of the Transaction Processing Performance Council.