Optimize Your Cloud Computing Footprint and Save on Cloud OPEX

Delaying modernization can be costly

When organizations consider the future of their cloud computing footprint, maintaining performance and scale while minimizing operating costs is the primary goal. This often leads them to stay on older-generation instances with lower prices. But not modernizing has costs of its own, especially when all-new M7a instances can do much more with far fewer instances.

Do More With Less

Powered by the latest AMD EPYC™ processors, Amazon EC2 M7a instances can deliver the same workload throughput as the Intel-based M6i using only a fraction of the instance count. This can mean huge savings on computing costs.

Flexible & Optimized Instances with EPYC™

Because flexibility is critical when selecting a cloud instance, AWS offers a variety of compute instances powered by AMD EPYC™ processors that are designed for specific use cases, such as memory-intensive, compute-intensive and HPC applications. See the full list of EPYC™ Amazon EC2 instances.
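As a rough illustration of matching a workload profile to an EPYC™-powered instance family, the sketch below maps the use-case categories on this page to the family names from the tables that follow. The helper function is hypothetical, for illustration only; it is not an AWS API.

```python
# Hypothetical mapping of workload categories to the EPYC-powered EC2
# families described on this page (not an AWS API; names are from the
# instance tables below).
EPYC_FAMILIES = {
    "general": "M7a",      # balanced compute/memory (4:1 GiB per vCPU)
    "compute": "C7a",      # compute-intensive (2:1 GiB per vCPU)
    "memory": "R7a",       # memory-optimized (8:1 GiB per vCPU)
    "hpc": "Hpc7a",        # tightly coupled HPC (up to 300 Gbps EFA)
    "burstable": "T3a",    # baseline CPU with burst capability
    "graphics": "G4ad",    # EPYC CPUs paired with Radeon Pro GPUs
}

def suggest_instance_family(workload: str) -> str:
    """Return the EPYC instance family for a workload category,
    defaulting to general purpose when the category is unknown."""
    return EPYC_FAMILIES.get(workload, "M7a")

print(suggest_instance_family("memory"))  # → R7a
```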

How Much Can M7a Lower Your Cloud Computing Costs?

M7a Performance-driven Cloud OPEX Savings vs M6i1

Modernizing your AWS deployment to M7a can mean significant savings on operating costs. When moving from M6i to M7a, customers can see up to a 2x performance uplift and save 37% on Cloud OPEX on average.1

Web/App Tier
  • Enterprise Applications (Java): 1.6x max-jOPS, 24% savings
  • Web Serving (NGINX): 1.9x Req/Sec, 36% savings
  • Video Processing (FFMPEG): 2.5x Frames/Sec, 52% savings

Data Tier
  • SQL Databases, Transactional (MySQL): 1.7x Transactions/min, 31% savings
  • SQL Databases, Analytics (MS SQL Server): 1.7x QphH, 30% savings
  • NoSQL Databases (Redis): 2.4x Req/Sec, 49% savings
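The savings figures follow from simple arithmetic: if M7a completes the same work in a fraction of the baseline runtime, its cost is that fraction times the ratio of hourly prices. A minimal sketch, using runtime fractions reported in the footnotes; the on-demand hourly prices below are illustrative assumptions, not quoted from this page:

```python
def opex_savings(runtime_fraction: float, price_new: float, price_old: float) -> float:
    """Fractional Cloud OPEX savings when the new instance finishes the
    same work in `runtime_fraction` of the old instance's runtime."""
    relative_cost = runtime_fraction * (price_new / price_old)
    return 1.0 - relative_cost

# Runtime fractions are from the footnotes (M7a vs. M6i); the hourly
# prices for the .4xlarge sizes are assumed for illustration only.
M7A_PRICE, M6I_PRICE = 0.927, 0.768

for name, frac in [("FFmpeg", 0.402), ("NGINX", 0.529), ("Redis", 0.424)]:
    print(f"{name}: ~{opex_savings(frac, M7A_PRICE, M6I_PRICE):.0%} Cloud OPEX savings")
```

With these assumed prices, the computed savings land within about a point of the figures above, which is the point of the sketch rather than an exact reproduction.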
M7a Performance-driven Cloud OPEX Savings vs M7i2

Organizations migrating from an on-premises data center, or another cloud provider, to AWS also benefit from selecting M7a instances. Here again, choosing AMD can deliver significant savings compared to the latest Intel-based M7i instance.

Web/App Tier
  • Enterprise Applications (Java): 1.4x max-jOPS, 18% savings
  • Web Serving (NGINX): 1.6x Req/Sec, 29% savings
  • Video Processing (FFMPEG): 1.9x Frames/Sec, 40% savings

Data Tier
  • SQL Databases, Transactional (MySQL): 1.4x Transactions/min, 18% savings
  • SQL Databases, Analytics (MS SQL Server): 1.3x QphH, 13% savings
  • NoSQL Databases (Redis): 2.2x Req/Sec, 49% savings

Instances

Turn your cloud environment into a competitive advantage with AMD and AWS, setting superior standards for performance and scalability for your most demanding workloads.

Find the right instance that fits your workload needs.

General Purpose

Balanced compute, memory, and networking resources for general-purpose workloads. Built for business-critical application servers, backend servers for enterprise applications, gaming servers, caching fleets, and app development environments.

Learn More 

Instance | Specifications | Generation | Key Workloads
M7a
  • 4:1 GiB memory per vCPU
  • Up to 3.7 GHz*
  • Up to 192 vCPUs
  • Up to 50 Gbps network bandwidth
  • Up to 40 Gbps EBS bandwidth
4th Gen EPYC™
  • SAP-Certified 
  • Financial applications 
  • Application servers 
  • Simulation modeling 
  • Gaming 
  • Mid-size data stores 
  • App development sites 
  • Caching fleets
M6a
  • 4:1 GiB memory per vCPU
  • Up to 3.6 GHz*
  • Up to 192 vCPUs
  • Up to 50 Gbps network bandwidth
3rd Gen EPYC™
  • SAP-Certified 
  • Backend servers supporting enterprise applications
  • Multi-player gaming servers
  • Caching fleets 
  • Application development environments
M5a/M5ad
  • 4:1 GiB memory per vCPU
  • Up to 2.5 GHz*
  • Up to 20 Gbps network bandwidth
1st Gen EPYC™
  • Web/app servers 
  • Enterprise apps 
  • Dev/test environments
  • Ex. Apache Cassandra®, NGINX®
HPC Optimized

Amazon EC2 HPC instances powered by AMD EPYC™ processors offer the best price-performance for compute-intensive, high-performance computing (HPC) workloads in AWS.

Learn More  

Instance | Specifications | Generation | Key Workloads
Hpc7a
  • 4:1 GiB memory per vCPU
  • Up to 3.7 GHz*
  • Up to 192 cores
  • Up to 25 Gbps network bandwidth
  • Up to 300 Gbps EFA network bandwidth
4th Gen EPYC™
  • Tightly coupled, compute-intensive high performance computing (HPC) workloads such as computational fluid dynamics (CFD), weather forecasting, multiphysics simulations, and deep learning
Hpc6a
  • Up to 3.6 GHz*
  • 96 cores
  • 384GB of RAM
  • Up to 100 Gbps EFA
3rd Gen EPYC™
  • Computational fluid dynamics
  • Weather forecasting
  • Molecular dynamics
Compute Intensive

With frequencies running as high as 3.7 GHz*, these instances are built to run batch processing, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference, and other compute intensive applications.

Learn More  

Instance | Specifications | Generation | Key Workloads
C7a
  • 2:1 GiB memory per vCPU
  • Up to 3.7 GHz
  • Up to 192 vCPUs
  • Up to 50 Gbps network bandwidth
  • Up to 40 Gbps EBS bandwidth
4th Gen EPYC™
  • High-performance web servers
  • Batch processing 
  • Ad serving 
  • Machine learning 
  • Multiplayer gaming 
  • Video encoding and transcoding
  • Scientific modeling
C6a
  • 2:1 GiB memory per vCPU
  • Up to 3.6 GHz*
  • Up to 192 vCPU
  • Up to 50 Gbps network bandwidth
3rd Gen EPYC™
  • Batch processing 
  • Distributed analytics 
  • High performance computing (HPC) 
  • Ad serving 
  • Highly-scalable multiplayer gaming  
  • Video encoding
C5a/C5ad
  • 2:1 GiB memory per vCPU
  • Up to 3.3 GHz*
  • Up to 20 Gbps network bandwidth
2nd Gen EPYC™
  • Batch processing
  • Database
  • Video encoding
  • Analytics
  • Ex. Microsoft SQL Server®, MySQL™, Redis™
C5an.metal /C5adn.metal
  • Up to 3.3 GHz*
  • Up to 100 Gbps EFA
  • Up to 192 vCPU / 384 GiB
  • Up to 7.6 TB of fast, local NVMe instance storage
2nd Gen EPYC™
  • HPC scalable workloads
  • Ex. Genomics, Oil & Gas Simulation
Memory Optimized

Built for high performance databases, distributed web scale in-memory caches, in-memory databases, real time big data analytics, and other enterprise applications.

Learn More 

Instance | Specifications | Generation | Key Workloads
R7a
  • 8:1 GiB memory per vCPU
  • Up to 3.7 GHz
  • Up to 192 vCPUs
  • Up to 25 Gbps network bandwidth
  • Up to 40 Gbps EBS bandwidth
4th Gen EPYC™
  • SAP Certified 
  • SQL and NoSQL 
  • In-memory databases 
  • Real-time big data analytics
  • Electronic design automation
  • Distributed web scale in-memory caches
R6a/R6ad
  • 8:1 GiB memory per vCPU
  • Up to 3.6 GHz
  • Up to 192 vCPUs
  • Up to 50 Gbps network bandwidth
  • Up to 40 Gbps EBS bandwidth
3rd Gen EPYC™
  • High-performance databases (relational databases, NoSQL databases) 
  • Distributed web scale in-memory caches (such as Memcached, Redis) 
  • In-memory databases such as real-time big data analytics (such as Hadoop, Spark clusters)
R5a/R5ad
  • 8:1 GiB memory per vCPU
  • Up to 2.5 GHz*
  • Up to 20 Gbps network
1st Gen EPYC™
  • In-memory cache 
  • High-performance DB 
  • Big Data analysis
  • Ex. Cloudera®, Memcached, MariaDB®
Burstable General Purpose

Baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. Built for micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications.

Learn More 

Instance | Specifications | Generation | Key Workloads
T3a
  • Up to 4:1 GiB memory per vCPU
  • Up to 2.5 GHz*
  • Unlimited CPU burst  
1st Gen EPYC™
  • Microservices 
  • Virtual desktops 
  • Code repositories
Graphics-Intensive

Featuring both AMD EPYC™ CPUs and Radeon™ Pro GPUs, G4ad instances deliver a hyper-efficient, high-bandwidth interconnect that helps ensure exceptional data throughput and application responsiveness for developers and engineers.

Learn More 

Instance | Specifications | Generation | Key Workloads
Single GPU VMs (g4ad.xlarge, g4ad.2xlarge, g4ad.4xlarge)
  • 1 GPU
  • Up to 16 vCPUs
  • Up to 600 GB local NVMe-based SSD storage
2nd Gen EPYC™ with AMD Radeon™ Pro V520 GPUs
  • Design engineering for CAD & AEC
Multi GPU VMs (g4ad.8xlarge, g4ad.16xlarge)
  • Up to 4 GPUs
  • Up to 64 vCPUs
  • Up to 2,400 GB (2.4 TB) local NVMe-based SSD storage
2nd Gen EPYC™ with AMD Radeon™ Pro V520 GPUs
  • Animation & rendering (8xlarge)
  • Game design & advanced simulation (16xlarge)


Footnotes

*EPYC-18: Max boost for AMD EPYC processors is the maximum frequency achievable by any single core on the processor under normal operating conditions for server systems.

  1. SP5C-003: M7a.4xlarge max score and Cloud OPEX savings comparison to M6i.4xlarge running six common application workloads using on-demand pricing, US-East (Ohio), Linux®, as of 10/9/2023.

    FFmpeg: ~2.5x the raw-to-vp9 encoding performance (40.2% of M6i runtime), saving ~52% in Cloud OPEX
    NGINX™: ~1.9x the WRK performance (52.9% of M6i runtime), saving ~36% in Cloud OPEX
    Server-side Java® multi-instance max Java OPS: ~1.6x the ops/sec performance (63.3% of M6i runtime), saving ~24% in Cloud OPEX
    MySQL™: ~1.7x the TPROC-C performance (57.5% of M6i runtime), saving ~31% in Cloud OPEX
    SQL Server®: ~1.7x the TPROC-H performance (58.1% of M6i runtime), saving ~30% in Cloud OPEX
    Redis™: ~2.4x the SET rps performance (42.4% of M6i runtime), saving ~49% in Cloud OPEX

  2. SP5C-004: AWS M7a.4xlarge max score and Cloud OPEX savings comparison to M7i.4xlarge running six common application workloads using on-demand pricing, US-East (Ohio), Linux, as of 10/9/2023.

    FFmpeg: ~1.9x the raw-to-vp9 encoding performance (52.3% of M7i runtime), saving ~40% in Cloud OPEX
    NGINX™: ~1.6x the WRK performance (61.7% of M7i runtime), saving ~29% in Cloud OPEX
    Server-side Java® multi-instance max Java OPS: ~1.4x the ops/sec performance (71.4% of M7i runtime), saving ~18% in Cloud OPEX
    MySQL™: ~1.4x the TPROC-C performance (70.4% of M7i runtime), saving ~18% in Cloud OPEX
    SQL Server®: ~1.3x the TPROC-H performance (76.0% of M7i runtime), saving ~13% in Cloud OPEX
    Redis™: ~2.2x the rps performance (44.6% of M7i runtime), saving ~49% in Cloud OPEX

    Cloud performance results presented are based on the test date in the configuration. Results may vary due to changes to the underlying configuration, and other conditions such as the placement of the VM and its resources, optimizations by the cloud service provider, accessed cloud regions, co-tenants, and the types of other workloads exercised at the same time on the system.