Delaying modernization can be costly
When organizations consider the future of their cloud computing footprint, maintaining performance and scale while minimizing operating costs is the primary goal. This often leads them to stay on older-generation instances with lower prices. But not modernizing has costs of its own, especially when the all-new M7a instances can do much more with far fewer instances.
Do More With Less
Powered by the latest AMD EPYC™ processors, Amazon EC2 M7a instances can deliver the same workload throughput as the Intel-based M6i using only a fraction of the instances. This can mean substantial savings on computing costs.
Flexible & Optimized Instances with EPYC™
Because flexibility is critical when selecting a cloud instance, AWS offers a variety of compute instances powered by AMD EPYC™ processors, each designed for specific use cases such as memory-intensive, compute-intensive, and HPC applications. See the full list of AMD EPYC™ powered Amazon EC2 instances below.
M7a Performance-driven Cloud OPEX Savings vs. M6i¹
Modernizing your AWS deployment to M7a can mean significant savings on operating costs. When moving from M6i to M7a, customers can see up to a 2x performance uplift and save 37% on Cloud OPEX on average.¹
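The savings figures combine two factors: how much less time M7a needs to finish the same work, and how much more it costs per hour. A minimal sketch of that arithmetic, in Python, is shown below; it reproduces the FFmpeg result from footnote 1 under the assumption of a roughly 1.2x hourly price premium for M7a over M6i, which is an illustrative placeholder rather than published AWS pricing.

```python
# Minimal sketch, assuming: savings = 1 - (relative runtime x relative hourly price).
# The ~1.2x price ratio below is an illustrative assumption, not published AWS pricing.

def opex_savings(perf_uplift: float, price_ratio: float) -> float:
    """Fractional OPEX savings when the newer instance finishes the same work
    perf_uplift times faster but costs price_ratio times more per hour."""
    relative_runtime = 1.0 / perf_uplift            # e.g. 2.5x faster -> ~40% of the runtime
    relative_cost = relative_runtime * price_ratio  # billed hours x hourly price
    return 1.0 - relative_cost

# FFmpeg numbers from footnote 1: ~2.5x uplift, ~52% reported savings.
ASSUMED_PRICE_RATIO = 1.2  # hypothetical M7a/M6i hourly price premium
print(f"Estimated savings: {opex_savings(2.5, ASSUMED_PRICE_RATIO):.0%}")  # ~52%
```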
Web/App Tier

| | Enterprise Applications | Web Serving | Video Processing |
|---|---|---|---|
| Workload | Java | NGINX | FFMPEG |
| Performance Uplift | 1.6x max-jOPS | 1.9x Req/Sec | 2.5x Frames/Sec |
| Cloud OPEX Savings | 24% | 36% | 52% |
Data Tier

| | SQL Databases (Transactional) | SQL Databases (Analytics) | NoSQL Databases |
|---|---|---|---|
| Workload | MySQL | MS SQL Server | Redis |
| Performance Uplift | 1.7x Transactions/min | 1.7x QphH | 2.4x Req/Sec |
| Cloud OPEX Savings | 31% | 30% | 49% |
M7a Performance-driven Cloud OPEX Savings vs. M7i²
Organizations migrating to AWS from an on-premises data center or another cloud provider also benefit from selecting M7a instances. Here again, choosing AMD can deliver significant savings compared to the latest Intel-based M7i instances.
Web/App Tier

| | Enterprise Applications | Web Serving | Video Processing |
|---|---|---|---|
| Workload | Java | NGINX | FFMPEG |
| Performance Uplift | 1.4x max-jOPS | 1.6x Req/Sec | 1.9x Frames/Sec |
| Cloud OPEX Savings | 18% | 29% | 40% |
Data Tier

| | SQL Databases (Transactional) | SQL Databases (Analytics) | NoSQL Databases |
|---|---|---|---|
| Workload | MySQL | MS SQL Server | Redis |
| Performance Uplift | 1.4x Transactions/min | 1.3x QphH | 2.2x Req/Sec |
| Cloud OPEX Savings | 18% | 13% | 49% |
- General Purpose
- HPC Optimized
- Compute Intensive
- Memory Optimized
- Burstable General Purpose
- Graphics-Intensive
General Purpose
Balanced compute, memory, and networking resources for general-purpose workloads. Built for business-critical application servers, backend servers for enterprise applications, gaming servers, caching fleets, and app development environments.
| Instance | Generation |
|---|---|
| M7a | 4th Gen EPYC™ |
| M6a | 3rd Gen EPYC™ |
| M5a/M5ad | 1st Gen EPYC™ |
HPC Optimized
Amazon EC2 Hpc6a Instances offer the best price-performance for compute-intensive, high-performance computing (HPC) workloads in AWS.
| Instance | Generation | Key Workloads |
|---|---|---|
| Hpc7a | 4th Gen EPYC™ | Tightly coupled, compute-intensive high performance computing (HPC) workloads such as computational fluid dynamics (CFD), weather forecasting, multiphysics simulations, and deep learning |
| Hpc6a | 3rd Gen EPYC™ | |
Compute Intensive
With frequencies running as high as 3.7 GHz*, these instances are built to run batch processing, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference, and other compute intensive applications.
| Instance | Generation |
|---|---|
| C7a | 4th Gen EPYC™ |
| C6a | 3rd Gen EPYC™ |
| C5a/C5ad | 2nd Gen EPYC™ |
| C5an.metal/C5adn.metal | 2nd Gen EPYC™ |
Memory Optimized
Built for high performance databases, distributed web scale in-memory caches, in-memory databases, real time big data analytics, and other enterprise applications.
| Instance | Generation |
|---|---|
| R7a | 4th Gen EPYC™ |
| R6a/R6ad | 3rd Gen EPYC™ |
| R5a/R5ad | 1st Gen EPYC™ |
Burstable General Purpose
Baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. Built for micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications.
| Instance | Generation |
|---|---|
| T3a | 1st Gen EPYC™ |
Graphics-Intensive
Featuring both AMD EPYC™ CPUs and Radeon™ Pro GPUs, G4ad instances deliver a hyper-efficient, high-bandwidth interconnect, which helps ensure exceptional data throughput and application responsiveness for developers and engineers.
AWS Case Studies
Footnotes
*EPYC-18: Max boost for AMD EPYC processors is the maximum frequency achievable by any single core on the processor under normal operating conditions for server systems.
1. SP5C-003: M7a.4xlarge max score and Cloud OPEX savings comparison to M6i.4xlarge running six common application workloads using on-demand pricing, US-East (Ohio), Linux®, as of 10/9/2023.
FFmpeg: ~2.5x the raw_vp9 performance (40.2% of M6i runtime), saving ~52% in Cloud OPEX
NGINX™: ~1.9x the WRK performance (52.9% of M6i runtime), saving ~36% in Cloud OPEX
Server-side Java® multi-instance max Java OPS: ~1.6x the ops/sec performance (63.3% of M6i runtime), saving ~24% in Cloud OPEX
MySQL™: ~1.7x the TPROC-C performance (57.5% of M6i runtime), saving ~31% in Cloud OPEX
SQL Server®: ~1.7x the TPROC-H performance (58.1% of M6i runtime), saving ~30% in Cloud OPEX
Redis™: ~2.4x the SET rps performance (42.4% of M6i runtime), saving ~49% in Cloud OPEX
Cloud performance results presented are based on the test date in the configuration. Results may vary due to changes to the underlying configuration, and other conditions such as the placement of the VM and its resources, optimizations by the cloud service provider, accessed cloud regions, co-tenants, and the types of other workloads exercised at the same time on the system.
2. SP5C-004: AWS M7a.4xlarge max score and Cloud OPEX savings comparison to M7i.4xlarge running six common application workloads using on-demand pricing, US-East (Ohio), Linux, as of 10/9/2023.
FFmpeg: ~1.9x the raw-to-vp9 encoding performance (52.3% of M7i runtime), saving ~40% in Cloud OPEX
NGINX™: ~1.6x the WRK performance (61.7% of M7i runtime), saving ~29% in Cloud OPEX
Server-side Java® multi-instance max Java OPS: ~1.4x the ops/sec performance (71.4% of M7i runtime), saving ~18% in Cloud OPEX
MySQL™: ~1.4x the TPROC-C performance (70.4% of M7i runtime), saving ~18% in Cloud OPEX
SQL Server®: ~1.3x the TPROC-H performance (76.0% of M7i runtime), saving ~13% in Cloud OPEX
Redis™: ~2.2x the rps performance (44.6% of M7i runtime), saving ~49% in Cloud OPEX
Cloud performance results presented are based on the test date in the configuration. Results may vary due to changes to the underlying configuration, and other conditions such as the placement of the VM and its resources, optimizations by the cloud service provider, accessed cloud regions, co-tenants, and the types of other workloads exercised at the same time on the system.