AMD Research impacts AMD’s roadmap, products, and software, generating new innovations, publications, and patents in the process. The tech industry changes fast, and AMD Research’s technical focus areas adapt with the times. AMD Research’s current activities focus on (but are not limited to) the following key technical themes. To learn more about opportunities to make your own impact at AMD Research, please see our Careers page.


High Performance Computing

High-Performance Computing (HPC) has traditionally been synonymous with scientific computing and supercomputers, but high-performance capabilities are rapidly spreading into many other areas. This includes data-center CPUs and GPUs for cloud computing and machine learning, our industry-disrupting Ryzen™ Threadripper™ CPUs for high-end desktops, multiple next-generation gaming consoles, and Frontier and El Capitan, expected (as of this writing in 2020) to be two of the world’s fastest HPC supercomputers. AMD Research’s HPC activities continue to push the fundamental technologies required to keep scaling performance for all types of AMD products. HPC research focus areas include:

  • GPUs: We are innovating technologies to improve GPU compute in terms of novel hardware, new levels of efficiency, and increasing the breadth of applicability across workloads via hardware-software co-design.
  • CPUs: The drive for greater heterogeneity and acceleration increases the impact of Amdahl’s Law, so we continue to push on CPU and SoC microarchitecture to help ensure that GPUs and other accelerators can realize their full potential.
  • Software: HPC is not only about building faster hardware, but also about making sure that software can utilize the hardware’s capabilities; this includes work on applications, frameworks, compilers, runtimes, programming models, and more.
  • Supercomputing: AMD Research has been working closely with the U.S. Government on advanced research for exascale technologies. Work continues on preparing for HPC supercomputers such as Frontier and El Capitan, as well as performing research beyond exascale.
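The Amdahl’s Law effect noted above can be made concrete with a few lines of arithmetic. The sketch below (illustrative only, not AMD code; the numbers are assumed) shows why the un-accelerated serial fraction of a workload caps overall speedup no matter how fast the accelerator gets:

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / s), where p is the fraction
# of the workload that is accelerated and s is the accelerator's speedup.
# Illustrative sketch with assumed numbers, not AMD measurements.

def amdahl_speedup(parallel_fraction: float, accel_speedup: float) -> float:
    """Overall speedup when `parallel_fraction` of the work runs
    `accel_speedup` times faster and the remainder runs at CPU speed."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / accel_speedup)

# If 95% of the work is accelerated, even an effectively infinite
# accelerator cannot exceed 1 / (1 - 0.95) = 20x overall speedup,
# which is why serial (CPU) performance still matters.
print(amdahl_speedup(0.95, 10))     # modest accelerator
print(amdahl_speedup(0.95, 1000))   # approaches the 20x ceiling
```

This is the quantitative reason a faster accelerator alone is not enough: shrinking the serial fraction through better CPUs and SoCs raises the ceiling itself.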

Advanced Memory Technologies

Innovative research in memory systems is a critical component for our future products as we continue the effort to overcome the infamous “Memory Wall.” AMD has a long history in memory system innovation, including being the first company to introduce 64-bit addressing for x86 processors, the first to integrate memory controllers into x86 processors, and the first to ship HBM memory in volume products. Today, AMD continues to pioneer new directions in memory technologies through our leading-edge research in multiple areas:

  • HPC Memory Systems: AMD Research conducted significant research on advanced memory systems for high-end supercomputers, which contributed to AMD being selected to supply the CPUs and GPUs for the future Frontier and El Capitan exascale systems. We continue these efforts for post-exascale HPC systems as well as data centers, cloud computing, and machine intelligence.
  • Processing-in-Memory: To reduce the energy associated with data movement, we are exploring novel processing-in-memory solutions that offload computation to off-chip memory devices, as well as the integration of data compression and novel memory technologies to increase effective on-chip storage density.
  • Multi-level Intelligent Memory: We are creating intelligent memory system designs that leverage emerging memory technologies to deliver cost-efficient and energy-efficient solutions with unprecedented memory capacities at high performance and reliability.
  • Next-generation Memory Standards: AMD Research is continually working with memory vendors to define next generation standards. Today, AMD is shipping HBM2 in our products, and we are working closely with memory vendors to define future generations of 3D stacked memory standards.
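The data-movement argument behind processing-in-memory can be illustrated with a back-of-the-envelope energy model. Every constant below is an assumed, order-of-magnitude placeholder (not an AMD measurement); the point is only that moving a bit off-chip typically costs far more energy than the simple operation performed on it:

```python
# Toy model of the "Memory Wall" energy argument for processing-in-memory.
# All energy constants are illustrative assumptions, not measured values.

PJ_PER_BIT_DRAM_TRANSFER = 20.0   # assumed cost to move one bit off-chip
PJ_PER_OP_NEAR_MEMORY = 1.0       # assumed cost of a simple op in/near memory
PJ_PER_OP_ON_HOST = 1.0           # assume compute energy is similar either way

def host_energy_pj(n_bytes: int) -> float:
    """Energy to stream n_bytes to the host and touch each byte once,
    ignoring the (much smaller) cost of returning a reduced result."""
    bits = n_bytes * 8
    return bits * PJ_PER_BIT_DRAM_TRANSFER + n_bytes * PJ_PER_OP_ON_HOST

def pim_energy_pj(n_bytes: int) -> float:
    """Energy to touch each byte in place, with no off-chip transfer."""
    return n_bytes * PJ_PER_OP_NEAR_MEMORY

n = 1 << 20  # 1 MiB scanned once, e.g. a filter or reduction
print(host_energy_pj(n) / pim_energy_pj(n))  # ratio dominated by data movement
```

Under these assumptions the off-chip transfer, not the computation, dominates the energy bill, which is exactly the cost that offloading work to the memory device avoids.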

Machine Intelligence

Machine Learning (ML) is impacting everything from the world’s largest supercomputers to tiny embedded devices and is one of the key drivers behind expanding capabilities in every form of computation. AMD’s CPUs, GPUs, accelerators, and APUs offer the computation capability and the flexibility required for various deployments of ML.

The focus of the Machine Intelligence research thrust is to make AMD the platform of choice for ML everywhere. Our research projects are investigating potential solutions from a variety of directions:

  • Hardware: Accelerating ML through advancements spanning microarchitecture, system architecture, and memory systems
  • Software: Developing programming models and tools to ease deployment of ML algorithms onto future AMD platforms
  • ML for Science: Augmenting traditional HPC applications with machine learning to accelerate scientific discovery
  • ML for Silicon Design: Using ML to uncover optimal parameters and configurations in future AMD designs

Low Power

Improving energy efficiency contributes to lower total cost of ownership (TCO) in datacenters, longer battery life for laptops and embedded systems, fewer thermal hotspots (and thus more efficient cooling), and higher peak performance. At AMD Research, we focus on improving energy efficiency across the entire range of AMD SoCs, including CPUs and GPUs spanning datacenter servers to clients and mobile devices. Our research strives to achieve aggressive energy-efficiency goals by leveraging a multi-faceted approach:

  • Advanced Power Control: To reduce per-application power consumption, we are investigating cutting-edge extensions to AMD’s adaptive voltage frequency scaling (AVFS) technology that will allow future power distribution networks to more rapidly adjust to on-demand power needs. 
  • Low-Voltage Design: We are discovering new avenues of power savings through exciting research in low-voltage on-chip storage, adaptive signal encoding, asynchronous communication, and near-threshold computing.
  • Energy Efficient System Architectures: We are continuously exploring advanced power/performance optimizations for current and future core micro-architectures, SoCs, and HPC systems. These range from minimizing data movement in the system/SoC/core to fine-grained power-saving features in CPU blocks.
  • Machine-Learning Driven Design: We are exploring exciting new avenues for low-power design, driven by the recent proliferation of machine intelligence.
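Why voltage and frequency scaling pays off superlinearly follows from the classic CMOS dynamic-power relation, P = C · V² · f. The sketch below is a textbook illustration (not AMD’s AVFS implementation; the capacitance and clock values are assumed for illustration):

```python
# Classic switching-power model: dynamic power scales with the square of
# supply voltage and linearly with frequency. Constants below are assumed.

def dynamic_power(capacitance: float, voltage: float, frequency_hz: float) -> float:
    """Dynamic (switching) power in watts: P = C * V^2 * f."""
    return capacitance * voltage**2 * frequency_hz

# Scale frequency to 80% and voltage to 90% of nominal:
nominal = dynamic_power(1e-9, 1.0, 3.0e9)
scaled = dynamic_power(1e-9, 0.9, 0.8 * 3.0e9)
print(scaled / nominal)  # 0.9^2 * 0.8 = 0.648, i.e. ~35% dynamic-power saving
```

Because the voltage term is squared, a 20% frequency reduction paired with a 10% voltage reduction cuts dynamic power by roughly a third, which is why rapidly adapting voltage and frequency to on-demand needs is such a rich source of savings.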