
Automotive
Computer vision models help propel self-driving cars, recognizing signage, pedestrians, and other vehicles to avoid. Natural-language processing models help recognize spoken commands issued to in-car telematics systems.
AMD EPYC™ 9005 processor-based servers and cloud instances enable fast, efficient AI inference close to your Enterprise data, driving transformative business performance.
AI inference uses a trained AI model to make predictions on new data. AMD offers a range of inference solutions depending on your model size and application requirements. AMD EPYC™ processors excel at small to medium AI models and at workloads where proximity to data matters. For batch or offline processing applications where latency is not critical, AMD EPYC processors offer a cost-effective inference solution.
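As a concrete sketch of the batch/offline inference pattern described above, the snippet below scores a whole batch of new samples in one pass through a tiny linear classifier. The weights here are purely illustrative stand-ins, not a real trained model; in practice they would be loaded from a model file.

```python
import numpy as np

# Illustrative "trained" weights for a 4-feature, 3-class linear classifier.
# A real deployment would load these from a saved model, not hard-code them.
W = np.array([[ 0.9, -0.2,  0.1],
              [-0.4,  0.8,  0.3],
              [ 0.2,  0.1, -0.7],
              [ 0.5, -0.6,  0.4]])
b = np.array([0.1, -0.1, 0.0])

def predict(batch: np.ndarray) -> np.ndarray:
    """Score a whole batch in one matrix multiply; return class indices."""
    logits = batch @ W + b          # shape: (n_samples, 3)
    return logits.argmax(axis=1)    # one predicted class per sample

# Batch/offline inference: many samples per call amortizes per-call overhead,
# which is why latency-tolerant workloads run well on CPU.
new_data = np.random.default_rng(0).normal(size=(256, 4))
preds = predict(new_data)
print(preds.shape)  # (256,)
```

The single matrix multiply over the batch is also the shape of work that wide SIMD units (such as AVX-512) accelerate well.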
Below are some of the AI workloads that perform well on AMD EPYC processors. To dive deeper into each type of workload, read this article for the details.
Type of System | Examples | Rationale
Recommendation Systems | |
Machine Learning | |
Natural Language Processing | |
Mixed, AI-enabled Applications | |
Generative AI | |
Large Language Models | |
AI models integrated within computer vision, natural language processing, and recommendation systems have significantly impacted businesses across multiple industries. These models help companies recognize objects, classify anomalies, understand written and spoken words, and make recommendations. By accelerating the development of these models, businesses can reap the benefits, regardless of their industry.
Whether deployed as CPU only or used as a host for GPUs executing larger models, AMD EPYC™ 9005 Series processors are designed with the latest open standard technologies to accelerate Enterprise AI inference workloads.
Up to 192 AMD “Zen 5” Cores: a full 512-bit data path with AVX-512 instruction support delivers strong parallelism for AI inference workloads, reducing the need for GPU acceleration.
Designed for Concurrent AI and Traditional Workloads: 5th Gen AMD EPYC processors provide the highest integer performance for traditional workloads1 while delivering efficient inference across a variety of AI workloads and model sizes.
Fast Processing and I/O: a 37% generational increase in instructions per clock (IPC) for AI workloads,2 plus DDR5 memory and PCIe® Gen 5 I/O for fast data movement.
Framework Support: AMD supports the most popular AI frameworks, including TensorFlow, PyTorch, and ONNX Runtime, covering diverse use cases like image classification and recommendation engines.
Open Source and Compatibility: optimizations are integrated into popular frameworks, offering broad compatibility and open-source upstream friendliness. AMD is also working with Hugging Face to enable their open-source models out of the box with ZenDNN.
ZenDNN Plug-in: this plug-in accelerates AI inference workloads by optimizing operators, leveraging microkernels, and implementing efficient multithreading on AMD EPYC cores.
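As an illustrative sketch of the framework and plug-in support above (assuming PyTorch 2.x is installed; the `zentorch` package name and `torch.compile` backend string follow AMD's ZenDNN plug-in for PyTorch and may differ by release), CPU inference with an optional ZenDNN backend might look like:

```python
import torch
import torch.nn as nn

# A small stand-in model; a real deployment would load trained weights.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

# Hypothetical backend selection: use the zentorch backend when the ZenDNN
# plug-in is installed; otherwise fall back to plain eager execution.
try:
    import zentorch  # noqa: F401  (AMD's ZenDNN plug-in for PyTorch)
    backend = "zentorch"
except ImportError:
    backend = "eager"

compiled = torch.compile(model, backend=backend)

with torch.inference_mode():        # no autograd bookkeeping at inference
    batch = torch.randn(32, 64)     # a batch of new input data
    out = compiled(batch)

print(out.shape)
```

Because the plug-in registers as a `torch.compile` backend, the model code itself stays unchanged; only the backend string selects the ZenDNN-optimized operators on EPYC cores.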
As the use of digitization, cloud computing, AI, and other emerging technologies fuels the growth of data, the need for advanced security measures becomes even more pressing. This need is further amplified by the increased global emphasis on privacy regulations and severe penalties for breaches, highlighting the value of data amid rising security risks.
Built-in at the silicon level, AMD Infinity Guard offers the advanced capabilities required to defend against internal and external threats and help keep your data safe.3
AMD EPYC™ 9005 processor-based servers and cloud instances enable fast, efficient AI-enabled solutions close to your customers and data.