The AMD open AI ecosystem powers every stage of your journey—from cloud to edge. At Advancing AI 2025, AMD unveiled major innovations like Instinct™ M…
As organizations rethink infrastructure virtualization, many are facing steep licensing changes and limited flexibility. These shifts are prompting IT…
AMD Spartan™ UltraScale+™ FPGAs bring high I/O, low power, and state-of-the-art security features for cost-sensitive edge applications. Now available …
A step-by-step guide to adapting LLMs to new languages via continued pretraining, with Poro 2 boosting Finnish performance using Llama 3.1 and AMD GPU…
What are the key benefits of running AI inference on a CPU? How can CPUs be utilized for AI? Are GPUs truly required? Join us as we explore CPU infere…
In this blog, we will discuss the strength of the AMD NPU/iGPU Hybrid Optimized solution in both prefill and token generation for AI model inference and s…
Learn about Instella-Long: AMD’s open 3B language model supporting 128K context, trained on MI300X GPUs, outperforming peers on long-context benchmark…