Powering Scale-out AI Infrastructure

The AMD Pensando™ Pollara 400 AI NIC is engineered to accelerate applications running across AI nodes in mega-scale and giga-scale data centers, delivering Ethernet speeds of up to 400 gigabits per second (Gbps).

Built on the proven third-generation, fully hardware-programmable Pensando P4 engine, the AMD Pensando Pollara 400 AI NIC delivers leadership performance with the flexibility to be programmed to meet future requirements, helping hyperscalers, enterprises, cloud service providers, and researchers maximize their infrastructure investments.

Industry’s First AI NIC Supporting Ultra Ethernet Consortium (UEC) Features

The AMD Pensando™ Pollara 400 AI NIC is the industry's first Ultra Ethernet Consortium (UEC)-ready AI NIC. With its programmability, the NIC enables customers to select UEC features that bring intelligence to network monitoring and performance tuning. Through the fully programmable P4 engine, customers can upgrade any form factor of the AMD Pensando Pollara 400 AI NIC to address new industry standards as they evolve.

Bringing Ethernet Designed for AI to Open Compute Data Centers

The AMD Pensando™ Pollara 400 AI NIC is available in an Open Compute Project® (OCP®) standard OCP-3.0 form factor, enabling seamless integration with OCP-based servers and networks. By aligning with OCP standards, the NIC allows data centers to deploy a fully programmable 400 Gbps Ethernet interface across industry-standard OCP systems, unlocking exceptional interoperability, rapid scalability, and cost efficiency. The OCP-compatible AMD Pensando Pollara 400 AI NIC leverages a programmable P4 engine and advanced RDMA features, helping customers prepare infrastructure for future builds and accelerate AI workloads, while meeting open industry standards for hardware design and serviceability.

AMD Pensando™ Pollara 400 AI NIC In the Spotlight

The Critical Role of NIC Programmability in Scaling Out Data Center Networks for AI

Infrastructure buildouts are underway for hosting AI workloads. For effective scale-out, networks play a critical role, and those networks are leaning toward Ethernet. But effective networking isn't just about switches; building advanced functionality into network interface cards is an essential design strategy. Jim Frey, Principal Analyst of Enterprise Networking at Enterprise Strategy Group by TechTarget, shares his perspective on why he thinks AMD programmable NICs represent an optimized path to success.

Accelerate AI Performance at Scale

AI Workload Performance

With up to 400 Gbps GPU-to-GPU communication speeds and networking designed to accelerate AI workloads, the AMD Pensando™ Pollara 400 AI NIC can shorten job completion times, whether training the largest AI models, deploying the next generation of models, or researching cutting-edge advancements.

Cost Effective

Designed to meet the needs of AI workloads today and tomorrow, the AMD Pensando™ Pollara 400 AI NIC is compatible with an open ecosystem, allowing customers to lower capex while retaining the flexibility to scale infrastructure in the future.

Intelligent Network Monitoring

Save time on traditional network monitoring and performance tuning tasks. The AMD Pensando™ Pollara 400 AI NIC load balances network traffic while monitoring network metrics, allowing teams to proactively identify and address potential issues before they escalate into critical disruptions.

Boost AI Performance and Network Reliability

Up to 25% Improved Performance 1

Achieve up to 25% improvement in RCCL performance, significantly boosting multi-GPU and scale-out network efficiency. With advanced collective communication optimizations, intelligent load balancing, and resilient failover mechanisms, accelerate AI workloads while maximizing infrastructure utilization and scaling capabilities.

Up to 15% Reduction in AI Job Runtime 2

Enhance runtime performance by approximately 15% for certain applications. With features including intelligent network load balancing, fast failover, and loss recovery, the AMD Pensando Pollara 400 AI NIC helps accelerate workloads while maximizing AI investments.

Up to 10% Improved Network Reliability 1

Gain up to 10% improved network uptime. With the AMD Pensando Pollara 400 AI NIC, minimize cluster downtime while increasing network resilience and availability with state-of-the-art reliability, availability, and serviceability (RAS) features and fast failure recovery.

Intelligent Network Monitoring and Load Balancing

Intelligent Packet Spray

Intelligent packet spray enables teams to seamlessly optimize network performance through enhanced load balancing, boosting overall efficiency and scalability. Improved network performance can significantly reduce GPU-to-GPU communication times, leading to faster job completion and greater operational efficiency.
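
To make the idea concrete, here is a minimal, hypothetical Python sketch of per-packet spraying (the class name, load metric, and tie-breaking policy are illustrative assumptions, not the NIC's actual P4 implementation): each packet of a flow is steered to whichever path currently has the least data in flight, rather than pinning the whole flow to one path as classic flow-hash ECMP does.

```python
import random

class PacketSprayer:
    """Hypothetical sketch of per-packet load balancing ("packet spray").

    Flow-based ECMP pins an entire flow to one path; spraying instead
    spreads the packets of a single flow across all available paths,
    favoring whichever path currently looks least loaded.
    """

    def __init__(self, num_paths):
        # Outstanding (unacknowledged) bytes per path: a simple congestion proxy.
        self.in_flight = [0] * num_paths

    def pick_path(self, packet_size):
        # Choose the least-loaded path; break ties randomly to avoid bias.
        least = min(self.in_flight)
        candidates = [i for i, load in enumerate(self.in_flight) if load == least]
        path = random.choice(candidates)
        self.in_flight[path] += packet_size
        return path

    def on_ack(self, path, acked_bytes):
        # Acknowledgments drain the in-flight counter for that path.
        self.in_flight[path] -= acked_bytes

# Spray a 16-packet message across 4 paths.
sprayer = PacketSprayer(num_paths=4)
for seq in range(16):
    print(f"packet {seq} -> path {sprayer.pick_path(packet_size=4096)}")
```

Because packets of one flow take different paths, they can arrive out of order, which is exactly why the out-of-order handling described next matters.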

Out-of-order Packet Handling and In-order Message Delivery

Help ensure messages are delivered in the correct order, even when employing multipathing and packet spraying techniques. The advanced out-of-order packet handling feature efficiently processes data packets that arrive out of sequence, placing them directly into GPU memory without the need for intermediate buffering.
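
The mechanism can be illustrated with a small, hypothetical Python sketch (the class name and packet format are assumptions for illustration): because each packet carries its offset within the message, the receiver can write it straight into its final position in the destination buffer, standing in for GPU memory, and hand the message over only once every byte is present.

```python
class MessageReassembler:
    """Hypothetical sketch: direct placement of out-of-order packets.

    Packets may arrive in any order, yet the completed message is
    delivered in order, with no intermediate reorder buffer.
    """

    def __init__(self, message_size, packet_size):
        self.buffer = bytearray(message_size)  # stands in for GPU memory
        self.packet_size = packet_size
        num_packets = (message_size + packet_size - 1) // packet_size
        self.received = [False] * num_packets  # completion bitmap

    def on_packet(self, offset, payload):
        # Write the payload directly at its final offset, whatever the arrival order.
        self.buffer[offset:offset + len(payload)] = payload
        self.received[offset // self.packet_size] = True
        return all(self.received)  # True once the whole message is complete

# Packets arrive out of order (offsets 8, 0, 4); the message still completes correctly.
r = MessageReassembler(message_size=12, packet_size=4)
for off, data in [(8, b"IJKL"), (0, b"ABCD"), (4, b"EFGH")]:
    done = r.on_packet(off, data)
print(bytes(r.buffer), done)  # b'ABCDEFGHIJKL' True
```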

Selective Retransmission

Boost network performance with selective acknowledgment (SACK) retransmission, which helps ensure that only dropped or corrupted packets are retransmitted. By detecting and resending only lost or damaged packets, SACK optimizes bandwidth utilization, helps reduce latency during loss recovery, and minimizes redundant data transmission.
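
A minimal, hypothetical Python sketch of the idea (the helper name and bitmap encoding are illustrative assumptions): the receiver reports exactly which sequence numbers arrived, and the sender resends only the gaps, rather than everything after the first loss as a go-back-N scheme would.

```python
def sack_retransmit(sent, sack_bitmap):
    """Hypothetical SACK-style recovery helper.

    `sack_bitmap` maps each sent sequence number to True if the receiver
    acknowledged it. Only the unacknowledged gaps are retransmitted.
    """
    return [seq for seq in sent if not sack_bitmap.get(seq, False)]

# Sender transmitted sequences 0..7; the receiver's SACK says 3 and 6 never arrived.
sent = list(range(8))
sack = {seq: seq not in (3, 6) for seq in sent}
print(sack_retransmit(sent, sack))  # [3, 6] -- only the lost packets go again
```

A go-back-N sender in the same situation would resend sequences 3 through 7, wasting bandwidth on three packets that already arrived.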

Path-Aware Congestion Control

Focus on workloads, not network monitoring, with real-time telemetry and network-aware algorithms. The path-aware congestion control feature simplifies network performance management, enabling teams to quickly detect and address critical issues while helping mitigate the impact of incast scenarios.
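
As an illustration only, here is a hypothetical Python sketch of per-path rate control (the class name, constants, and telemetry signal are assumptions, not the NIC's actual algorithm): each path maintains its own sending rate, nudged up while telemetry looks healthy and cut sharply when that specific path reports congestion, so an incast on one path does not throttle the others.

```python
class PathAwareCC:
    """Hypothetical sketch of path-aware congestion control.

    Each path keeps an independent rate, adjusted from per-path telemetry
    (e.g. ECN marks or rising RTT). Only the congested path slows down,
    so traffic naturally shifts toward healthy paths.
    """

    def __init__(self, paths, line_rate_gbps=400.0):
        # Start with the line rate split evenly across paths (a sketch-level choice).
        self.rate = {p: line_rate_gbps / len(paths) for p in paths}
        self.line_rate = line_rate_gbps

    def on_telemetry(self, path, congested):
        if congested:
            self.rate[path] *= 0.5  # multiplicative decrease on a congestion signal
        else:
            # Additive increase while this path looks healthy, capped at line rate.
            self.rate[path] = min(self.rate[path] + 5.0, self.line_rate)

cc = PathAwareCC(paths=["A", "B", "C", "D"])
cc.on_telemetry("B", congested=True)   # an incast hits path B only
cc.on_telemetry("A", congested=False)  # path A keeps ramping up
print({p: round(r, 1) for p, r in cc.rate.items()})
```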

Rapid Fault Detection 

With rapid fault detection, teams can pinpoint issues within milliseconds, enabling near-instantaneous failover recovery and helping significantly reduce GPU downtime. Tap into elevated network observability with near real-time latency metrics, congestion data, and drop statistics.
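
A toy Python sketch of timeout-based fault detection (the names and timeout value are hypothetical; the NIC's actual detection mechanism is not described here): a path is declared down if nothing has been heard from it within a millisecond-scale window, so traffic can fail over to surviving paths almost immediately instead of waiting for routing to reconverge.

```python
import time

class FaultDetector:
    """Hypothetical sketch: millisecond-scale liveness tracking per path."""

    def __init__(self, paths, timeout_ms=5.0):
        self.timeout = timeout_ms / 1000.0
        # Record when each path was last heard from (probe reply or traffic).
        self.last_seen = {p: time.monotonic() for p in paths}

    def heartbeat(self, path):
        self.last_seen[path] = time.monotonic()

    def live_paths(self):
        # Any path silent longer than the timeout is treated as failed.
        now = time.monotonic()
        return [p for p, t in self.last_seen.items() if now - t < self.timeout]

det = FaultDetector(paths=["A", "B"], timeout_ms=5.0)
time.sleep(0.01)         # 10 ms pass with no reply on either path...
det.heartbeat("A")       # ...then path A's probe comes back
print(det.live_paths())  # ['A'] -- path B is failed over within ~5 ms
```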

AMD Pensando™ Pollara 400 AI NIC Specifications

Maximum Bandwidth: Up to 400 Gbps
Form Factor: Half-height, half-length
Ethernet Interface: PCIe® Gen 5.0 x16; OCP® 3.0
Ethernet Speeds: 25/50/100/200/400 Gbps
Ethernet Configurations: Supports up to 4 ports
- 1 x 400G
- 2 x 200G
- 4 x 100G
- 4 x 50G
- 4 x 25G
Management: MCTP over SMBus

Explore the full suite of AMD networking solutions designed for high-performance modern data centers.

Resources

Unlock the Future of AI Networking

Learn how the AMD Pensando Pollara 400 AI NIC can transform your scale-out AI infrastructure.

Footnotes
  1. PEN-016: Testing conducted by AMD Performance Labs as of April 28, 2025, on the AMD Pensando™ Pollara 400 AI NIC, on a production system comprising: 2 nodes of 8x MI300X AMD GPUs (16 GPUs total); a Broadcom Tomahawk-4-based leaf switch (64x400G) from MICAS Network; CLOS topology; 16 AMD Pensando Pollara AI NICs; CPU in each of the 2 nodes: dual-socket 5th Gen Intel® Xeon® 8568 48-core CPUs with PCIe® Gen 5; BIOS version 1.3.6; mitigations off (default);
    system profile setting: Performance (default); SMT enabled (default); operating system: Ubuntu 22.04.5 LTS, kernel 5.15.0-139-generic.
    The following operation was measured: All-Reduce.
    An average 25% improvement was measured for All-Reduce operations with 4 QPs using UEC-ready RDMA vs. RoCEv2 across multiple message sizes (512 MB, 1 GB, 2 GB, 4 GB, 8 GB, 16 GB). Results are based on the average of at least 8 test runs.
  2. Claim reflects technology used in AMD Pensando Pollara 400 NICs; however, testing and data are not specific to the Pollara 400. Results may vary.
    Dong, Jianbo, et al. (2024). "Boosting Large-scale Parallel Training Efficiency with C4: A Communication-Driven Approach." doi:10.48550/arXiv.2406.04594. See also the Meta research paper "The Llama 3 Herd of Models," Table 5.
  3. Claim reflects technology used in AMD Pensando Pollara 400 NICs; however, testing and data are not specific to the Pollara 400. Results may vary.
    Dubey, Abhimanyu, et al. (2024). "The Llama 3 Herd of Models." doi:10.48550/arXiv.2407.21783.
  4. Open Compute Project® and OCP® are registered trademarks of the Open Compute Project Foundation.