Article Number: RN-RADEONPROSSG-LINUX

This document describes how to set up the Radeon PRO SSG Card for the first time.

Driver Installation

Detailed installation and configuration instructions can be found here:

Ubuntu          :  https://www.amd.com/en/support/kb/faq/gpu-635
RHEL / CentOS   :  https://www.amd.com/en/support/kb/faq/gpu-637

Enabling Direct GMA support for SSG

Two module parameters also need to be passed to the AMDGPU-PRO driver at boot. The syntax is amdgpu.ssg=1 amdgpu.direct_gma_size=X, where X is the amount of memory in MB to reserve, up to a maximum of 96 MB.

Direct GMA support for SSG is currently only available on Ubuntu 16.04, RHEL 7.3 and CentOS 7.3.

Ubuntu 16.04 LTS

  • Edit /etc/default/grub as root and modify GRUB_CMDLINE_LINUX_DEFAULT to add "amdgpu.ssg=1 amdgpu.direct_gma_size=96" (without the quotes). The line may look something like this after the change: 
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.ssg=1 amdgpu.direct_gma_size=96"
  • Modifying /etc/default/grub adds the parameters globally, for all kernels. To enable them for an individual kernel only, edit its entry in /boot/grub/grub.cfg instead.
  • Update grub and reboot:
    $ sudo update-grub
    $ sudo reboot


RHEL 7.3

  • Edit /etc/default/grub as root and modify GRUB_CMDLINE_LINUX to add "amdgpu.ssg=1 amdgpu.direct_gma_size=96" (without the quotes). The line may look something like this after the change: 
    GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb splash amdgpu.ssg=1 amdgpu.direct_gma_size=96"
  • Update grub and reboot. The location of grub.cfg depends on how the system boots:
    On UEFI-based machines: $ sudo grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
    On BIOS-based machines: $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    $ sudo reboot

CentOS 7.3

  • Edit /etc/default/grub as root and modify GRUB_CMDLINE_LINUX to add "amdgpu.ssg=1 amdgpu.direct_gma_size=96" (without the quotes). The line may look something like this after the change: 
    GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb splash amdgpu.ssg=1 amdgpu.direct_gma_size=96"
  • Update grub and reboot. The location of grub.cfg depends on how the system boots:
    On UEFI-based machines: $ sudo grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
    On BIOS-based machines: $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    $ sudo reboot


Modifying /etc/default/grub applies the change globally, for all kernels. The parameters can also be added for an individual kernel by appending them to that kernel's boot command in the generated grub.cfg (see the paths above).
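
As a sketch of the per-kernel approach, inside the menuentry block for the chosen kernel in the generated grub.cfg the flags are appended to the end of the boot command (the linux, linux16 or linuxefi line, depending on the system). The kernel path, root device and other options shown here are illustrative only:

    linux   /boot/vmlinuz-4.4.0-generic root=UUID=... ro quiet splash amdgpu.ssg=1 amdgpu.direct_gma_size=96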

After restarting, you can check whether the flags were included:

$ cat /proc/cmdline

If everything was set correctly, the flags will appear at the end of the kernel command line.
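
For reference, the output may look similar to the following; the BOOT_IMAGE path, root device and other parameters shown here are illustrative and will differ per system:

$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.0-generic root=UUID=... ro quiet splash amdgpu.ssg=1 amdgpu.direct_gma_size=96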


First-time NVMe RAID 0 Setup

Although NVMe support has been present in older kernels, NVMe peer-to-peer support is only available in newer kernels. Check that CONFIG_NVME_TARGET is set in the kernel configuration file.
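
As a quick check, the installed kernel's configuration can be searched directly, assuming the distribution ships it under /boot (as Ubuntu 16.04, RHEL 7.3 and CentOS 7.3 do). The option may be built in (=y) or built as a module (=m); the output below is illustrative:

$ grep CONFIG_NVME_TARGET /boot/config-$(uname -r)
CONFIG_NVME_TARGET=m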

For best performance we recommend configuring all four drives in a RAID 0 array. Note, however, that RAID 0 provides no redundancy: if one drive fails, the data on the entire array is lost.

Steps 1 to 7 are only needed once, during first-time setup.

  1. Check that the NVMe devices are detected properly. Four devices from the Radeon Pro SSG card should be listed.
    $ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

    NAME      SIZE FSTYPE TYPE MOUNTPOINT
    nvme0n1   477G        disk
    nvme1n1   477G        disk
    nvme2n1   477G        disk
    nvme3n1   477G        disk

  2. Install mdadm utility.

    Ubuntu 16.04:
    $ sudo apt-get install mdadm

    RHEL7.3/CentOS7.3:
    $ sudo yum install mdadm
     
  3. Reset the devices back to normal by zeroing their superblock.

    $ sudo mdadm --zero-superblock /dev/nvme0n1

    $ sudo mdadm --zero-superblock /dev/nvme1n1
    ...
    (repeat for each NVMe device in your machine)
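
    As a sketch, the same command can be run over all four devices with a shell loop, assuming the SSG drives enumerate as nvme0n1 through nvme3n1 as shown in step 1:

    $ for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do sudo mdadm --zero-superblock "$dev"; done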
     
  4. Create RAID 0 array with the available devices.

    $ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
     
  5. Check that the RAID array was created successfully.

    $ cat /proc/mdstat

    Ubuntu:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : active raid0 nvme2n1[2] nvme3n1[3] nvme1n1[1] nvme0n1[0]

    1999904768 blocks super 1.2 512k chunks

     
    RHEL 7.3:

    Personalities : [raid0]

    md127 : active raid0 nvme0n1p1[0] nvme1n1p1[1] nvme3n1p1[3] nvme2n1p1[2]

    1999904768 blocks super 1.2 512k chunks
     
  6. Create a filesystem. This example uses ext4.

    $ sudo mkfs.ext4 -F /dev/md0
     
  7. Mount the filesystem. 

    $ sudo mkdir -p /media/PROSSG

    $ sudo mount /dev/md0 /media/PROSSG
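
    To confirm the array is mounted, the mount point from this example can be checked with df; the sizes shown below are illustrative only:

    $ df -h /media/PROSSG
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/md0        1.8T   68M  1.7T   1% /media/PROSSG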


*Note: The steps above are an example only. If a different configuration or mount point is required, consult the formal Linux documentation for detailed RAID setup information.

To have the RAID array assembled automatically after a reboot, save the array layout in mdadm.conf and add the mount point to /etc/fstab (an example fstab entry is shown after the commands below).

Ubuntu 16.04:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

$ sudo update-initramfs -u


RHEL7.3/CentOS7.3:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

$ sudo dracut --regenerate-all --force
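
For the /etc/fstab entry mentioned above, a minimal example line using the /dev/md0 device and /media/PROSSG mount point from this guide is shown below (referencing the filesystem by its UUID from blkid is also common, and the nofail option prevents the boot from stalling if the array is unavailable):

/dev/md0    /media/PROSSG    ext4    defaults,nofail    0    0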


*Note: Some users have had issues with mdadm not remembering the arrays after rebooting; this can be fixed by changing the partition