Open MPI with AMD Zen Software Studio
Introduction
The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.
Official Website: https://www.open-mpi.org/
Build Open MPI using Spack
Please refer to the getting-started guide for setting up Spack with AMD Zen Software Studio.
# Building Open MPI
$ spack install openmpi %aocc fabrics=cma,ucx
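For repeatable builds, the same spec can be pinned in a Spack environment instead of being typed on the command line. The sketch below is illustrative: the environment name `openmpi-aocc` is arbitrary, and the `spack.yaml` shown is a minimal example rather than a complete site configuration.

```yaml
# spack.yaml -- minimal environment pinning Open MPI built with AOCC
# (environment name and settings are illustrative)
spack:
  specs:
  - openmpi %aocc fabrics=cma,ucx
  concretizer:
    unify: true
```

With this file in place, `spack env create openmpi-aocc spack.yaml`, `spack env activate openmpi-aocc`, and `spack install` build the same spec as the one-line command above. Running `spack spec openmpi %aocc fabrics=cma,ucx` first is a useful way to preview the concretized dependency tree before installing.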
Explanation of the command options:
Symbol | Meaning |
---|---|
`%aocc` | Use AOCC as the compiler. |
`fabrics=cma,ucx` | Low-level communication mechanisms to use. The optimal choice depends partly on the high-performance network hardware in use, but a good choice also improves intra-node communication. For AMD EPYC-based compute nodes, we recommend the cma (cross-memory attach) fabric for shared-memory transfers within a node. The ucx fabric is a reasonable choice for most network hardware, but optimal performance will likely be obtained with the solution recommended by your network hardware vendor. For more information, see https://docs.open-mpi.org/en/v5.0.x/release-notes/networks.html and the output of `spack info openmpi`. |
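Once Open MPI is installed, other Spack-built applications can be steered toward it by declaring it the preferred MPI provider in a `packages.yaml`. This is a minimal sketch of that configuration (placed, for example, in `~/.spack/packages.yaml`); the `compiler` preference line assumes AOCC has already been registered with Spack.

```yaml
# packages.yaml -- prefer Open MPI as the MPI provider and AOCC as the
# compiler for subsequent builds (a minimal, illustrative configuration)
packages:
  all:
    providers:
      mpi: [openmpi]
    compiler: [aocc]
```

With this preference set, a spec such as `spack install quantum-espresso +mpi` will select Open MPI to satisfy the virtual `mpi` dependency without it being named explicitly.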