Introduction

CP2K is a quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal, and biological systems. It provides a general framework for different modeling methods, such as DFT using the mixed Gaussian and plane waves approach.

CP2K is written in Fortran 2008 and can run efficiently in parallel using a combination of OpenMP multi-threading and MPI. On AMD platforms it can be built with AOCC and AOCL. The Spack framework and the instructions below provide a convenient way to build CP2K optimized for your platform and package versions.

Official website for CP2K: https://www.cp2k.org


Build CP2K using Spack

Please refer to this link for getting started with Spack using AMD Zen Software Studio.
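
Before committing to a long build, it can help to preview how Spack will resolve the spec. The commands below are standard, read-only Spack CLI; exact version and variant availability depends on your Spack checkout, so treat this as a sketch:

```shell
# List the versions and variants Spack knows for CP2K
$ spack info cp2k

# Preview the concretized dependency tree for the spec used below,
# without installing anything
$ spack spec cp2k@2023.1 %aocc ^amdfftw ^amdscalapack ^amdblis ^amdlibflame
```

If the concretized tree pulls in an unexpected BLAS or MPI provider, adjust the `^` dependency specs before running `spack install`.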

# Example: For building CP2K 2023.1 with AOCC and AOCL
$ spack install cp2k@2023.1 %aocc ^amdfftw ^amdscalapack ^amdblis ^amdlibflame ^libint ^openmpi fabrics=cma,ucx

# Example: For building CP2K 2023.1 with ELPA, AOCC, and AOCL
$ spack install cp2k@2023.1+elpa %aocc ^amdfftw ^amdscalapack ^amdblis ^amdlibflame ^elpa+openmp ^libint ^openmpi fabrics=cma,ucx

Explanation of the command options:

Symbol                     Meaning
%aocc                      Build CP2K with the AOCC compiler.
+elpa                      Enable optimized diagonalization routines from ELPA.
^amdfftw                   Use amdfftw as the FFTW implementation.
^amdscalapack              Use amdscalapack as the ScaLAPACK implementation.
^amdblis                   Use amdblis as the BLAS implementation.
^amdlibflame               Use amdlibflame as the LAPACK implementation.
^libint                    Use libint for the evaluation of two-electron integrals.
^elpa+openmp               Build ELPA with OpenMP support enabled.
^openmpi fabrics=cma,ucx   Use Open MPI as the MPI provider, with CMA for efficient intra-node communication and the UCX fabric as a fallback if required.
Note: It is advised to set the fabric appropriate to the host system explicitly where possible. Refer to Open MPI with AMD Zen Software Studio for more guidance.
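
As an illustration, the fabric list can be narrowed to match the hardware. The specs below are sketches, not prescriptions; substitute the fabric your system actually provides:

```shell
# Single node, shared-memory communication only: CMA is sufficient
$ spack install cp2k@2023.1 %aocc ^openmpi fabrics=cma

# Multi-node InfiniBand cluster: UCX is the usual choice
$ spack install cp2k@2023.1 %aocc ^openmpi fabrics=ucx
```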

Running CP2K

CP2K ships with a benchmark suite in its source tree. The purpose of the suite is to provide performance results that can guide users towards the best configuration (e.g. machine, number of MPI processes, number of OpenMP threads) for a particular problem, and to give a good estimate of the code's parallel performance for different types of method. Five benchmarks are provided: H2O-64, Fayalite-FIST, LiH-HFX, H2O-DFT-LS, and H2O-64-RI-MP2.

Runtime optimization: Process binding/pinning at runtime has been observed to affect CP2K performance significantly, either positively or negatively, depending on problem type and size. AMD recommends testing performance with different options, such as --bind-to core, --bind-to socket, or no binding at all. For example, with the benchmark case H2O-dft-ls, not binding MPI processes to hardware has been observed to deliver the best performance.
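
One concrete way to follow this advice is to run the same benchmark once per binding policy and compare wall times. The loop below is a sketch: it only prints the candidate command lines, and the actual timing line is left commented out until `cp2k.psmp` and the input file are available from the build and download steps in this guide.

```shell
#!/bin/bash
# Sketch: enumerate Open MPI binding policies to benchmark one by one.
INPUT=H2O-dft-ls.NREP4.inp
NRANKS=$(nproc)

for BIND in "--bind-to core" "--bind-to socket" "--bind-to none"; do
  # Print the command that would be timed for this policy
  echo "candidate: mpirun -np $NRANKS $BIND cp2k.psmp $INPUT"
  # Uncomment to actually time each run:
  # /usr/bin/time -p mpirun -np "$NRANKS" $BIND cp2k.psmp "$INPUT"
done
```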

Sample script for running CP2K with the H2O-dft-ls.NREP4.inp input from the H2O-DFT-LS benchmark.

Run Script for AMD EPYC™ Processors

#!/bin/bash
# Load the CP2K build done with AOCC
spack load cp2k %aocc

# Download the H2O-dft-ls.NREP4 input file from the CP2K repository
wget https://raw.githubusercontent.com/cp2k/cp2k/refs/heads/master/benchmarks/QS_DM_LS/H2O-dft-ls.NREP4.inp

# MPI and OpenMP settings:
# one MPI rank per core, one OpenMP thread per rank
MPI_RANKS=$(nproc)
export OMP_NUM_THREADS=1
MPI_OPTS="-np $MPI_RANKS --bind-to core --map-by core"

# Run the benchmark
mpirun $MPI_OPTS cp2k.psmp H2O-dft-ls.NREP4.inp
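
For hybrid MPI + OpenMP runs, the rank count should shrink as the thread count grows so that ranks × threads stays at or below the core count. The variant below is a sketch of how the settings above could be adapted: the 4-thread choice is an assumption to tune per system, and `--map-by slot:PE=N` is standard Open MPI syntax for binding N processing elements to each rank.

```shell
#!/bin/bash
# Hybrid variant: THREADS OpenMP threads per MPI rank (4 is an example value)
THREADS=4
export OMP_NUM_THREADS=$THREADS

# Divide the available cores among the MPI ranks
TOTAL_CORES=$(nproc)
MPI_RANKS=$(( TOTAL_CORES / THREADS ))

# Map each rank to a contiguous block of THREADS cores
MPI_OPTS="-np $MPI_RANKS --map-by slot:PE=$THREADS --bind-to core"

# Print the resulting command; replace echo with the real invocation to run
echo "mpirun $MPI_OPTS cp2k.psmp H2O-dft-ls.NREP4.inp"
```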

Note: The above build and run steps are tested with CP2K-2023.1, AOCC-5.0.0, AOCL-5.0.0, and OpenMPI-5.0.5 on Red Hat Enterprise Linux release 8.9 (Ootpa) using Spack v0.23.0.dev0 (commit id: 2da812cbad).

For technical support on the tools, benchmarks, and applications that AMD offers on this page, and for related inquiries, reach out to us at toolchainsupport@amd.com.