Introduction

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code. LAMMPS can be used to simulate solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems. LAMMPS runs on single processors or in parallel using message-passing techniques with a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.

Official website for LAMMPS: https://www.lammps.org

Build LAMMPS using Spack

Please refer to the Getting Started with Spack using AMD Zen Software Studio guide before building LAMMPS.
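
If Spack is not yet set up on the system, a minimal bootstrap looks like the sketch below (the clone location is arbitrary and shown only for illustration):

    # Clone Spack and activate it in the current shell
    $ git clone https://github.com/spack/spack.git
    $ . spack/share/spack/setup-env.sh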

    # Example for building LAMMPS with AOCC and AOCL.
    $ spack install lammps %aocc +intel ~kim +asphere +class2 +kspace +manybody +molecule +extra-dump +opt +replica +rigid +granular +openmp-package +openmp ^amdfftw ^openmpi fabrics=cma,ucx

Explanation of the command options:

%aocc: Build LAMMPS with the AOCC compiler.
^amdfftw: Use amdfftw (AMD-optimized FFTW) as the FFTW implementation.
+asphere, +class2, +kspace, +manybody, +molecule, +extra-dump, +opt, +replica, +rigid, +granular: LAMMPS-specific packages; enable them as required by your workload.
+intel: Build LAMMPS with the INTEL package, which enables performance improvements through vectorization support for single, mixed, and double precision on CPUs and accelerators. Details of the INTEL package are available at "https://docs.lammps.org/Speed_intel.html#intel-package". This option is compatible with AOCC 4.0+ and LAMMPS 20220324+.
+openmp-package: Build LAMMPS with the OPENMP package. It provides optimized, multi-threaded versions of many pair styles, nearly all bonded styles (bond, angle, dihedral, improper), several kspace styles, and a few fix styles. Details of the OPENMP package are available at "https://docs.lammps.org/Speed_omp.html#openmp-package".
+openmp: Build LAMMPS with OpenMP support enabled.
^openmpi fabrics=cma,ucx: Use Open MPI as the MPI provider, with CMA for efficient intra-node communication, falling back to the UCX network fabric if required.

Note: It is advised to explicitly set the fabric appropriate to the host system where possible. Refer to Open MPI with AMD Zen Software Studio for more guidance.
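
Before running the install, the concretized dependency tree can be previewed with spack spec. A minimal sketch follows; the fabrics=ucx value is illustrative (for example, for an InfiniBand system), not prescriptive:

    # Preview the fully resolved spec without building anything
    $ spack spec lammps %aocc +intel +openmp ^amdfftw ^openmpi fabrics=ucx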

Running LAMMPS

While LAMMPS supports a wide variety of workloads, the steps below download and run the sample data sets shipped with the LAMMPS source code.

Run Script for AMD EPYC™ Processors

#!/bin/bash
# Load the LAMMPS build compiled with AOCC
spack load lammps %aocc

# Obtaining Benchmarks
# Download the Rhodopsin dataset.
wget https://raw.githubusercontent.com/lammps/lammps/develop/bench/in.rhodo.scaled
wget https://raw.githubusercontent.com/lammps/lammps/develop/bench/data.rhodo

# MPI and OpenMP settings
# MPI_RANKS = number of cores available on the system.
MPI_RANKS=$(nproc)
export OMP_NUM_THREADS=1
MPI_OPTS="-np $MPI_RANKS --map-by core --bind-to core"

# Run the benchmark with the INTEL package
mpirun $MPI_OPTS lmp -var x 8 -var y 8 -var z 8 -in in.rhodo.scaled -sf intel -pk intel 0
# -pk intel 0 runs the INTEL package on the CPU only.
# -sf intel automatically appends "intel" to styles that support it.

# Run the benchmark with the OPENMP package
mpirun $MPI_OPTS lmp -var x 8 -var y 8 -var z 8 -in in.rhodo.scaled -sf omp -pk omp 1
# -pk omp 1 runs the OPENMP package with 1 OpenMP thread per rank.
# -sf omp automatically appends "omp" to styles that support it.
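
The OPENMP package can also be combined with fewer MPI ranks in a hybrid MPI+OpenMP layout. The sketch below assumes a 128-core system split into 64 ranks with 2 OpenMP threads each; adjust the counts to match the actual core count:

    # Hybrid example: 64 MPI ranks x 2 OpenMP threads (assumes 128 cores)
    export OMP_NUM_THREADS=2
    mpirun -np 64 --map-by slot:PE=2 --bind-to core lmp -var x 8 -var y 8 -var z 8 -in in.rhodo.scaled -sf omp -pk omp 2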

Note: The above build and run steps were tested with LAMMPS (29 Aug 2024), AOCC 5.0.0, AOCL 5.0.0, and Open MPI 5.0.5 on Red Hat Enterprise Linux release 8.9 (Ootpa) using Spack v0.23.0.dev0 (commit id: a00fddef4e).

For technical support on the tools, benchmarks, and applications that AMD offers on this page, and for related inquiries, reach out to us at toolchainsupport@amd.com.