
LAMMPS

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics code with a focus on materials modeling.

Keywords: materials science, physics, chemistry, molecular dynamics

Available modules

There are several LAMMPS modules on Midway2 and Midway3, which you can list with module avail lammps:

---------------------------- /software/modulefiles2 ----------------------------
lammps/17Nov2016+intelmpi-5.1+intel-16.0
lammps/23Jun2022+oneapi-2021  
---------------------------- /software/modulefiles -----------------------------
lammps/29Oct2020 
lammps/20Sep2021(default)
lammps/20Sep2021-gpu
lammps/24Mar2022
lammps/24Mar2022-gpu
lammps/21Nov2023

The gpu suffix indicates that a module supports GPU acceleration and should be run on a GPU node. You can also show the dependencies of an individual module; for example, on Midway3, if you do

module show lammps/21Nov2023
you will get
-------------------------------------------------------------------
/software/modulefiles/lammps/21Nov2023:

module-whatis   {setup lammps 21Nov2023 compiled with the system compiler}
conflict        lammps
module          load intelmpi/2021.5+intel-2022.0 mkl/2022.0 cuda/11.5
prepend-path    PATH /software/lammps-21Nov2023-el8-x86_64/bin

Note

LAMMPS is under active development. You are encouraged to build the latest stable version from source code in your own space using the provided compilers.

In this case you can see that this module was compiled with intelmpi/2021.5+intel-2022.0 and mkl/2022.0, and also loads cuda/11.5 for GPU support. After loading the module, you can run

lmp -h
to see the features that are supported by the executable lmp.
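For example, to check quickly whether GPU support was compiled into the binary, you can filter the help output (a simple illustration; the exact wording of the package list may vary between LAMMPS versions):

lmp -h | grep -i gpu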

There are several LAMMPS binaries in the folder /software/lammps-21Nov2023-el8-x86_64/bin. The README file therein explains the difference between the binaries and how to build LAMMPS from source.
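If you want to follow the note above and build the latest stable release in your own space, a minimal CMake build might look like the following sketch. The module names are the same ones this module already depends on; the package selection (PKG_MOLECULE, PKG_KSPACE) is only an example, so consult the README and the LAMMPS manual for the options you actually need:

module load intelmpi/2021.5+intel-2022.0 mkl/2022.0 cuda/11.5
git clone -b stable https://github.com/lammps/lammps.git
cd lammps && mkdir build && cd build
cmake ../cmake -D BUILD_MPI=yes -D PKG_MOLECULE=yes -D PKG_KSPACE=yes
make -j 8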

Example job scripts

An example batch script to run LAMMPS on CPU nodes is given below:

#!/bin/bash
#SBATCH --job-name=lmp-bench
#SBATCH --account=pi-[cnetid]
#SBATCH --time=01:00:00
#SBATCH --partition=caslake
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16

module load lammps/21Nov2023

cd $SLURM_SUBMIT_DIR

ntasks_per_node=$SLURM_NTASKS_PER_NODE
numnodes=$SLURM_JOB_NUM_NODES
n=$(( ntasks_per_node * numnodes ))

mpirun -np $n lmp_cpu -in in.txt
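Assuming the script above is saved as lmp-cpu.sbatch (the filename is arbitrary), submit it with

sbatch lmp-cpu.sbatch

Note that n computed above is the total task count, so it equals $SLURM_NTASKS and mpirun -np $SLURM_NTASKS would work equally well.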

The following script illustrates how to run the LAMMPS binary built with the GPU package, lmp_gpu:

#!/bin/bash
#SBATCH --job-name=lmp-bench
#SBATCH --account=pi-[cnetid]
#SBATCH --time=01:00:00
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:2
#SBATCH --constraint=v100

module load lammps/21Nov2023

cd $SLURM_SUBMIT_DIR

ntasks_per_node=$SLURM_NTASKS_PER_NODE
numnodes=$SLURM_JOB_NUM_NODES
n=$(( ntasks_per_node * numnodes ))

mpirun -np $n lmp_gpu -in in.txt -sf gpu -pk gpu 2
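Here -sf gpu appends the gpu suffix to all styles that support it, and -pk gpu 2 tells the GPU package to use 2 GPUs per node, matching the --gres=gpu:2 request. If your Slurm version exports SLURM_GPUS_ON_NODE (an assumption worth verifying on your cluster), you can keep the two in sync automatically:

ngpus=${SLURM_GPUS_ON_NODE:-2}   # falls back to 2 if the variable is not set
mpirun -np $n lmp_gpu -in in.txt -sf gpu -pk gpu $ngpus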

The following script illustrates how to run the LAMMPS binary built with the KOKKOS package, lmp_kokkos_cuda:

#!/bin/bash
#SBATCH --job-name=lmp-bench
#SBATCH --account=pi-[cnetid]
#SBATCH --time=01:00:00
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
#SBATCH --constraint=v100

module load lammps/21Nov2023

cd $SLURM_SUBMIT_DIR

ntasks_per_node=$SLURM_NTASKS_PER_NODE
numnodes=$SLURM_JOB_NUM_NODES
n=$(( ntasks_per_node * numnodes ))

mpirun -np $n lmp_kokkos_cuda -in in.txt -k on g 2 -sf kk -pk kokkos neigh half newton on
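Here -k on g 2 enables Kokkos with 2 GPUs per node, -sf kk appends the kk suffix to supported styles, and -pk kokkos neigh half newton on sets the neighbor-list and Newton options for the run. With the KOKKOS package it is typical to run one MPI task per GPU, which is why this script requests --ntasks-per-node=2 to match --gres=gpu:2. The same Slurm variable trick as above can keep the counts consistent (a sketch, assuming SLURM_GPUS_ON_NODE is set by your Slurm version):

ngpus=${SLURM_GPUS_ON_NODE:-2}   # falls back to 2 if the variable is not set
mpirun -np $n lmp_kokkos_cuda -in in.txt -k on g $ngpus -sf kk -pk kokkos neigh half newton on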