VASP

Description

The Vienna Ab initio Simulation Package (VASP) is a first-principles code for electronic structure calculations and molecular dynamics simulations in materials science and engineering. It is based on plane wave basis sets combined with the projector-augmented wave method or pseudopotentials. VASP is maintained by the Computational Materials Physics Group at the University of Vienna.

More information is available on the VASP website and from the VASP wiki.

Usage Conditions

Access to VASP executables is restricted to users satisfying the following criteria. The user must

  • be a member of a research group owning a VASP license,
  • be registered in Vienna as a VASP user of this research group,
  • employ VASP only for work on projects of this research group.

Only members of the groups vasp5_2 or vasp6 have access to the VASP executables. To have their user ID added to one of these groups, users can ask their consultant or submit a support request. It is recommended that users make sure they are already registered in Vienna beforehand, as this will be verified. Users whose research group has not upgraded its VASP license to version 6.x cannot become members of the vasp6 group.

Modules

VASP is an MPI-parallel application. We recommend using mpirun as the job starter for VASP. The environment module providing the mpirun command associated with a particular VASP installation must be loaded before the environment module for VASP, as shown in the example after the table below.

VASP Version | User Group | VASP Modulefile | MPI Requirement | CPU / GPU | Lise / Emmy
5.4.4 with patch 16052018 | vasp5_2 | vasp/5.4.4.p1 | impi/2019.5 | ✅ / ❌ | ✅ / ✅
6.4.1 | vasp6 | vasp/6.4.1 | impi/2021.7.1 | ✅ / ❌ | ✅ / ❌
6.4.1 | vasp6 | vasp/6.4.1 | nvhpc-hpcx/23.1 | ❌ / ✅ | ✅ / ❌
6.4.2 | vasp6 | vasp/6.4.2 | impi/2021.7.1 | ✅ / ❌ | ✅ / ❌

N.B.: VASP version 6.x has been compiled with support for OpenMP, HDF5, and Wannier90. The CPU versions additionally support Libxc, and version 6.4.2 also includes the DFTD4 van der Waals functional.
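
For example, to use the CPU build of VASP 6.4.2 listed in the table, load the matching Intel MPI module first and the VASP module afterwards (module names taken from the table above):

module load impi/2021.7.1
module load vasp/6.4.2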

Executables

Our installations of VASP comprise the regular executables (vasp_std, vasp_gam, vasp_ncl) and, optionally, community-driven modifications of VASP, as shown in the table below. They are available in the directory added to the PATH environment variable by one of the vasp environment modules (see the example below the table).

Executable | Description
vasp_std | multiple k-points (formerly vasp_cd)
vasp_gam | Gamma-point only (formerly vasp_gamma_cd)
vasp_ncl | non-collinear calculations, spin-orbit coupling (formerly vasp)
vaspsol_[std|gam|ncl] | std/gam/ncl variants including the VASPsol extension
vasptst_[std|gam|ncl] | std/gam/ncl variants including the VTST extensions
vasptstsol_[std|gam|ncl] | std/gam/ncl variants including both VTST and VASPsol

N.B.: The VTST script collection is not available from the vasp environment modules. Instead, it is provided by the vtstscripts environment module(s).
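
After loading one of the vasp environment modules as described above, you can verify which executables it provides, for example:

which vasp_std vasp_gam vasp_ncl    # prints the installation paths of the regular executables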

Example Jobscripts

Job script for VASP 5.4.4 (MPI only) using 40 tasks per node:

#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --nodes 2
#SBATCH --tasks-per-node 40

export SLURM_CPU_BIND=none

module load impi/2019.5
module load vasp/5.4.4.p1

mpirun vasp_std

Job script for VASP 5.4.4 (MPI only) using 96 tasks per node:

#!/bin/bash
#SBATCH --time 12:00:00
#SBATCH --nodes 2
#SBATCH --tasks-per-node 96

export SLURM_CPU_BIND=none

module load impi/2019.5
module load vasp/5.4.4.p1

mpirun vasp_std

In many cases, running VASP with MPI parallelization alone already yields good performance. However, certain applications can benefit from hybrid parallelization over MPI and OpenMP. A detailed discussion is found here. If you opt for hybrid parallelization, please pay attention to process pinning, as shown in the example below.

The following job script exemplifies how to run VASP 6.4.1 with OpenMP threads. Here, we use 2 OpenMP threads and 48 MPI tasks per node (the product of these two numbers should ideally equal the number of CPU cores per node; here 2 × 48 = 96).

#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=2
#SBATCH --tasks-per-node=48
#SBATCH --cpus-per-task=2
#SBATCH --partition=standard96
 
export SLURM_CPU_BIND=none
 
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
 
# Adjust the maximum stack size of OpenMP threads
export OMP_STACKSIZE=512m
 
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
 
# Binding MPI tasks
export I_MPI_PIN=yes
export I_MPI_PIN_DOMAIN=omp
export I_MPI_PIN_CELL=core
 
module load impi/2021.7.1
module load vasp/6.4.1  
 
mpirun vasp_std
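
To check that the pinning behaves as intended, Intel MPI can report the process placement at startup. As a quick test (not needed for production runs), you can raise the Intel MPI debug level before the mpirun line:

export I_MPI_DEBUG=4    # prints process pinning information at startup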

In the following example, we show a job script for the Nvidia A100 GPU nodes (Berlin). By default, VASP uses one GPU per MPI task. If you plan to use 4 GPUs per node, request 4 MPI tasks per node. In addition, set the number of OpenMP threads to 18 to speed up your calculation. This, however, also requires proper process pinning.

#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --nodes=2
#SBATCH --tasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --partition=gpu-a100
 
# Set the number of OpenMP threads as given by the SLURM parameter "cpus-per-task"
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
 
# Binding OpenMP threads
export OMP_PLACES=cores
export OMP_PROC_BIND=close
 
# Avoid hcoll as MPI collective algorithm
export OMPI_MCA_coll="^hcoll"
 
# You may need to adjust this limit, depending on the case
export OMP_STACKSIZE=512m
 
module load nvhpc-hpcx/23.1
module load vasp/6.4.1 
 
# Carefully adjust ppr:2, if you don't use 4 MPI processes per node
mpirun --bind-to core --map-by ppr:2:socket:PE=${SLURM_CPUS_PER_TASK} vasp_std
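
With --map-by ppr:2:socket:PE=18, Open MPI (provided here by the nvhpc-hpcx module) places two MPI ranks per socket, each bound to 18 cores. Assuming the A100 nodes have two CPU sockets, this matches the 4 tasks per node requested above. If you change --tasks-per-node or --cpus-per-task, adjust the ppr and PE values consistently.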