GWDG Modules (gwdg-lmod)
The GWDG Modules are currently in testing and must first be activated by sourcing the profile script:
source /sw/rev_profile/25.04/profile.sh
This step can be skipped after they become the default later in Q2 2025.
This is the default software stack on the whole Unified HPC system in Göttingen.
This stack uses Lmod as its module system.
For the purposes of setting the desired software stack (see Software Stacks), its short name is gwdg-lmod.
You can learn more about how to use the module system at Module Basics.
To see the available software, run
module avail
The modules for this stack are built for several combinations of CPU architecture and connection fabric to support the various kinds of nodes in the cluster.
The right module for the node is automatically selected during module load.
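If you want to check which variant was selected on a given node, you can inspect a loaded module (a sketch; the exact output and prefix layout depend on the installation):
module load gcc/14.2.0 openmpi/4.1.7
module show openmpi/4.1.7    # the printed installation prefix hints at the CPU/fabric variant chosen on this node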
Getting Started with gwdg-lmod
This software stack is enabled by default. Just log in to glogin-p2.hpc.gwdg.de, glogin-p3.hpc.gwdg.de, or glogin-gpu.hpc.gwdg.de and use the module avail, module spider, and module load commands.
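A typical first session could look like this (a sketch; the versions shown are only examples, check module avail for what is currently installed):
ssh <username>@glogin-p3.hpc.gwdg.de                    # log in to one of the login nodes
module avail                                            # list the modules visible at the current level of the hierarchy
module spider gromacs                                   # search the whole hierarchy for a package
module load gcc/14.2.0 openmpi/4.1.7 gromacs/2024.3     # load a toolchain and the application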
Below we have provided some example scripts that load the gromacs module and run a simple test case. You can copy the example script and adjust it to the modules you would like to use.
KISSKI and REACT users can take the Grete example and use --partition kisski or --partition react instead.
SCC users should use --partition scc-cpu (CPU only) or --partition scc-gpu (CPU+GPU) instead. If the microarchitecture on scc-cpu is important, it should be selected with --constraint cascadelake (Emmy P2) or --constraint sapphirerapids (Emmy P3). The type and number of GPUs on the scc-gpu partition can be selected with the --gpus option.
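As a sketch, the corresponding #SBATCH lines for SCC users could look like this (GPU type and count are examples; adjust them and the constraint to your needs):
# CPU-only job on Sapphire Rapids (Emmy P3) nodes via scc-cpu
#SBATCH --partition scc-cpu
#SBATCH --constraint sapphirerapids

# GPU job on scc-gpu requesting four A100 GPUs
#SBATCH --partition scc-gpu
#SBATCH --gpus A100:4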
For more information on how to specify the right Slurm partition and hardware constraints, please check out Slurm and Compute Partitions.
The Cascade Lake nodes for SCC are currently still in the medium partition, so please use --partition medium instead of --partition scc-cpu --constraint cascadelake. This will change in a future downtime, when the medium partition will be removed.
The appropriate login nodes for this phase are glogin-p2.hpc.gwdg.de.
#!/bin/bash
#SBATCH --job-name="Emmy-P2-gromacs"
#SBATCH --output "slurm-%x-%j.out"
#SBATCH --error "slurm-%x-%j.err"
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 96
#SBATCH --partition standard96
#SBATCH --time 60:00
echo "================================ BATCH SCRIPT ================================" >&2
cat ${BASH_SOURCE[0]} >&2
echo "==============================================================================" >&2
module load gcc/14.2.0
module load openmpi/4.1.7
module load gromacs/2024.3
export OMP_NUM_THREADS=1
source $(which GMXRC)
mpirun gmx_mpi mdrun -s /sw/chem/gromacs/mpinat-benchmarks/benchPEP.tpr \
-nsteps 1000 -dlb yes -v
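If you save the script as, for example, emmy-p2-gromacs.sh (the file name is only an illustration), it can be submitted and monitored like this:
sbatch emmy-p2-gromacs.sh    # submit the job; Slurm prints the assigned job ID
squeue -u $USER              # check the state of your pending and running jobs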
The appropriate login nodes for this phase are glogin-p3.hpc.gwdg.de.
#!/bin/bash
#SBATCH --job-name="Emmy-P3-gromacs"
#SBATCH --output "slurm-%x-%j.out"
#SBATCH --error "slurm-%x-%j.err"
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 96
#SBATCH --partition medium96s
#SBATCH --time 60:00
echo "================================ BATCH SCRIPT ================================" >&2
cat ${BASH_SOURCE[0]} >&2
echo "==============================================================================" >&2
module load gcc/14.2.0
module load openmpi/4.1.7
module load gromacs/2024.3
export OMP_NUM_THREADS=1
source $(which GMXRC)
mpirun gmx_mpi mdrun -s /sw/chem/gromacs/mpinat-benchmarks/benchPEP.tpr \
-nsteps 1000 -dlb yes -v
The appropriate login nodes for this phase are glogin-gpu.hpc.gwdg.de.
#!/bin/bash
#SBATCH --job-name="Grete-gromacs"
#SBATCH --output "slurm-%x-%j.out"
#SBATCH --error "slurm-%x-%j.err"
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 8
#SBATCH --gpus A100:4
#SBATCH --partition grete
#SBATCH --time 60:00
echo "================================ BATCH SCRIPT ================================" >&2
cat ${BASH_SOURCE[0]} >&2
echo "==============================================================================" >&2
module load gcc/13.2.0
module load openmpi/5.0.7
module load gromacs/2024.3
# OpenMP Threads * MPI Ranks = CPU Cores
export OMP_NUM_THREADS=8
export GMX_ENABLE_DIRECT_GPU_COMM=1
source $(which GMXRC)
mpirun gmx_mpi mdrun -s /sw/chem/gromacs/mpinat-benchmarks/benchPEP-h.tpr \
-nsteps 1000 -v -pme gpu -update gpu -bonded gpu -npme 1
The appropriate login nodes for this phase are glogin-gpu.hpc.gwdg.de.
The microarchitecture on the login node (AMD Rome) does not match the microarchitecture on the compute nodes (Intel Sapphire Rapids).
In this case you should not compile your code on the login node, but instead use an interactive Slurm job on the grete-h100 or grete-h100:shared partitions.
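Such an interactive job could be requested like this (a sketch; adjust the resources and time limit to your needs):
srun --partition grete-h100:shared --gpus H100:1 --cpus-per-task 16 --time 1:00:00 --pty bash
Once your code is compiled there, it can be run with a batch script like the following.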
#!/bin/bash
#SBATCH --job-name="Grete-H100-gromacs"
#SBATCH --output "slurm-%x-%j.out"
#SBATCH --error "slurm-%x-%j.err"
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 8
#SBATCH --gpus H100:4
#SBATCH --partition grete-h100
#SBATCH --time 60:00
echo "================================ BATCH SCRIPT ================================" >&2
cat ${BASH_SOURCE[0]} >&2
echo "==============================================================================" >&2
module load gcc/13.2.0
module load openmpi/5.0.7
module load gromacs/2024.3
# OpenMP Threads * MPI Ranks = CPU Cores
export OMP_NUM_THREADS=12
export GMX_ENABLE_DIRECT_GPU_COMM=1
source $(which GMXRC)
mpirun gmx_mpi mdrun -s /sw/chem/gromacs/mpinat-benchmarks/benchPEP-h.tpr \
-nsteps 1000 -v -pme gpu -update gpu -bonded gpu -npme 1
Hierarchical Module System
The module system has a Core - Compiler - MPI hierarchy. If you want to compile your own software, please load the appropriate compiler first and then the appropriate MPI module. This will make the modules that were compiled using this combination visible: if you run module avail you can see the additional modules at the top, above the Core modules.
In previous revisions many more modules were visible in the Core group. To see a similar selection in the current software revision, it should be enough to execute module load gcc openmpi first to load the default versions of the GCC and Open MPI modules.
If you want to figure out how to load a particular module that is not currently visible with module avail, please use the module spider command.
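For example, querying a specific package and version shows which compiler and MPI modules have to be loaded first (gromacs is used here just as an illustration):
module spider gromacs            # list all available versions of the package
module spider gromacs/2024.3     # show the modules that must be loaded before this version becomes loadable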
Supported Compiler - MPI Combinations for Release 25.04
CUDA 12 is not fully compatible with GCC 14 - this compiler is not available on Grete.
module load gcc/14.2.0
module load openmpi/4.1.7
module avail
Grete uses the older GCC 13 compiler to be compatible with CUDA.
module load gcc/13.2.0
module load openmpi/5.0.7
module avail
Do not use the generic compilers mpicc, mpicxx, mpifc, mpigcc, mpigxx, mpif77, and mpif90!
The Intel MPI compilers are mpiicx, mpiicpx, and mpiifx for C, C++, and Fortran respectively. The classic compilers mpiicc, mpiicpc, and mpiifort were removed by Intel and are no longer available. It might be useful to set export SLURM_CPU_BIND=none when using Intel MPI.
module load intel-oneapi-compilers/2025.0.0
module load intel-oneapi-mpi/2021.14.0
module avail
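With these modules loaded, a small MPI program could be built like this (a sketch; hello.c, hello.cpp, and hello.f90 are placeholder source files of your own):
mpiicx  -O2 -o hello_c   hello.c      # C
mpiicpx -O2 -o hello_cpp hello.cpp    # C++
mpiifx  -O2 -o hello_f   hello.f90    # Fortran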
OpenMPI will wrap around the modern Intel compilers icx (C), icpx (C++), and ifx (Fortran).
module load intel-oneapi-compilers/2025.0.0
module load openmpi/4.1.7
module avail
Adding Your Own Modules
See Using Your Own Module Files.
Spack
Spack is provided as the spack module to help build your own software.
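A minimal sketch of using it (the package name is only an example; where the build ends up depends on the site's Spack configuration):
module load spack
spack info fftw        # show available versions and build variants
spack install fftw     # build the package with Spack
spack find             # list the packages Spack has installed for you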
Migrating from SCC Modules (scc-lmod)
Many modules that were previously available from “Core” now require loading a compiler and MPI module first. Please use module spider to find these.
Many software packages that have a complicated tree of dependencies (including many Python packages) have been moved into Apptainer containers. Loading the appropriate module file will print a message that refers to the relevant documentation page.