OpenMPI

OpenMPI is a widely used MPI library with good performance over shared memory and all fabrics present on the clusters. There are two variants: the official variant from the OpenMPI project and the one from the Nvidia HPC SDK. The Nvidia HPC SDK variant is always built and optimized to support Nvidia GPUs, Nvidia NVLink (used between the GPUs on some nodes), and Mellanox Infiniband. The official variant is built to support Nvidia GPUs in the GWDG Modules (gwdg-lmod) and NHR Modules (nhr-lmod) software stacks on the Grete nodes.
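
Whether a given OpenMPI build is CUDA-aware can be checked after loading its module by querying ompi_info (the parameter name below is the one OpenMPI typically reports; the exact output format can vary between versions):

ompi_info --parsable --all | grep mpi_built_with_cuda_support:value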

Warning

Do not mix up OpenMPI (an implementation of MPI) with OpenMP, which is a completely separate parallelization technology; it is even possible to use both at the same time, as shown below. Their similar names are purely coincidental.
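
Since the two can be combined, a hybrid program can be built by passing the OpenMP flag through OpenMPI's compiler wrapper. A minimal sketch, assuming a GCC-based compiler underneath (where -fopenmp enables OpenMP) and a hypothetical source file hybrid.c:

mpicc -fopenmp -o hybrid hybrid.c
OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid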

In all software stacks, the official variant’s module name is openmpi. The Nvidia HPC SDK variant’s module name is nvhpc.

To load OpenMPI, follow the instructions below.

Load the official variant:

For a specific version, run

module load openmpi/VERSION

and for the default version, run

module load openmpi
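
Once the module is loaded, OpenMPI's compiler wrappers and launcher are on your PATH. A minimal sketch, where hello.c is a hypothetical MPI program (inside a batch job, srun may be the preferred launcher depending on the cluster configuration):

mpicc -o hello hello.c
mpirun -np 4 ./hello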

Some software might need extra help to find the OpenMPI installation in a non-standard location even after you have loaded the module. For example, the Python framework NEURON:

export MPI_LIB_NRN_PATH="${OPENMPI_MODULE_INSTALL_PREFIX}/lib/libmpi.so"
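
Here OPENMPI_MODULE_INSTALL_PREFIX stands for the module's installation prefix. If such a variable is not set in your shell, the prefix can be looked up with Lmod:

module show openmpi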

If the software does not have any method to specify the location of the MPI installation and cannot find libmpi.so, you can use LD_LIBRARY_PATH:

export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${OPENMPI_MODULE_INSTALL_PREFIX}/lib"

Please set this variable only when absolutely necessary to avoid breaking the linkage of other applications.
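
To verify which libmpi.so an application will actually resolve after changing LD_LIBRARY_PATH, ldd can be used (your_application is a placeholder):

ldd ./your_application | grep libmpi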

Load the Nvidia HPC SDK variant:

For a specific version, run

module load nvhpc/VERSION

and for the default version, run

module load nvhpc
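
After loading, you can confirm that the SDK's own OpenMPI build is the active one; for OpenMPI-based installations, mpirun reports its Open MPI version:

which mpicc mpirun
mpirun --version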