# Quantum ESPRESSO
> **Note:** The documentation here was written for the old `hlrn-tmod` software stack, but it still applies reasonably well to the same software packages in the other software stacks.
## Description
Quantum ESPRESSO (QE) is an integrated suite of codes for electronic structure calculations and materials modeling at the nanoscale, based on DFT, plane waves, and pseudopotentials. QE is an open initiative, in collaboration with many groups world-wide, coordinated by the Quantum ESPRESSO Foundation.
Documentation and other material can be found on the QE website.
## Prerequisites

QE is free software, released under the GNU General Public License (v2). Scientific work done using the QE code should cite the corresponding QE references.
## Modules
The environment modules shown in the table below add the executables of the QE distribution to the directory search path. To see what is installed and which version of QE is the current default at HLRN, run `module avail qe`.
QE is a hybrid MPI/OpenMP parallel application. It is recommended to use `mpirun` as the job starter for QE at HLRN. An MPI module providing the `mpirun` command must be loaded before the QE module.
| QE version | QE modulefile | QE requirements |
|---|---|---|
| 6.4.1 | qe/6.4.1 | impi/* (any version) |
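On the `hlrn-tmod` stack, a load sequence matching the table above might look as follows (a sketch; the table allows any `impi/*` version, so the choice is left to the user):

```shell
# Sketch: an Intel MPI module must be loaded first so that mpirun
# is available, then the QE module from the table above.
module load impi        # any impi/* version satisfies the requirement
module load qe/6.4.1
```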
The following versions are available via the unified GWDG Modules.
### List of Modules
| Node Type | Module Names | Requirements (Load First) |
|---|---|---|
| Grete (GPU) | quantum-espresso/6.7 quantum-espresso/7.2 quantum-espresso/7.3.1 quantum-espresso/7.4 | gcc/13.2.0 openmpi/5.0.7 |
| Emmy (CPU) | quantum-espresso/6.7 | gcc/11.5.0 openmpi/4.1.7 |
| Emmy (CPU) | quantum-espresso/7.4 | gcc/14.2.0 openmpi/4.1.7 |
| Emmy (CPU) | quantum-espresso/7.4 | intel-oneapi-compilers/2025.0.0 intel-oneapi-mpi/2021.14.0 |
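For example, on Grete GPU nodes the newest QE build from the table could be loaded like this (a sketch based on the table entries; run `module avail quantum-espresso` to confirm current versions):

```shell
# Sketch: the toolchain modules (compiler, MPI) must be loaded
# before the application module, as listed in the table above.
module load gcc/13.2.0
module load openmpi/5.0.7
module load quantum-espresso/7.4
```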
## Job Script Examples
- For Intel Sapphire Rapids compute nodes: a plain MPI run (no OpenMP threading) of a QE job using 1152 CPU cores in total, distributed over 12 nodes with 96 tasks each. Here, 3 pools (`-nk 3`) are created for k-point parallelization (384 tasks per pool), and the 3D FFT is performed using 8 task groups of 48 tasks each (`-nt 8`).
```bash
#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p medium96s
#SBATCH -N 12
#SBATCH --ntasks-per-node=96

module load gcc/14.2.0
module load openmpi/4.1.7
module load quantum-espresso/7.4

export OMP_NUM_THREADS=1   # plain MPI, no OpenMP threading

mpirun pw.x -nk 3 -nt 8 -i inputfile > outputfile
```
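The task counts quoted above can be double-checked with plain shell arithmetic (no QE or Slurm needed); the variable names below are ours, chosen for illustration:

```shell
# Reproduce the parallelization split of the example job:
nodes=12
tasks_per_node=96
total=$((nodes * tasks_per_node))     # total MPI tasks: 1152
nk=3
per_pool=$((total / nk))              # tasks per k-point pool: 384
nt=8
per_group=$((per_pool / nt))          # tasks per FFT task group: 48
echo "$total $per_pool $per_group"    # prints: 1152 384 48
```

Note that `-nk` must divide the total task count, and `-nt` must divide the pool size, otherwise `pw.x` will abort at startup.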