TURBOMOLE
TURBOMOLE is a computational chemistry program that implements various ab initio quantum chemistry methods. It was initially developed at the University of Karlsruhe.
Description
TURBOMOLE provides all standard quantum chemistry methods as well as DFT code for molecules and solids, and it can treat excited states and spectra using DFT or coupled-cluster methods. Some of the programs can be run with MPI parallelisation.
Read more about it on the developer’s homepage.
An overview of the documentation can be found here.
The vendor also provides a list of utilities.
Prerequisites
Only members of the tmol user group can use the TURBOMOLE software.
To have their user ID included in this group, users can send a message to their consultant or to NHR support.
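To check whether your account is already in the group, you can list your group memberships on a login node (a quick check using standard Linux tools; no output means you are not yet a member):
id -nG | grep -w tmol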
Modules
Check the modules listed under either the Emmy Core modules or the Grete Core modules.
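To see which TURBOMOLE versions are installed, you can also query the module system directly, for example (the exact output depends on the cluster and the software stack you have loaded):
module avail turbomole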
Usage
Load the necessary modules. TURBOMOLE has two execution modes: by default it uses the SMP version (single node), but it can also run as MPI across multiple nodes of the cluster. To run the MPI version, the environment variable PARA_ARCH needs to be set to MPI. If PARA_ARCH is unset, empty, or set to SMP, the SMP version is used.
Example for the MPI version:
export PARA_ARCH=MPI
module load turbomole/7.8.1
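For the default SMP version no special variable is needed; a minimal sketch, assuming the same module version as above:
export PARA_ARCH=SMP   # optional, SMP is the default
module load turbomole/7.8.1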
TmoleX GUI
TmoleX is a GUI for TURBOMOLE that allows users to build a workflow. It also aids in the building of the initial structure and visualization of results.
To run the TmoleX GUI, you must connect using X11 forwarding (ssh -Y …).
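For example, from your local machine (the login node name below is only a placeholder, use the host you normally connect to):
ssh -Y <login-node>
Then, on the login node, load the module and start the GUI: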
module load turbomole/tmolex
TmoleX22
Alternatively, you can use our HPC Desktops via JupyterHub.
Job Script Examples
Note that some calculations run only in a certain execution mode; please consult the manual. All execution modes are listed below.
- Serial version: the calculation runs serially on a single node.
#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 1
#SBATCH --mem-per-cpu=1.5G
module load turbomole
# geometry optimisation using the RI approximation, at most 300 cycles
jobex -ri -c 300 > result.out
- SMP version: it can only run on one node. Here we use one node and all of its CPUs:
#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 1
#SBATCH --cpus-per-task=96
# select the shared-memory (SMP) version
export PARA_ARCH=SMP
module load turbomole
# number of parallel workers = number of allocated CPUs
export PARNODES=$SLURM_CPUS_PER_TASK
jobex -ri -c 300 > result.out
- MPI version: the MPI binaries have a _mpi suffix. To keep the same binary names as in the SMP version, the PATH is extended with TURBODIR/mpirun_scripts/; this directory symlinks the standard names to the _mpi binaries. Here we run on 8 nodes with all 96 cores per node:
#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 8
#SBATCH --tasks-per-node=96
# disable Slurm CPU binding and let the MPI library handle process pinning
export SLURM_CPU_BIND=none
# select the MPI version
export PARA_ARCH=MPI
module load turbomole
# prepend the wrapper directory so the standard binary names resolve to the _mpi binaries
export PATH=$TURBODIR/mpirun_scripts/`sysname`/IMPI/bin:$TURBODIR/bin/`sysname`:$PATH
# number of MPI processes = total number of Slurm tasks
export PARNODES=${SLURM_NTASKS}
jobex -ri -c 300 > result.out
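As an optional sanity check, you can verify inside the job (or an interactive session with the same environment) that the wrapper directory takes precedence; dscf is one of the standard TURBOMOLE binaries:
which dscf        # should resolve to a path under $TURBODIR/mpirun_scripts/
echo $PARA_ARCH   # should print MPI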
- OpenMP version: here we need to set the OMP_NUM_THREADS variable. Again, it uses 8 nodes with 96 cores per node. We use the standard binaries with OpenMP; do not use the _mpi binaries. If OMP_NUM_THREADS is set, the OpenMP version is used.
#!/bin/bash
#SBATCH -t 12:00:00
#SBATCH -p standard96
#SBATCH -N 8
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=96
export SLURM_CPU_BIND=none
export PARA_ARCH=MPI
module load turbomole
# one process per node, with all 96 cores per node used as OpenMP threads
export PARNODES=${SLURM_NTASKS}
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
jobex -ri -c 300 > result.out
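Any of these examples can be saved to a file and submitted with sbatch; the file name turbomole.slurm below is only a placeholder:
sbatch turbomole.slurm
squeue -u $USER    # check the status of the submitted job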