OpenFOAM
An object-oriented Computational Fluid Dynamics (CFD) toolkit
Description
The OpenFOAM core is an extensible framework written in C++, i.e. it provides abstractions with which programmers can implement their own code for an underlying mathematical model.
Prerequisites
OpenFOAM is free, open-source software released under the GNU General Public License (GPL).
Modules
The following versions of OpenFOAM are installed on the Emmy system:
| OpenFOAM version | OpenFOAM module file | Requirements |
|---|---|---|
| v4 | openfoam/gcc.9/4 | gcc/9.2.0, openmpi/gcc.9/3.1.5 |
| v5 | openfoam/gcc.9/5 | gcc/9.2.0, openmpi/gcc.9/3.1.5 |
| v6 | openfoam/gcc.9/6 | gcc/9.2.0, openmpi/gcc.9/3.1.5 |
| v7 | openfoam/gcc.9/7 | gcc/9.2.0, openmpi/gcc.9/3.1.5 |
| v1912 | openfoam/gcc.9/v1912 | gcc/9.2.0, openmpi/gcc.9/3.1.5 |
| v2112 | openfoam/gcc.9/v2112 | gcc/9.3.0, openmpi/gcc.9/* |
The module name is openfoam. Other versions may be installed. Inspect the output of:
module avail openfoam
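For example, to use OpenFOAM v7 together with the prerequisites listed in the table above (a minimal sketch; pick the version you actually need):
module load gcc/9.2.0
module load openmpi/gcc.9/3.1.5
module load openfoam/gcc.9/7
source $WM_PROJECT_DIR/etc/bashrc   # initialize the OpenFOAM environment, as in the job scripts below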
Example Jobscripts
Two example job scripts follow. The first runs the icoFoam cavity tutorial on a single node. The second is derived from https://develop.openfoam.com/committees/hpc/-/wikis/HPC-motorbike; it utilizes two full nodes and collated file I/O. All required input/case files for it can be downloaded here: motorbike_with_parallel_slurm_script.tar.gz.
#!/bin/bash
#SBATCH --time 1:00:00
#SBATCH --nodes 1
#SBATCH --tasks-per-node 96
#SBATCH -p standard96:test
#SBATCH --job-name=test_job
#SBATCH --output=ol-%x.%j.out
#SBATCH --error=ol-%x.%j.err
# Note: the following I_MPI_* settings only apply to Intel MPI and have no
# effect with the Open MPI modules loaded below.
#export I_MPI_FALLBACK=0
#export I_MPI_DEBUG=6
#export I_MPI_FABRICS=shm:ofi
#export I_MPI_OFI_PROVIDER=psm2
#export I_MPI_PMI_LIBRARY=libpmi.so
module load gcc/9.2.0
module load openmpi/gcc.9/3.1.5
module load openfoam/gcc.9/5
# initialize OpenFOAM environment
#---------------------
source $WM_PROJECT_DIR/etc/bashrc
source ${WM_PROJECT_DIR:?}/bin/tools/RunFunctions # provides functions like runApplication
# set working directory
#---------------------
WORKDIR="$(pwd)"
# get and open example
#---------------------
cp -r $WM_PROJECT_DIR/tutorials/incompressible/icoFoam/cavity $WORKDIR/
cd cavity
# run script with several cases
#------------------------------
./Allrun
# run single case
#--------------------------
#cd cavity
#runApplication blockMesh
#icoFoam > icoFoam.log 2>&1
The second job script runs the motorbike workflow with OpenFOAM v2112 on two full nodes:
#!/bin/bash
#SBATCH --time 1:00:00
#SBATCH --nodes 2
#SBATCH --tasks-per-node 96
#SBATCH --partition standard96
#SBATCH --job-name foam_test_job
#SBATCH --output ol-%x.%j.out
#SBATCH --error ol-%x.%j.err
module load gcc/9.3.0 openmpi/gcc.9/3.1.5
module load openfoam/gcc.9/v2112
. $WM_PROJECT_DIR/etc/bashrc # initialize OpenFOAM environment
. $WM_PROJECT_DIR/bin/tools/RunFunctions # source OpenFOAM helper functions (wrappers)
tasks_per_node=${SLURM_TASKS_PER_NODE%\(*}
ntasks=$(($tasks_per_node*$SLURM_JOB_NUM_NODES))
foamDictionary -entry "numberOfSubdomains" -set "$ntasks" system/decomposeParDict # the number of geometry fractions after decomposition will be the number of tasks provided by Slurm
date "+%T"
runApplication blockMesh # create coarse master mesh (here one block)
date "+%T"
runApplication decomposePar # decompose coarse master mesh over processors
mv log.decomposePar log.decomposePar_v0
date "+%T"
runParallel snappyHexMesh -overwrite # parallel: refine mesh for each processor (slow if large np) matching surface geometry (of the motorbike)
date "+%T"
runApplication reconstructParMesh -constant # reconstruct fine master mesh 1/2 (super slow if large np)
runApplication reconstructPar -constant # reconstruct fine master mesh 2/2
date "+%T"
rm -fr processor* # delete decomposed coarse master mesh
cp -r 0.org 0 # provide start field
date "+%T"
runApplication decomposePar # parallel: decompose fine master mesh and start field over processors
date "+%T"
runParallel potentialFoam # parallel: run potentialFoam
date "+%T"
runParallel simpleFoam # parallel: run simpleFoam
date "+%T"
Important advice for running OpenFOAM on a supercomputer
By default, OpenFOAM performs a lot of metadata operations. This not only slows down your own job but may also degrade the shared parallel file system (Lustre) for all other users. In addition, your job is interrupted if it exceeds the inode limit (number of files) of the quota system (hlrnquota).
If you cannot use our local 2 TB SSDs at $LOCAL_TMPDIR (available with #SBATCH --partition={standard,large,huge}96 -C ssd), please refer to our general advice under "Optimize IO Performance".
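If you can use the local SSDs, a common pattern is to stage the case onto $LOCAL_TMPDIR, run there, and copy the results back at the end. The sketch below assumes a case directory called mycase under $WORK; note that the SSDs are local to each node, so this simple pattern only suits single-node jobs:
#SBATCH --partition standard96
#SBATCH -C ssd

cp -r $WORK/mycase $LOCAL_TMPDIR/            # stage the case onto the node-local SSD
cd $LOCAL_TMPDIR/mycase
./Allrun                                     # or your usual solver sequence
cp -r $LOCAL_TMPDIR/mycase $WORK/mycase_done # copy the results back to $WORK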
To optimize your OpenFOAM job for I/O operations on $WORK (Lustre), we strongly recommend the following steps:
Always use collated file I/O to avoid each processor writing its own files. This feature has been available since 2017 in both major OpenFOAM branches. [ESI: www.openfoam.com/releases/openfoam-v1712/parallel.php] [Foundation: www.openfoam.org/news/parallel-io]
OptimisationSwitches
{
    fileHandler collated;   // all processors share a file
}
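Besides this OptimisationSwitches entry, the collated handler can also be selected per run, either via an environment variable or on the command line of individual applications (a sketch; see the release notes linked above):
export FOAM_FILEHANDLER=collated                           # select the collated handler for subsequent OpenFOAM tools
decomposePar -fileHandler collated                         # or pass it explicitly to a single application
mpirun -np 96 simpleFoam -parallel -fileHandler collated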
Always set
runTimeModifiable false;
to reduce I/O activity. Only set it to “true” (the default) if it is strictly necessary to re-read dictionaries (controlDict, …) every time step.
If possible, do not save every time step: [www.openfoam.com/documentation/guides/latest/doc/guide-case-system-controldict.html] [www.cfd.direct/openfoam/user-guide/v6-controldict]
writeControl  timeStep;
writeInterval 100;
If possible, save only the latest n time steps (older ones are overwritten), e.g.:
purgeWrite 1000;
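These controlDict entries can also be set from within a job script using foamDictionary, in the same way the motorbike example above adjusts numberOfSubdomains (a sketch using the values from the snippets above):
foamDictionary -entry runTimeModifiable -set false system/controlDict
foamDictionary -entry writeControl -set timeStep system/controlDict
foamDictionary -entry writeInterval -set 100 system/controlDict
foamDictionary -entry purgeWrite -set 1000 system/controlDict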
Typically, only a subset of the variables is needed frequently (for post-processing), while the full set can be saved less frequently (e.g., for restart purposes). This can be achieved as follows [https://wiki.bwhpc.de/e/OpenFoam]:
writeControl  clockTime;
writeInterval 21600;      // write ALL variables every 21600 seconds = 6 h
functions
{
    writeFields
    {
        type          writeObjects;
        libs          ("libutilityFunctionObjects.so");
        objects
        (
            T
            U             // specified variables
        );
        outputControl timeStep;
        writeInterval 100;    // write specified variables every 100 steps
    }
}
If your HLRN run has accidentally generated thousands of small files, please pack them (at least the small metadata files) into a single archive afterwards:
tar -cvzf singlefile.tar.gz /folder/subfolder/location/
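To see how many files a directory actually contains, and to check the archive before removing the originals (a sketch; the path is a placeholder, as above):
find /folder/subfolder/location/ -type f | wc -l   # count the files before packing
tar -tzf singlefile.tar.gz | wc -l                 # list/count the archive contents as a cross-check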
Thanks a lot for your contribution to making HLRN a great place for all…
Compiling Your Own Code on Top of OpenFOAM
…
OpenFOAM Best Practices
…