Apptainer (formerly Singularity)
Description
Apptainer (formerly Singularity) is a free, cross-platform, open-source program that performs operating-system-level virtualization, also known as containerization. One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world.
Reproducible workflows require the ability to move applications from system to system, which containers make possible.
Using Apptainer containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.
To learn more about Apptainer itself, please consult the Apptainer documentation.
Module
Load the modulefile
$ module load apptainer
This provides access to the apptainer executable, which can be used to download, build, and run containers.
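For example, you can quickly check that the executable is available and list its subcommands (the exact version shown will depend on the installed module):
$ apptainer --version
$ apptainer help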
Building Apptainer images and running them as containers
On NHR you can build Apptainer images directly on the login nodes.
On the SCC you should use a compute node, for example by starting an interactive job:
$ srun --partition int --pty bash
$ module load apptainer
If you have written a container definition foo.def, you can create an Apptainer image foo.sif (SIF meaning Singularity Image Format) in the following way:
$ module load apptainer
$ apptainer build foo.sif foo.def
For writing container definitions, see the official documentation.
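As an illustration, a minimal foo.def could look like the following (the base image and packages are only an assumed example, not a recommendation):
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get -y update
    apt-get -y install python3

%runscript
    python3 --version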
Example Jobscripts
Here are example job scripts for running a local Apptainer image (.sif):
#!/bin/bash
#SBATCH -p medium40
#SBATCH -N 1
#SBATCH -t 60:00
module load apptainer
apptainer run --bind /local,/user,/projects,$HOME,$WORK,$TMPDIR,$PROJECT $HOME/foo.sif
#!/bin/bash
#SBATCH -p grete:shared
#SBATCH -N 1
#SBATCH -G 1
#SBATCH -t 60:00
module load cuda
module load apptainer
apptainer run --nv --bind /local,/user,/projects,$HOME,$WORK,$TMPDIR,$PROJECT $HOME/foo.sif
#!/bin/bash
#SBATCH -p medium
#SBATCH -N 1
#SBATCH -c 8
#SBATCH -t 1:00:00
module load apptainer
apptainer run --bind /local,/user,/projects,/home,/scratch $HOME/foo.sif
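Assuming one of these scripts is saved as, for example, jobscript.sh (the file name is arbitrary), it can be submitted as usual with:
sbatch jobscript.sh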
Examples
Several examples of Apptainer use cases will be shown below.
Jupyter with Apptainer
As an advanced example, you can pull and deploy an Apptainer image containing Jupyter.
Create a New Directory
Create a new folder in your $HOME directory and navigate to this directory.
Pull the Container Image
Pull a container image using public registries such as DockerHub. Here we will use a public image from quay.io, quay.io/jupyter/minimal-notebook. For a quicker option, consider building the container locally or loading it from DockerHub. To pull the image, use the following command:
apptainer pull jupyter.sif docker://quay.io/jupyter/minimal-notebook
Don’t forget to run
module load apptainer
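To check that the pull succeeded, you can, for example, inspect the metadata of the resulting image:
apptainer inspect jupyter.sif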
Submit the Job
Once the jupyter.sif image is ready, you can submit the corresponding job to interact with the container. To access a shell inside the container, run the following command:
srun --pty -p int apptainer shell jupyter.sif
This returns a shell where the software in the Apptainer image is available.
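For example, you can confirm inside this shell that Jupyter is provided by the image (assuming the image above, which ships Jupyter in its PATH):
jupyter --version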
Accessing Jupyter
In the shell, run
hostname
to get the name of the node that is running the container. Now start a Jupyter Notebook via
jupyter notebook
which will expose a server running on port 8888 (the default port) of the node. In order to access the notebook, you need to forward the port of the Jupyter server to your local workstation. Open another shell on your local workstation and run the following SSH command:
ssh -NL 8888:HOSTNAME:8888 -o ServerAliveInterval=60 -i YOUR_PRIVATE_KEY YOUR_HPC_USER@login-mdc.hpc.gwdg.de
Replace HOSTNAME with the value returned by hostname earlier, YOUR_PRIVATE_KEY with the path to the private SSH key used to access the HPC system, and YOUR_HPC_USER with your username on the HPC system. If you are not using the SCC, adjust the target domain at the end as well. While your job is running, you can now access the Jupyter server at http://localhost:8888/
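Instead of keeping an interactive shell open, the same steps can be combined into a batch script. The following is only a rough sketch; the partition, time limit, and Jupyter options are assumptions you may need to adapt to your system:
#!/bin/bash
#SBATCH -p medium
#SBATCH -N 1
#SBATCH -t 2:00:00
module load apptainer
# print the node name to the job output so you know where to forward the port
hostname
# start the Jupyter server inside the container (assuming the job is submitted
# from the directory containing jupyter.sif)
apptainer exec jupyter.sif jupyter notebook --no-browser --ip=0.0.0.0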
GPU Access Within the Container
See also GPU Usage
There are two ways to access GPUs from within a container: either by building an image with the NVIDIA libraries included, or by building an image that mounts the NVIDIA libraries from the host system. Building an image with included libraries gives access to all available NVIDIA library versions but results in larger images and longer build times. Using the local NVIDIA libraries results in faster and smaller builds but limits the available versions to those installed on the system.
Example of an Apptainer image definition based on an NVIDIA Docker image including its libraries:
Bootstrap: docker
From: nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
%post
apt-get -y update
apt-get -y install python3-pip
pip3 install --upgrade pip
pip3 install torch --index-url https://download.pytorch.org/whl/cu118
Save this as cuda11.8.0.def, then build and run the image via:
module load apptainer
apptainer build cuda11.8.0.sif cuda11.8.0.def
srun --pty -p gpu-int apptainer shell --nv cuda11.8.0.sif
nvidia-smi
Alternatively, to use the drivers available on the local system, use the following example:
Bootstrap: docker
From: ubuntu:22.04
%post
apt-get -y update
apt-get -y install python3-pip
pip3 install --upgrade pip
pip3 install torch --index-url https://download.pytorch.org/whl/cu121
%environment
PATH=${PATH}:${LSF_BINDIR}:/opt/sw/rev/23.12/linux-scientific7-haswell/gcc-11.4.0/nvhpc-23.9-xliktd/Linux_x86_64/23.9/compilers/bin/
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/sw/rev/23.12/linux-scientific7-haswell/gcc-11.4.0/nvhpc-23.9-xliktd/Linux_x86_64/23.9/compilers/lib:/opt/sw/rev/23.12/linux-scientific7-haswell/gcc-11.4.0/nvhpc-23.9-xliktd/Linux_x86_64/23.9/cuda/lib64/
CUDA_PATH=/opt/sw/rev/23.12/linux-scientific7-cascadelake/gcc-11.4.0/cuda-12.1.1-s77vqs
CUDA_ROOT=/opt/sw/rev/23.12/linux-scientific7-cascadelake/gcc-11.4.0/cuda-12.1.1-s77vqs
Save this as minimal-cuda12.1.0.def, then build and run the image via:
module load apptainer
apptainer build minimal-cuda12.1.0.sif minimal-cuda12.1.0.def
srun --pty -p gpu-int /bin/bash
module load apptainer
module load cuda/12.1
apptainer shell --nv -B /opt/sw/rev/23.12/linux-scientific7-cascadelake/gcc-11.4.0 minimal-cuda12.1.0.sif
nvidia-smi
In both cases you can also verify that CUDA works via PyTorch. Start an interactive Python session with
python3
and run
import torch
torch.cuda.is_available()
torch.cuda.device_count()
The first call should return True and the second should return a number greater than 0.
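The same check can also be run non-interactively; for example, with the first image from above:
apptainer exec --nv cuda11.8.0.sif python3 -c "import torch; print(torch.cuda.is_available())"
For the second image, add the same -B bind option used when opening the shell above.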