Apptainer (formerly Singularity)
Description
Apptainer (formerly Singularity) is a free, open-source, cross-platform program that performs operating-system-level virtualization, also known as containerization. One of the main uses of Apptainer is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world.
Reproducibility requires the ability to move applications from system to system, and containers provide exactly that.
Using Apptainer containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.
To learn more about Apptainer itself, please consult the Apptainer documentation.
Module
Load the modulefile
$ module load apptainer
This provides access to the apptainer executable, which can be used to build images and run containers.
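To check that the module works, a quick test like the following should suffice (the second command assumes the node can reach Docker Hub):
$ apptainer --version
$ apptainer exec docker://alpine cat /etc/os-release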
Building images and running containers
On the NHR cluster you can build Apptainer images directly on the login nodes.
On the SCC you should use a compute node instead, for example by starting an interactive job:
$ srun --partition int --pty bash
$ module load apptainer
If you have written a container recipe foo.def, you can build a foo.sif Apptainer image in the following way:
$ module load apptainer
$ apptainer build foo.sif foo.def
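For reference, a minimal foo.def could look like the sketch below; the base image and packages are only illustrative placeholders, not a recommendation:
Bootstrap: docker
From: ubuntu:22.04

%post
    # commands executed inside the container at build time
    apt-get -y update
    apt-get -y install python3

%runscript
    # executed by "apptainer run foo.sif"
    python3 --version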
Example Jobscripts
Here are example job scripts for running a local Apptainer image (.sif).
CPU job on the medium40 partition:
#!/bin/bash
#SBATCH -p medium40
#SBATCH -N 1
#SBATCH -t 60:00
module load apptainer
apptainer run --bind /local,/user,/projects,$HOME,$WORK,$TMPDIR,$PROJECT $HOME/foo.sif
GPU job on the grete:shared partition (note the additional cuda module and the --nv flag):
#!/bin/bash
#SBATCH -p grete:shared
#SBATCH -N 1
#SBATCH -G 1
#SBATCH -t 60:00
module load cuda
module load apptainer
apptainer run --nv --bind /local,/user,/projects,$HOME,$WORK,$TMPDIR,$PROJECT $HOME/foo.sif
CPU job on the medium partition, binding /home and /scratch:
#!/bin/bash
#SBATCH -p medium
#SBATCH -N 1
#SBATCH -c 8
#SBATCH -t 1:00:00
module load apptainer
apptainer run --bind /local,/user,/projects,/home,/scratch $HOME/foo.sif
Examples
Several examples of Apptainer use cases are shown below.
Jupyter and IPython Parallel with Apptainer
As an example, we will pull and deploy the Apptainer image containing Jupyter and IPython Parallel.
Create a New Directory
First, create a new folder in your $HOME directory, then navigate into it.
Pull the Container Image
Pull a container image from a public registry such as DockerHub or Singularity Hub, or upload a locally built image. Here we will use a public image from Singularity Hub, shub://A33a/sjupyter.
Because Singularity Hub builds images automatically, this might take some time. For a quicker option, consider building the container locally or pulling it from DockerHub. To pull the image, use the following command:
apptainer pull sjupyter.sif shub://A33a/sjupyter
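If Singularity Hub is slow or unavailable, a comparable image can be pulled from DockerHub instead; the image below (one of the community Jupyter Docker stacks) is only an illustration:
apptainer pull sjupyter.sif docker://jupyter/base-notebook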
Submit the Job
Once the sjupyter.sif image is ready, you can submit the corresponding job to interact with the container. To access a shell inside the container, run the following command:
srun --pty -p int apptainer shell sjupyter.sif
Here, we request a shell inside the container on the interactive partition.
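From that shell you would typically start Jupyter and reach it through an SSH tunnel. The sketch below is only a rough outline; <node>, <user> and <login-node> are placeholders for the compute node running the job, your user name and the cluster login host:
# inside the container shell on the compute node
Apptainer> jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
# on your local machine, forward the port through the login node
ssh -L 8888:<node>:8888 <user>@<login-node>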
GPU Access Within the Container
GPU devices are visible within the container by default; only the necessary drivers and libraries need to be installed in, or bound into, the container. You can either install Nvidia drivers inside the container or bind them from the host system. To bind the drivers automatically, use the --nv flag when running the container. For example:
apptainer shell --nv sjupyter.sif
If you want to use a specific version of the Nvidia driver, you can either install it within the container or link the existing driver version provided by the cluster. To make the drivers visible inside the container, add their location to the LD_LIBRARY_PATH environment variable. For example, linking Nvidia driver version 384.111:
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/cm/local/apps/cuda-driver/libs/384.111/lib64
When running the container, bind the corresponding path using the -B option:
apptainer shell -B /cm/local/apps jupyterCuda.sif
Libraries like CUDA and cuDNN should also be included in the LD_LIBRARY_PATH variable:
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib64
In this example, CUDA v9.0 is installed within the container at /usr/local/cuda-9.0.
If you want to use the nvidia-smi command, add its location to the PATH environment variable. On the cluster, it is located at /cm/local/apps/cuda/libs/current/bin:
export PATH=${PATH}:/cm/local/apps/cuda/libs/current/bin
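Putting these pieces together, a session could look like the following sketch; the driver and CUDA version numbers are taken from the examples above and will likely differ on the current systems:
apptainer shell -B /cm/local/apps jupyterCuda.sif
Apptainer> export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib64:/cm/local/apps/cuda-driver/libs/384.111/lib64
Apptainer> export PATH=${PATH}:/cm/local/apps/cuda/libs/current/bin
Apptainer> nvidia-smi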
Here is an example of an Apptainer definition (bootstrap) file that builds a container from an Nvidia Docker image with CUDA v9.0 and cuDNN v7 preinstalled on Ubuntu 16.04. This example installs the GPU version of TensorFlow. The container uses the Nvidia drivers currently installed on the cluster:
Bootstrap: docker
From: nvidia/cuda:9.0-cudnn7-runtime
%post
apt-get -y update
apt-get -y install python3-pip
pip3 install --upgrade pip
pip3 install tensorflow-gpu
%environment
PATH=${PATH}:${LSF_BINDIR}:/cm/local/apps/cuda/libs/current/bin
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib64:/cm/local/apps/cuda-driver/libs/current/lib64
CUDA_PATH=/usr/local/cuda-9.0
CUDA_ROOT=/usr/local/cuda-9.0
To shell into the container, use:
apptainer shell -B /cm/local/apps CONTAINERNAME.sif
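A quick way to check that TensorFlow sees the GPU from inside this container is a one-liner like the following (assuming the driver paths from the %environment section above are valid on the node you are running on):
apptainer exec -B /cm/local/apps CONTAINERNAME.sif python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"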
Distributed PyTorch on GPU
If you use PyTorch for machine learning, you may want to try running it in a container on our GPU nodes using its distributed package. The complete documentation can be found at the link (PyTorch on the HPC).
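As a first sanity check before setting up distributed training, something like the following (with pytorch.sif standing in for whatever PyTorch image you use) confirms that the container can see the GPU:
apptainer exec --nv pytorch.sif python3 -c "import torch; print(torch.cuda.is_available())"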