Containers

Advantages

Using containers to run your analysis has several advantages:

  • Increased reproducibility
  • Portability: you can easily port your pipeline to other systems
  • Archiving: the container can be stored with your data

Apptainer / Singularity containers

Running apptainer containers

For running containerized applications as a regular user, the default container type on the cluster is Apptainer. This container type was specifically created for high-performance computing in multi-user environments.

The directory below already contains a growing number of Apptainer containers. If your group wants an Apptainer container added, please contact the system admins.

TBA

Note that not all paths on the host are automatically visible inside the container; in particular, the /data directory is not mounted. Use the -B <path-on-host>:<path-in-container> option to bind-mount any paths that are missing. If you run into problems with this, the easiest way to debug is to open an interactive shell in the container with apptainer shell <container_name>.
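For example, to make the /data directory visible inside the container (the container name and paths here are only illustrative; adjust them to your own setup):

```shell
# Bind-mount /data from the host to /data inside the container
apptainer run -B /data:/data mycontainer.sif

# Several bind mounts can be combined in one comma-separated -B option
apptainer run -B /data:/data,/project:/project mycontainer.sif
```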

To enable GPU support in Apptainer, you need to use the --nv flag when running or executing your container. This flag enables the NVIDIA GPU support built into Apptainer. For example:

apptainer run --nv <container>

Building apptainer containers

As a user, you can create Apptainer containers yourself: either from a recipe (definition file), from a local Docker archive, or from an image on Docker Hub. If the Docker image is already on Docker Hub, you can create your Apptainer container in minutes.

A specific note for building Apptainer containers: the /tmp directory is small, so you may have to tell Apptainer which directory to use as temporary directory by setting the APPTAINER_TMPDIR environment variable. You can set it in your .bashrc file or specify it on the command line, as in the following example. Make sure the directory you specify exists. The example below shows how to build an fMRIPrep Apptainer container from an image on Docker Hub, using a directory in the my-scratch folder as temporary directory.

export APPTAINER_TMPDIR=~/my-scratch/tmp
mkdir -p "$APPTAINER_TMPDIR"
apptainer build fmriprep-21.0.1.sif docker://nipreps/fmriprep:21.0.1

If you received a Docker archive file and want to convert it to an Apptainer container, here is a quick example:

export APPTAINER_TMPDIR=~/my-scratch/tmp
apptainer build apptainer_container_name.sif docker-archive:name_of_docker.tar.gz

Cache management

Apptainer cache files can take up a lot of disk space. When you download or build containers, they are cached in your home folder under ~/.apptainer/cache. You can inspect and manage the cache using:

apptainer cache --help
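As a sketch of typical cache housekeeping (subcommand names as in the standard Apptainer CLI):

```shell
# List cached images and how much disk space they use
apptainer cache list

# Remove all cached images to free up space
apptainer cache clean
```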

Docker

Note that it is not possible to run Docker containers as a regular user because of security concerns. However, Red Hat (the Linux flavour we use) supports Podman, which is a good alternative for running containers without admin permissions (so-called rootless mode).

Podman

SELinux issues

Running Podman containers on the server sometimes causes problems with SELinux. In that case, adding the extra run option --security-opt label=disable may solve the problem, for example running the container as follows:

podman run --security-opt label=disable -ti <containername> --help

Alternatively, you can add the :z or :Z suffix when mapping a file or folder into the container.
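As an example of the suffix approach (the folder and container names are only placeholders):

```shell
# :Z relabels the mounted content so only this container can access it;
# use lowercase :z instead to share the label between multiple containers
podman run --rm -ti -v ~/mydata:/data:Z mycontainer
```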

User namespace

Another difficulty when running so-called rootless containers is that user and group IDs (as seen inside the container) are mapped to a user namespace on the host. This is especially relevant when the containerized application runs as a specific (non-root) user.
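A sketch of how to inspect and work with this mapping, assuming a standard rootless Podman setup: podman unshare runs a command inside your user namespace, and the --userns=keep-id option keeps your own UID/GID identical inside the container, which avoids ownership surprises on bind mounts.

```shell
# Show how host UIDs map into the container's user namespace
podman unshare cat /proc/self/uid_map

# Keep your own UID/GID inside the container, so files created
# on a bind mount remain owned by you on the host
podman run --rm --userns=keep-id -v ~/mydata:/data mycontainer
```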

Using GPU

Running Podman containers with GPU support is not well tested; if possible, use Apptainer containers instead.

Specify gpu=all to make all available GPUs visible to the container:

podman run --rm --security-opt label=disable --device=nvidia.com/gpu=all ubuntu nvidia-smi

or a TensorFlow one-liner example:

podman run --rm --security-opt label=disable --device=nvidia.com/gpu=all tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"