The Open Container Initiative (OCI) container format, which grew out of Docker, is the dominant standard for cloud-focused containerized deployments of software. Although {Singularity}'s own container format has many unique advantages, it's likely you will need to work with Docker/OCI containers at some point.
{Singularity}'s default native mode aims for maximum compatibility with Docker, within the constraints of a runtime that is well suited for use on shared systems, especially HPC environments that often employ older LTS Linux distributions.
{Singularity}'s optional OCI-mode (``--oci``) uses a true OCI low-level runtime, and encapsulates OCI images into an OCI-SIF file rather than converting them to {Singularity}'s own container format. It offers improved OCI compatibility, but has additional requirements. Please :ref:`consult the documentation <oci_mode>` about OCI-mode for more information.
Using {Singularity} you can:
- Pull, run, and build from most containers on Docker Hub, without changes.
- Pull, run, and build from containers hosted on other registries, including private registries deployed on-premises or in the cloud.
- Pull and build from OCI containers in archive formats, or cached in a local Docker daemon.
This section will highlight these workflows, and discuss the limitations and best practices to keep in mind when creating containers targeting both Docker and {Singularity}.
Docker Hub is the most common place that projects publish public container images. At some point, it's likely that you will want to run or build from containers that are hosted there.
It's easy to run a public Docker Hub container with {Singularity}. Just put ``docker://`` in front of the container repository and tag. To run the container called ``sylabsio/lolcow:latest``:
.. code-block:: console

   $ singularity run docker://sylabsio/lolcow:latest
   INFO:    Converting OCI blobs to SIF format
   INFO:    Starting build...
   Getting image source signatures
   Copying blob 16ec32c2132b done
   Copying blob 5ca731fc36c2 done
   Copying config fd0daa4d89 done
   Writing manifest to image destination
   Storing signatures
   2021/10/04 14:50:21  info unpack layer: sha256:16ec32c2132b43494832a05f2b02f7a822479f8250c173d0ab27b3de78b2f058
   2021/10/04 14:50:23  info unpack layer: sha256:5ca731fc36c28789c5ddc3216563e8bfca2ab3ea10347e07554ebba1c953242e
   INFO:    Creating SIF file...
    _____________________________
   < Mon Oct 4 14:50:30 CDT 2021 >
    -----------------------------
           \   ^__^
            \  (oo)\_______
               (__)\       )\/\
                   ||----w |
                   ||     ||
Note that {Singularity} retrieves blobs and configuration data from Docker Hub, extracts the layers that make up the Docker container, and creates a SIF file from them. This SIF file is kept in your {Singularity} :ref:`cache directory <sec:cache>`, so if you run the same Docker container again the downloads and conversion aren't required.
To obtain the Docker container as a SIF file in a specific location, which you can move, share, and keep for later, ``singularity pull`` it:
.. code-block:: console

   $ singularity pull docker://sylabsio/lolcow
   INFO:    Using cached SIF image

   $ ls -l lolcow_latest.sif
   -rwxr-xr-x 1 myuser myuser 74993664 Oct  4 14:55 lolcow_latest.sif
The first time you pull the container it will be downloaded and translated. If you have pulled the container before, it will be copied from the cache.
.. note::

   A ``singularity pull`` of a Docker container actually runs a ``singularity build`` behind the scenes, since we are translating from OCI to SIF. If you ``singularity pull`` a Docker container twice, the output files are not identical, because metadata such as dates from the conversion will vary. This differs from pulling a SIF container (e.g. from a ``library://`` URI), which always gives you an exact copy of the image.
To use {Singularity} 4's new OCI-mode, add the ``--oci`` option when you ``run`` / ``shell`` / ``exec`` or ``pull`` an OCI container:
.. code-block:: console

   $ singularity run --oci docker://sylabsio/lolcow:latest
   INFO:    Converting OCI image to OCI-SIF format
   INFO:    Squashing image to single layer
   INFO:    Writing OCI-SIF image
   INFO:    Cleaning up.
    _____________________________
   < Tue Sep 5 10:36:58 UTC 2023 >
    -----------------------------
           \   ^__^
            \  (oo)\_______
               (__)\       )\/\
                   ||----w |
                   ||     ||
Note that in this case, the log messages show that {Singularity} is converting the image to OCI-SIF format. This is closer to the original OCI image than a SIF created in native (non-OCI) mode. You can read more in the :ref:`OCI-SIF section <oci_sif>` of this documentation.
When you ``pull`` an image with ``--oci``, the OCI-SIF file is given an ``.oci.sif`` extension by default:
.. code-block:: console

   $ singularity pull --oci docker://sylabsio/lolcow
   INFO:    Using cached OCI-SIF image

   $ ls -l lolcow_latest.oci.sif
   -rwxr-xr-x. 1 myuser myuser 74728057 Sep  5 11:39 lolcow_latest.oci.sif
Docker Hub introduced limits on anonymous access to its API in November 2020. Every time you use a ``docker://`` URI to run, pull, etc. a container, {Singularity} makes requests to Docker Hub to check whether the container has been modified there. On shared systems, and when running containers in parallel, this can quickly exhaust the Docker Hub API limits.
We recommend that you ``singularity pull`` a Docker image to a local SIF, and then always run from the SIF file, rather than using ``singularity run docker://...`` repeatedly.
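For example, a minimal sketch of this pull-once, run-many workflow, using the ``sylabsio/lolcow`` image from above:

.. code-block:: console

   # One conversion, and a small number of API requests
   $ singularity pull docker://sylabsio/lolcow:latest

   # Subsequent runs use the local SIF, with no registry traffic
   $ singularity run lolcow_latest.sif
   $ singularity run lolcow_latest.sif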
Alternatively, if you have signed up for a Docker Hub account, make sure that you authenticate before using ``docker://`` container URIs.
To make use of the API limits under a Docker Hub account, or to access private containers, you'll need to authenticate to Docker Hub. There are a number of ways to do this with {Singularity}.
The ``singularity registry login`` command supports logging into Docker Hub and other OCI registries. For Docker Hub, the registry hostname is ``docker.io``, so you will need to log in as below, specifying your username:
.. code-block:: console

   $ singularity registry login --username myuser docker://docker.io
   Password / Token:
   INFO:    Token stored in /home/myuser/.singularity/docker-config.json
The Password / Token you enter must be a Docker Hub CLI access token, which you should generate in the 'Security' section of your account profile page on Docker Hub.
To check which Docker / OCI registries you are currently logged in to, use ``singularity registry list``.
To log out of a registry, so that your credentials are forgotten, use ``singularity registry logout``:
.. code-block:: console

   $ singularity registry logout docker://docker.io
   INFO:    Logout succeeded
For more information on ``singularity registry`` and its subcommands, including the ``--authfile`` flag for storing and using credentials in user-specified files, see :ref:`the documentation of the registry command <registry>` itself.
If you have the ``docker`` CLI installed on your machine, you can ``docker login`` to your account. This stores authentication information in ``~/.docker/config.json``. The process that {Singularity} uses to retrieve Docker / OCI containers will attempt to use this information to log in.
.. note::

   {Singularity} can only read credentials stored directly in ``~/.docker/config.json``. It cannot read credentials from external Docker credential helpers.
To perform a one-off interactive login, which will not store your credentials, use the ``--docker-login`` flag:
.. code-block:: console

   $ singularity pull --docker-login docker://sylabsio/private
   Enter Docker Username: myuser
   Enter Docker Password:
When calling {Singularity} in a CI/CD workflow, or other non-interactive scenario, it may be useful to specify Docker Hub login credentials using environment variables. These are often the default way of passing secrets into jobs within CI pipelines.
{Singularity} accepts a username and password / token as ``SINGULARITY_DOCKER_USERNAME`` and ``SINGULARITY_DOCKER_PASSWORD`` respectively. These environment variables will override any stored credentials.
If ``DOCKER_USERNAME`` and ``DOCKER_PASSWORD`` are set, without the ``SINGULARITY_`` prefix, they will also be used, provided the ``SINGULARITY_`` equivalents are not overriding them. This allows a single set of environment variables to be set for both ``singularity`` and ``docker`` operations.
.. code-block:: console

   $ export SINGULARITY_DOCKER_USERNAME=myuser
   $ export SINGULARITY_DOCKER_PASSWORD=mytoken
   $ singularity pull docker://sylabsio/private
You can use ``docker://`` URIs with {Singularity} to pull and run containers from OCI registries other than Docker Hub. To do this, you'll need to include the hostname or IP address of the registry in your ``docker://`` URI. Authentication with other registries is carried out in the same basic manner, but sometimes you'll need to retrieve your credentials using a specific tool, especially when working with Cloud Service Provider environments.
Below are specific examples for some common registries. Most other registries follow a similar pattern for pulling public images, and authenticating to access private images.
Quay is an OCI container registry used by a large number of projects, and hosted at https://quay.io. To pull public containers from Quay, just include the ``quay.io`` hostname in your ``docker://`` URI:
.. code-block:: console

   $ singularity pull docker://quay.io/bitnami/python:3.7
   INFO:    Converting OCI blobs to SIF format
   INFO:    Starting build...
   ...

   $ singularity run python_3.7.sif
   Python 3.7.12 (default, Sep 24 2021, 11:48:27)
   [GCC 8.3.0] on linux
   Type "help", "copyright", "credits" or "license" for more information.
   >>>
To pull containers from private repositories you will need to generate a CLI token in the Quay web interface, then use it to login with {Singularity}. Use the same methods as described for Docker Hub above:
- Run ``singularity registry login --username myuser docker://quay.io`` to store your credentials for {Singularity}.
- Use ``docker login quay.io`` if ``docker`` is on your machine.
- Use the ``--docker-login`` flag for a one-time interactive login.
- Set the ``SINGULARITY_DOCKER_USERNAME`` and ``SINGULARITY_DOCKER_PASSWORD`` environment variables.
The NVIDIA NGC catalog at https://ngc.nvidia.com contains various GPU software, packaged in containers. Many of these containers are specifically documented by NVIDIA as supported by {Singularity}, with instructions available.
Previously, an account and API token were required to pull NGC containers. However, they are now available to pull as a guest, without logging in:
.. code-block:: console

   $ singularity pull docker://nvcr.io/nvidia/pytorch:21.09-py3
   INFO:    Converting OCI blobs to SIF format
   INFO:    Starting build...
If you do need to pull containers using an NVIDIA account, e.g. if you have access to an NGC Private Registry, you will need to generate an API key in the web interface in order to authenticate.
Use one of the following authentication methods (detailed above for Docker Hub), with the username ``$oauthtoken`` and the password set to your NGC API key.
- Run ``singularity registry login --username \$oauthtoken docker://nvcr.io`` to store your credentials for {Singularity}.
- Use ``docker login nvcr.io`` if ``docker`` is on your machine.
- Use the ``--docker-login`` flag for a one-time interactive login.
- Set the ``SINGULARITY_DOCKER_USERNAME="\$oauthtoken"`` and ``SINGULARITY_DOCKER_PASSWORD`` environment variables.
See also: https://docs.nvidia.com/ngc/ngc-private-registry-user-guide/index.html
GitHub Container Registry (GHCR) is increasingly used to provide Docker containers alongside the source code of hosted projects. You can pull a public container from GHCR using a ``ghcr.io`` URI:
.. code-block:: console

   $ singularity pull docker://ghcr.io/containerd/alpine:latest
   INFO:    Converting OCI blobs to SIF format
   INFO:    Starting build...
To pull private containers from GHCR you will need to generate a personal access token in the GitHub web interface in order to authenticate. This token must have the required scopes; see the GitHub documentation for details.
Use one of the following authentication methods (detailed above for Docker Hub), with your username and personal access token:
- Run ``singularity registry login --username myuser docker://ghcr.io`` to store your credentials for {Singularity}.
- Use ``docker login ghcr.io`` if ``docker`` is on your machine.
- Use the ``--docker-login`` flag for a one-time interactive login.
- Set the ``SINGULARITY_DOCKER_USERNAME`` and ``SINGULARITY_DOCKER_PASSWORD`` environment variables.
Working with an AWS-hosted Elastic Container Registry (ECR) generally requires authentication. There are various ways to generate credentials; you should follow one of the approaches in the ECR guide in order to obtain a username and password.
.. warning::

   The ECR Docker credential helper cannot be used, as {Singularity} does not currently support external credential helpers used with Docker, only reading credentials stored directly in the ``.docker/config.json`` file.
The ``get-login-password`` approach is the most straightforward. It uses the AWS CLI to request a password, which can then be used to authenticate to an ECR private registry in the specified region. The username used in conjunction with this password is always ``AWS``.
.. code-block:: console

   $ aws ecr get-login-password --region <region>
Then log in using one of the following methods:
- Run ``singularity registry login --username AWS docker://<accountid>.dkr.ecr.<region>.amazonaws.com`` to store your credentials for {Singularity}.
- Use ``docker login --username AWS <accountid>.dkr.ecr.<region>.amazonaws.com`` if ``docker`` is on your machine.
- Use the ``--docker-login`` flag for a one-time interactive login.
- Set the ``SINGULARITY_DOCKER_USERNAME=AWS`` and ``SINGULARITY_DOCKER_PASSWORD`` environment variables.
You should now be able to pull containers from your ECR URI at ``docker://<accountid>.dkr.ecr.<region>.amazonaws.com``.
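If your AWS CLI is already configured, the two steps can be combined into a single pipeline. This is a sketch: it assumes your version of {Singularity} supports the ``--password-stdin`` flag for ``registry login``, and ``<accountid>`` / ``<region>`` are placeholders for your own values:

.. code-block:: console

   # Request a password from ECR and pass it straight to registry login
   $ aws ecr get-login-password --region <region> | \
       singularity registry login --username AWS --password-stdin \
       docker://<accountid>.dkr.ecr.<region>.amazonaws.com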
An Azure hosted Azure Container Registry (ACR) will generally hold private images and require authentication to pull from. There are several ways to authenticate to ACR, depending on the account type you use in Azure. See the ACR documentation for more information on these options.
Generally, for identities, using ``az acr login`` from the Azure CLI will add credentials to ``.docker/config.json``, which can be read by {Singularity}.
Service Principal accounts will have an explicit username and password, and you should authenticate using one of the following methods:
- Run ``singularity registry login --username myuser docker://myregistry.azurecr.io`` to store your credentials for {Singularity}.
- Use ``docker login --username myuser myregistry.azurecr.io`` if ``docker`` is on your machine.
- Use the ``--docker-login`` flag for a one-time interactive login.
- Set the ``SINGULARITY_DOCKER_USERNAME`` and ``SINGULARITY_DOCKER_PASSWORD`` environment variables.
The recent repository-scoped access token preview may be more convenient. See the preview documentation, which details how to use ``az acr token create`` to obtain a token name and password pair that can be used to authenticate with the above methods.
By default, ``singularity pull`` from a ``docker://`` URI will attempt to fetch a container that matches the architecture of your host system. If you need to retrieve a container that does not have the same architecture as your host (e.g. an ``arm64`` container on an ``amd64`` host), you can use the ``--platform`` or ``--arch`` options.
The ``--platform`` option for ``singularity pull`` accepts an OCI platform string. This has two or three parts, separated by forward slashes (``/``):
- An OS value. Only ``linux`` is supported by {Singularity}.
- A CPU architecture value, e.g. ``arm64``.
- An optional CPU variant, e.g. ``v8``.
For example, the platform string for a 32-bit v7 ARM container is ``linux/arm/v7``.
{Singularity} will normalize any platform string you supply before passing it to the OCI registry, to ensure that it matches the intended images.
To pull an Ubuntu image for a 64-bit ARM system from Docker Hub, using the ``--platform`` option:
.. code-block:: console

   $ singularity pull --platform linux/arm64 docker://ubuntu
To pull a 32-bit image for a v7 ARM CPU:
.. code-block:: console

   $ singularity pull --platform linux/arm/v7 docker://ubuntu
The ``--arch`` option accepts a CPU architecture only. For example, to pull an Ubuntu image for a 64-bit ARM system:
.. code-block:: console

   $ singularity pull --arch arm64 docker://ubuntu
If you try to run a container that does not match the host CPU architecture, it will likely fail:
.. code-block:: console

   $ singularity run ppc64le.sif
   FATAL:   While checking image: could not open image /home/dtrudg-sylabs/Git_Sylabs/singularity-userdocs/ppc64le.sif: the image's architecture (ppc64le) could not run on the host's (amd64)
However, {Singularity} is able to make use of CPU emulation with QEMU, and the Linux kernel's ``binfmt_misc`` mechanism, to run containers that do not match the host CPU.
An administrator can configure emulation support by installing distribution packages, or by using the ``multiarch/qemu-user-static`` container from Docker Hub:
.. code-block:: console

   $ sudo singularity run docker://multiarch/qemu-user-static --reset -p yes
.. note::

   Running this container with ``sudo`` will modify system configuration files and register binaries on the host.
It is now possible to run containers for other architectures:
.. code-block:: console

   # The host system is an AMD64 / x86_64 machine
   $ uname -m
   x86_64

   # A ppc64le container can be run using emulation
   $ singularity run ppc64le.sif uname -m
   ppc64le
Running a container in this manner, using emulation, will be many times slower than running on a system where the CPU architecture matches the container. Emulation is often useful for testing and development purposes, but rarely appropriate when deploying a container to an HPC system.
If you wish to use an existing Docker or OCI container as the basis for a new container, you will need to specify it as the bootstrap source in a {Singularity} definition file.
Just as you can run or pull containers from different registries using a ``docker://`` URI, you can use different headers in a definition file to instruct {Singularity} where to find the container you want to use as the starting point for your build.
.. note::

   OCI-mode doesn't yet support ``singularity build``. When you build from an OCI container with {Singularity}, you are always creating a non-OCI {Singularity} container as output.
When you wish to build from a Docker or OCI container that's hosted in a registry, such as Docker Hub, your definition file should begin with ``Bootstrap: docker``, followed by a ``From:`` line which specifies the location of the container you wish to pull.
Docker Hub is the default registry, so when building from Docker Hub the ``From:`` header only needs to specify the container repository and tag:
.. code-block:: singularity

   Bootstrap: docker
   From: ubuntu:20.04
If you ``singularity build`` a definition file with these lines, {Singularity} will fetch the ``ubuntu:20.04`` container image from Docker Hub, and extract it as the basis for your new container.
To pull from a different Docker registry, you can either specify the hostname in the ``From:`` header, or use the separate ``Registry:`` header. The following two examples are equivalent:
.. code-block:: singularity

   Bootstrap: docker
   From: quay.io/bitnami/python:3.7

.. code-block:: singularity

   Bootstrap: docker
   Registry: quay.io
   From: bitnami/python:3.7
If you are building from an image in a private registry you will need to ensure that the credentials needed to access the image are available to {Singularity}.
A build might be run as the ``root`` user, e.g. via ``sudo``, or under your own account with ``--fakeroot``.
If you are running the build as ``root``, using ``sudo``, then any stored credentials or environment variables must be available to the ``root`` user. You can make the credentials available to the ``root`` user in one of the following ways:
- Use the ``--docker-login`` flag for a one-time interactive login, i.e. run ``sudo singularity build --docker-login myimage.sif Singularity``.
- Set the ``SINGULARITY_DOCKER_USERNAME`` and ``SINGULARITY_DOCKER_PASSWORD`` environment variables, and pass them through ``sudo`` to the ``root`` build process by running ``sudo -E singularity build ...``.
- Run ``sudo singularity registry login ...`` to store your credentials for the ``root`` user on your system. This is separate from storing the credentials under your own account.
- Use ``sudo docker login`` if ``docker`` is on your machine. This is separate from storing the credentials under your own account.
- Store the credentials in a custom file on your filesystem using the ``registry login --authfile <path>`` subcommand, and then pass the same ``--authfile <path>`` flag to the ``build`` command. Note, however, that this will store the relevant credentials unencrypted in the specified file, so appropriate care must be taken concerning the location, ownership, and permissions of this file. See the :ref:`documentation of the authfile flag <sec:authfile>` for more information.
If you are running the build under your own account via the ``--fakeroot`` feature, you do not need to set credentials for the root user separately.
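As an illustration, a sketch of the environment variable approach for a ``sudo`` build; ``myimage.def`` is a hypothetical definition file that bootstraps from a private repository:

.. code-block:: console

   $ export SINGULARITY_DOCKER_USERNAME=myuser
   $ export SINGULARITY_DOCKER_PASSWORD=mytoken

   # -E passes the environment variables through to the root build process
   $ sudo -E singularity build myimage.sif myimage.def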
As well as being hosted in a registry, Docker / OCI containers might be found inside a running Docker daemon, or saved as an archive. {Singularity} can build from these locations by using specialized bootstrap agents.
If you have pulled or run a container on your machine under ``docker``, it will be cached locally by the Docker daemon. The ``docker images`` command will list the containers that are available:
.. code-block:: console

   $ docker images
   REPOSITORY        TAG      IMAGE ID       CREATED       SIZE
   sylabsio/lolcow   latest   5a15b484bc65   2 hours ago   188MB
This indicates that ``sylabsio/lolcow:latest`` has been cached locally by Docker. You can build it directly into a SIF file using a ``docker-daemon:`` URI that specifies the ``REPOSITORY:TAG`` container name:
.. code-block:: console

   $ singularity build lolcow_from_docker_cache.sif docker-daemon:sylabsio/lolcow:latest
   INFO:    Starting build...
   Getting image source signatures
   Copying blob sha256:a2022691bf950a72f9d2d84d557183cb9eee07c065a76485f1695784855c5193
    119.83 MiB / 119.83 MiB [==================================================] 6s
   Copying blob sha256:ae620432889d2553535199dbdd8ba5a264ce85fcdcd5a430974d81fc27c02b45
    15.50 KiB / 15.50 KiB [====================================================] 0s
   Copying blob sha256:c561538251751e3685c7c6e7479d488745455ad7f84e842019dcb452c7b6fecc
    14.50 KiB / 14.50 KiB [====================================================] 0s
   Copying blob sha256:f96e6b25195f1b36ad02598b5d4381e41997c93ce6170cab1b81d9c68c514db0
    5.50 KiB / 5.50 KiB [======================================================] 0s
   Copying blob sha256:7f7a065d245a6501a782bf674f4d7e9d0a62fa6bd212edbf1f17bad0d5cd0bfc
    3.00 KiB / 3.00 KiB [======================================================] 0s
   Copying blob sha256:70ca7d49f8e9c44705431e3dade0636a2156300ae646ff4f09c904c138728839
    116.56 MiB / 116.56 MiB [==================================================] 6s
   Copying config sha256:73d5b1025fbfa138f2cacf45bbf3f61f7de891559fa25b28ab365c7d9c3cbd82
    3.33 KiB / 3.33 KiB [======================================================] 0s
   Writing manifest to image destination
   Storing signatures
   INFO:    Creating SIF file...
   INFO:    Build complete: lolcow_from_docker_cache.sif
The tag name must be included in the URI. Unlike when pulling from a registry, the ``docker-daemon`` bootstrap agent will not try to pull a ``latest`` tag automatically.
.. note::

   In the example above, the build was performed without ``sudo``. This is possible only when the user is part of the ``docker`` group on the host, since {Singularity} must contact the Docker daemon through its socket. If you are not part of the ``docker`` group, you will need to use ``sudo`` for the build to complete successfully.
To build from an image cached by the Docker daemon in a definition file, use ``Bootstrap: docker-daemon`` and a ``From: <REPOSITORY>:<TAG>`` line:
.. code-block:: singularity

   Bootstrap: docker-daemon
   From: sylabsio/lolcow:latest
Docker allows containers to be exported into single-file tar archives. These cannot be run directly, but are intended to be imported into Docker to run at a later date, or in another location. {Singularity} can build from (or run) these archive files, by extracting them as part of the build process.
If an image is listed by the ``docker images`` command, we can create a tar archive file using ``docker save`` and the image ID:
.. code-block:: console

   $ sudo docker images
   REPOSITORY        TAG      IMAGE ID       CREATED       SIZE
   sylabsio/lolcow   latest   5a15b484bc65   2 hours ago   188MB

   $ docker save 5a15b484bc65 -o lolcow.tar
If we examine the contents of the tar file we can see that it contains the layers and metadata that make up a Docker container:
.. code-block:: console

   $ tar tvf lolcow.tar
   drwxr-xr-x  0 0      0           0 Aug 16 11:22 2f0514a4c044af1ff4f47a46e14b6d46143044522fcd7a9901124209d16d6171/
   -rw-r--r--  0 0      0           3 Aug 16 11:22 2f0514a4c044af1ff4f47a46e14b6d46143044522fcd7a9901124209d16d6171/VERSION
   -rw-r--r--  0 0      0         401 Aug 16 11:22 2f0514a4c044af1ff4f47a46e14b6d46143044522fcd7a9901124209d16d6171/json
   -rw-r--r--  0 0      0    75156480 Aug 16 11:22 2f0514a4c044af1ff4f47a46e14b6d46143044522fcd7a9901124209d16d6171/layer.tar
   -rw-r--r--  0 0      0        1499 Aug 16 11:22 5a15b484bc657d2b418f2c20628c29945ec19f1a0c019d004eaf0ca1db9f952b.json
   drwxr-xr-x  0 0      0           0 Aug 16 11:22 af7e389ea6636873dbc5adc17826e8401d96d3d384135b2f9fe990865af202ab/
   -rw-r--r--  0 0      0           3 Aug 16 11:22 af7e389ea6636873dbc5adc17826e8401d96d3d384135b2f9fe990865af202ab/VERSION
   -rw-r--r--  0 0      0         946 Aug 16 11:22 af7e389ea6636873dbc5adc17826e8401d96d3d384135b2f9fe990865af202ab/json
   -rw-r--r--  0 0      0   118356480 Aug 16 11:22 af7e389ea6636873dbc5adc17826e8401d96d3d384135b2f9fe990865af202ab/layer.tar
   -rw-r--r--  0 0      0         266 Dec 31  1969 manifest.json
We can convert this tar file into a {Singularity} container using the ``docker-archive`` bootstrap agent. Because the agent accesses a file, rather than an object hosted by a service, it uses ``:<filename>``, not ``://<location>``. To build a tar archive directly to a SIF container:
.. code-block:: console

   $ singularity build lolcow_tar.sif docker-archive:lolcow.tar
   INFO:    Starting build...
   Getting image source signatures
   Copying blob sha256:2f0514a4c044af1ff4f47a46e14b6d46143044522fcd7a9901124209d16d6171
    119.83 MiB / 119.83 MiB [==================================================] 6s
   Copying blob sha256:af7e389ea6636873dbc5adc17826e8401d96d3d384135b2f9fe990865af202ab
    15.50 KiB / 15.50 KiB [====================================================] 0s
   Copying config sha256:5a15b484bc657d2b418f2c20628c29945ec19f1a0c019d004eaf0ca1db9f952b
    3.33 KiB / 3.33 KiB [======================================================] 0s
   Writing manifest to image destination
   Storing signatures
   INFO:    Creating SIF file...
   INFO:    Build complete: lolcow_tar.sif
.. note::

   The ``docker-archive`` bootstrap agent can also handle gzipped Docker archives (``.tar.gz`` or ``.tgz`` files).
To build an image using a definition file, starting from a container in a Docker archive, use ``Bootstrap: docker-archive`` and specify the filename in the ``From:`` line:
.. code-block:: singularity

   Bootstrap: docker-archive
   From: lolcow.tar
Though Docker / OCI container compatibility is a goal of {Singularity}, there are some differences and limitations due to the way {Singularity} was designed to work well on shared systems and HPC clusters, particularly for the native (non-OCI) mode.
If you are having difficulty running a specific Docker container without ``--oci``, check through the list of differences below. There are workarounds for many of the issues that you are most likely to face. You may also wish to use OCI-mode for improved compatibility.
{Singularity}'s container image format (SIF) is generally read-only. This permits containers to be run in parallel from a shared location on a network filesystem, supports built-in signing and verification, and offers encryption. A container's filesystem is mounted directly from the SIF, as SquashFS, so it cannot be written to by default.
When a container is run using Docker, its layers are extracted, and the resulting container filesystem can be written to and modified by default. If a Docker container expects to write files, you will need to use one of the following methods to allow it to run under {Singularity} (sketches of the first two methods follow the list).
- A directory from the host can be passed into the container with the ``--bind`` or ``--mount`` flags. It needs to be mounted inside the container at the location where files will be written.
- The ``--writable-tmpfs`` flag can be used to allow files to be created in a special temporary overlay. Any changes are lost when the container exits. The SIF file is never modified.
- The container can be converted to a sandbox directory and executed with the ``--writable`` flag, which allows modification of the sandbox content.
- A writable overlay partition can be added to the SIF file, and the container executed with the ``--writable`` flag. Any changes made are kept permanently in the overlay partition.
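For example, hedged sketches of the first two methods; ``mycontainer.sif`` is a placeholder image, and ``/data/output`` stands in for wherever your container actually writes:

.. code-block:: console

   # Bind a writable host directory over the location the container writes to
   $ singularity run --bind /path/on/host:/data/output mycontainer.sif

   # Or allow writes to a temporary overlay that is discarded at exit
   $ singularity run --writable-tmpfs mycontainer.sif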
Of these methods, only ``--writable-tmpfs`` is always safe to run in parallel. Each time the container is executed, a separate temporary overlay is used and then discarded.
Binding a directory into a container, or running a writable sandbox, may or may not be safe in parallel, depending on the program executed. The program must use, and the filesystem must support, some type of locking so that parallel runs do not interfere with one another.
A writable overlay file in a SIF partition cannot be used in parallel. {Singularity} will refuse to run concurrently using the same SIF writable overlay partition.
.. note::

   Using ``--writable-tmpfs`` as a non-root user requires that {Singularity} was installed in setuid mode, or that the system has a kernel version >= 5.11 in non-setuid mode.

   Using a writable overlay as a non-root user generally requires that {Singularity} was installed in setuid mode.
The ``--writable-tmpfs`` size is controlled by ``sessiondir max size`` in ``singularity.conf``. This defaults to 64MiB, and may need to be increased if your workflows create larger temporary files.
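For example, an administrator could raise the limit with a line like the following in ``singularity.conf`` (a sketch; the 256MiB value is illustrative):

.. code-block:: none

   sessiondir max size = 256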
The ``Dockerfile`` used to build a Docker container may contain a ``USER`` statement. This tells the container runtime that it should run the container under the specified user account.
Because {Singularity} was designed to provide easy and safe access to data on the host system, in a manner that supports older Linux distributions, it does not permit changing the user account the container is run as.
In the default native mode, any ``USER`` statement in a ``Dockerfile`` is ignored by {Singularity} when the container is run. In practice, this often does not affect the execution of the software in the container. Software that is written in a way that requires execution under a specific user account will generally require modification for use with {Singularity}.
.. note::

   The new OCI-mode (``--oci``) supports running containers with the ``USER`` requested in a ``Dockerfile``. It uses newer kernel features to achieve this. You may wish to use OCI-mode if your system supports it.
{Singularity}'s ``--fakeroot`` mode will start a container as a fake ``root`` user, mapped to the user's real account outside of the container. Inside the container it is possible to change to another user account, which is mapped to a configured range of sub-uids / gids belonging to the original user. It may be possible to execute software expecting a fixed user account manually inside a ``--fakeroot`` shell, if your administrator has configured the system for ``--fakeroot``.
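As an illustration, a sketch of switching accounts inside a ``--fakeroot`` shell. This assumes your administrator has configured sub-uid / gid ranges, and that ``myapp`` is a hypothetical account defined inside the placeholder ``mycontainer.sif``:

.. code-block:: console

   $ singularity shell --fakeroot mycontainer.sif
   Singularity> whoami
   root
   Singularity> su myapp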
A default installation of {Singularity} will mount the user's home directory, the ``/tmp`` directory, and the current working directory into each container that is run. Administrators may also configure, e.g., HPC project directories to bind mount automatically. Docker does not mount host directories into the container by default.
The home directory mount is the most likely to cause problems when running Docker containers. Various software will look for packages, plugins, and configuration files in ``$HOME``. If you have, for example, installed packages for Python into your home directory (``pip install --user``), then a Python container may find and attempt to use them. This can cause conflicts and unexpected behavior.
If you experience issues, use the ``--contain`` option to stop {Singularity} automatically binding directories into the container. You may need to use ``--bind`` or ``--mount`` to then add back, e.g., an HPC project directory that you need access to.
.. code-block:: console

   # Without --contain, python in the container finds packages
   # in your $HOME directory.
   $ singularity exec docker://python:3.9 pip list
   Package    Version
   ---------- -------
   pip        21.2.4
   rstcheck   3.3.1
   setuptools 57.5.0
   wheel      0.37.0

   # With --contain, python in the container only finds packages
   # installed in the container.
   $ singularity exec --contain docker://python:3.9 pip list
   Package    Version
   ---------- -------
   pip        21.2.4
   setuptools 57.5.0
   wheel      0.37.0
{Singularity} propagates most environment variables set on the host into the container by default. Docker does not propagate any host environment variables into the container. Environment variables may change the behavior of software.
To disable automatic propagation of environment variables, the ``--cleanenv / -e`` flag can be specified. When ``--cleanenv`` is used, only variables on the host that are prefixed with ``SINGULARITYENV_`` are set in the container:
.. code-block:: console

   # Set a host variable
   $ export HOST_VAR=ABC

   # Set a singularity container environment variable
   $ export SINGULARITYENV_FORCE_VAR=123

   $ singularity run library://alpine env | grep VAR
   FORCE_VAR=123
   HOST_VAR=ABC

   $ singularity run --cleanenv library://alpine env | grep VAR
   FORCE_VAR=123
Any environment variables set via an ``ENV`` line in a ``Dockerfile`` will be available when the container is run with {Singularity}. You can override them with ``SINGULARITYENV_`` variables, or the ``--env / --env-file`` flags, but they will not be overridden by host environment variables.
For example, the ``docker://openjdk:latest`` container sets ``JAVA_HOME``:
.. code-block:: console

   # Set a host JAVA_HOME
   $ export JAVA_HOME=/test

   # Check JAVA_HOME in the docker container.
   # This value comes from ENV in the Dockerfile.
   $ singularity run docker://openjdk:latest echo \$JAVA_HOME
   /usr/java/openjdk-17

   # Override JAVA_HOME in the container
   $ export SINGULARITYENV_JAVA_HOME=/test
   $ singularity run docker://openjdk:latest echo \$JAVA_HOME
   /test
The default behavior of {Singularity} in native mode differs from Docker/OCI handling of environment variables: {Singularity} uses a shell interpreter to process the environment on container startup, in a manner that evaluates environment variables. To avoid this extra evaluation, you can:
- Follow the instructions in the :ref:`escaping-environment` section to explicitly escape environment variables.
- Use the ``--no-eval`` flag, or ``--compat`` (which enables ``--no-eval``).
.. note::

   When running a container in OCI-mode (``--oci``), {Singularity} follows Docker/OCI behavior by default. You do not need to enable the ``--no-eval`` or ``--compat`` options.
``--no-eval`` prevents {Singularity} from evaluating environment variables on container startup, so that they take the same value as with a Docker/OCI runtime:
.. code-block:: console

   # Set an environment variable that would run `date` if evaluated
   $ export SINGULARITYENV_MYVAR='$(date)'

   # Default behavior
   # MYVAR was evaluated in the container, and is set to the output of `date`
   $ singularity run ~/ubuntu_latest.sif env | grep MYVAR
   MYVAR=Tue Apr 26 14:37:07 CDT 2022

   # --no-eval / --compat behavior
   # MYVAR was not evaluated and is a literal `$(date)`
   $ singularity run --no-eval ~/ubuntu_latest.sif env | grep MYVAR
   MYVAR=$(date)
Because {Singularity} favors an integration-over-isolation approach, it does not, by default, use all the methods through which a container can be isolated from the host system. This makes it much easier to run a {Singularity} container like any other program, while its security model ensures safety. You can access the host's network, GPUs, and other devices directly. Processes in the container are not numbered separately from host processes, hostnames are not changed, etc.
Most containers are not impacted by the differences in isolation. If you require more isolation than {Singularity} provides by default, you can enable some of the extra namespaces that Docker uses via flags (a minimal example follows the list):
- ``--ipc / -i`` creates a separate IPC (inter-process communication) namespace, for SystemV IPC objects and POSIX message queues.
- ``--net / -n`` creates a new network namespace, abstracting the container networking from the host.
- ``--userns / -u`` runs the container unprivileged, inside a user namespace, avoiding {Singularity}'s setuid setup code. By default, SIF container images will be extracted to disk, as mounting the container filesystem from the SIF requires privilege. An experimental ``--sif-fuse`` flag can be used to perform a mount with ``squashfuse`` instead, if it is available on your system.
- ``--uts`` creates a new UTS namespace, which allows a different hostname and/or NIS domain for the container.
To limit the presentation of devices from the host into the container, use the ``--contain`` flag. As well as preventing automatic binds of host directories into the container, ``--contain`` sets up a minimal ``/dev`` directory, rather than binding in the entire host ``/dev`` tree.
.. note::

   When using the ``--nv`` or ``--rocm`` flags, GPU devices are present in the container even when ``--contain`` is used.
When a {Singularity} container is run using the ``--pid / -p`` option, or started as an instance (which implies ``--pid``), a shim init process is executed, and this shim then runs the container payload itself.
The shim process helps ensure that signals are propagated correctly from the terminal, batch schedulers, etc., when containers are not designed for interactive use. Because Docker does not provide an init process by default, some containers have been designed to run their own init process, which cannot operate under the control of {Singularity}'s shim.
For example, a container using the ``tini`` init process will produce warnings when started as an instance, or when run with ``--pid``. To work around this, use the ``--no-init`` flag to disable the shim:
.. code-block:: console

   $ singularity run --pid tini_example.sif
   [WARN  tini (2690)] Tini is not running as PID 1 .
           Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
           To fix the problem, run Tini as PID 1.

   $ singularity run --pid --no-init tini_example.sif
   ...  # NO WARNINGS
If Docker-like behavior is important, {Singularity} can be started with the ``--compat`` flag. This flag is a convenient short-hand alternative to using all of:

- ``--containall``
- ``--no-init``
- ``--no-umask``
- ``--writable-tmpfs``
- ``--no-eval``
A container run with ``--compat`` has:
- A writable root filesystem, using a temporary overlay where changes are discarded at container exit.
- No automatic bind mounts of ``$HOME`` or other directories from the host into the container.
- Empty temporary ``$HOME`` and ``/tmp`` directories, the contents of which will be discarded at container exit.
- A minimal ``/dev`` tree, which does not expose host devices inside the container (except GPUs, when used with ``--nv`` or ``--rocm``).
- A clean environment, not including environment variables set on the host.
- Its own PID and IPC namespaces.
- No shim init process.
- Argument and environment variable handling matching Docker / OCI runtimes, with respect to evaluation and escaping.
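For example, to run the lolcow image pulled earlier with these Docker-like defaults:

.. code-block:: console

   $ singularity run --compat lolcow_latest.sif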
These options will allow most, but not all, Docker / OCI containers to execute correctly under {Singularity}. The user namespace and network namespace are not used, as these negate benefits of SIF and direct access to high performance cluster networks.
Note that behavior in OCI-mode (``--oci``) follows that of ``--compat`` by default. To emulate traditional {Singularity} behavior in OCI-mode, use the ``--no-compat`` option.
When a container is run using ``docker``, its default behavior depends on the ``CMD`` and/or ``ENTRYPOINT`` set in the ``Dockerfile`` that was used to build it, along with any arguments on the command line. The ``CMD`` and ``ENTRYPOINT`` can also be overridden by flags.
A {Singularity} container has the concept of a runscript: a single shell script defining what happens when you ``singularity run`` the container. Because there is no internal concept of ``CMD`` and ``ENTRYPOINT``, {Singularity} must create a runscript from the ``CMD`` and ``ENTRYPOINT`` when converting a Docker container. The behavior of this script mirrors Docker as closely as possible.
If the Docker container has only an ``ENTRYPOINT``, that ``ENTRYPOINT`` is run, with any arguments appended:
# ENTRYPOINT="date" # Runs 'date' $ singularity run mycontainer.sif Wed 06 Oct 2021 02:42:54 PM CDT # Runs 'date --utc` $ singularity run mycontainer.sif --utc Wed 06 Oct 2021 07:44:27 PM UTC
If the Docker container has only a ``CMD``, the ``CMD`` is run, or is replaced with any arguments:
# CMD="date" # Runs 'date' $ singularity run mycontainer.sif Wed 06 Oct 2021 02:45:39 PM CDT # Runs 'echo hello' $ singularity run mycontainer.sif echo hello hello
If the Docker container has both a ``CMD`` and an ``ENTRYPOINT``, then the ``ENTRYPOINT`` is run, either with ``CMD`` as default arguments, or with any user-supplied arguments replacing them:
# ENTRYPOINT="date" # CMD="--utc" # Runs 'date --utc' $ singularity run mycontainer.sif Wed 06 Oct 2021 07:48:43 PM UTC # Runs 'date -R' $ singularity run mycontainer.sif -R Wed, 06 Oct 2021 14:49:07 -0500
There is no flag to override an ``ENTRYPOINT`` set for a Docker container. Instead, use ``singularity exec`` to run an arbitrary program inside a container.
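For example, given the hypothetical container above with ``ENTRYPOINT="date"``, ``exec`` bypasses the entrypoint entirely:

.. code-block:: console

   # Runs 'echo hello' directly, ignoring the ENTRYPOINT
   $ singularity exec mycontainer.sif echo hello
   hello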
Because {Singularity} runscripts are evaluated shell scripts, arguments can behave slightly differently than in Docker/OCI runtimes if they contain shell code that may be evaluated.
If you are using a container that was directly built or run from a Docker/OCI source, with {Singularity} 3.10 or later, the ``--no-eval`` flag will prevent this extra evaluation, so that arguments are handled in a compatible manner:
.. code-block:: console

   # docker/OCI behavior
   $ docker run -it --rm alpine echo "\$HOSTNAME"
   $HOSTNAME

   # Singularity default
   $ singularity run docker://alpine echo "\$HOSTNAME"
   p700

   # Singularity with --no-eval
   $ singularity run --no-eval docker://alpine echo "\$HOSTNAME"
   $HOSTNAME
.. note::

   ``--no-eval`` will not change argument behavior for containers built with {Singularity} 3.9 or earlier, as the handling is implemented in the runscript that is built into the container.

   You can check the version of {Singularity} used to build a container with ``singularity inspect mycontainer.sif``.
To avoid evaluation without ``--no-eval``, and when using containers built with {Singularity} 3.9 or earlier, you will need to add an extra level of shell escaping to arguments on the command line:
.. code-block:: console

   $ docker run -it --rm alpine echo "\$HOSTNAME"
   $HOSTNAME

   $ singularity run docker://alpine echo "\$HOSTNAME"
   p700

   $ singularity run docker://alpine echo "\\\$HOSTNAME"
   $HOSTNAME
If you are running a binary inside a ``docker://`` container directly, using the ``exec`` command, the argument handling mirrors Docker/OCI runtimes, as there is no evaluated runscript.
As detailed previously, {Singularity} can make use of most Docker and OCI images without issues, or via simple workarounds. In general, however, there are some best practices that should be applied when creating Docker / OCI containers that will also be run using {Singularity}.
Don't require execution by a specific user
   Avoid using the ``USER`` instruction in your ``Dockerfile``, as it is ignored by {Singularity}. Install and configure software inside the container so that it can be run by any user.

Don't install software under /root or in another user's home directory
   Because a Docker container builds and runs as the ``root`` user by default, it's tempting to install software into root's home directory (``/root``). Permissions on ``/root`` are usually set so that it is inaccessible to non-root users, so when the container is run as another user the software may be inaccessible.

   Software inside another user's home directory, e.g. ``/home/myapp``, may be obscured by {Singularity}'s automatic mounts onto ``/home``.

   Install software into system-wide locations in the container, such as under ``/usr`` or ``/opt``, to avoid these issues.

Support a read-only filesystem
   Because of the immutable nature of the SIF format, a container run with {Singularity} is read-only by default.

   Try to ensure your container will run with a read-only filesystem. If this is not possible, document exactly where the container needs to write, so that a user can bind in a writable location, or use ``--writable-tmpfs``, as appropriate.

   You can test read-only execution with Docker using ``docker run --read-only --tmpfs /run --tmpfs /tmp sylabsio/lolcow``.

Be careful writing to /tmp
   {Singularity} mounts the host ``/tmp`` into the container by default. This means you must be careful when writing sensitive information to ``/tmp``, and should ensure your container cleans up any files it writes there.

Consider library caches / ldconfig
   If your ``Dockerfile`` adds libraries and / or manipulates the ld search path in the container (``ld.so.conf`` / ``ld.so.conf.d``), you should ensure the library cache is updated during the build. Because {Singularity} runs containers read-only by default, the cache and any missing library symlinks may not be able to be updated or created at execution time.

   Run ``ldconfig`` toward the end of your ``Dockerfile`` to ensure symbolic links and the ``ld.so.cache`` are up to date.
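For example, a sketch of the tail of a ``Dockerfile`` that registers an illustrative library directory and refreshes the cache; ``/opt/myapp/lib`` is a hypothetical location:

.. code-block:: docker

   # /opt/myapp/lib is a hypothetical library location added by this image
   RUN echo "/opt/myapp/lib" > /etc/ld.so.conf.d/myapp.conf && ldconfig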
If you experience problems pulling containers from a private registry, check your credentials carefully. You can ``singularity pull`` with the ``--docker-login`` flag to perform an interactive login. This may be useful if you are unsure whether you have stored credentials properly via ``singularity registry login`` or ``docker login``.
OCI registries expect different values for their username and password fields. Some require a token to be generated and used instead of your account password. Some take a generic username, and rely only on the token to identify you. Consult the documentation for your registry carefully, and look for instructions that detail how to log in via ``docker login`` without external helper programs, if possible.
If a Docker container fails to start, the most common cause is that it needs to write files, while {Singularity} runs read-only by default.
Try running with the ``--writable-tmpfs`` option, or the ``--compat`` flag (which enables additional compatibility fixes).
You can also look for error messages mentioning 'permission denied' or 'read-only filesystem'. Note where the program is attempting to write, and use ``--bind`` or ``--mount`` to bind a directory from the host system into that location. This will allow the container to write the needed files, which will appear in the directory you bound in.
If a Docker container runs but exhibits unexpected behavior, the most likely cause is the different level of isolation that {Singularity} provides vs Docker.
Try running the container with the ``--contain`` option, or the ``--compat`` option (which is more strict). This disables the automatic mount of your home directory, which is a common source of issues, where software in the container loads configuration or packages that may be present there.
The community Slack channels and mailing list are excellent places to ask for help with running a specific Docker container. Other users may have already had success running the same container or software. Please don't report issues with specific Docker containers on GitHub, unless you believe they are due to a bug in {Singularity}.
An alternative to running Docker containers with {Singularity} is to re-write the ``Dockerfile`` as a definition file, and build a native SIF image.
The table below gives a quick reference comparing Dockerfile and {Singularity} definition files. For more detail please see :ref:`definition-files`.
.. list-table::
   :header-rows: 1

   * - {Singularity} Definition File Section
     - Description
     - Dockerfile Section
     - Description
   * - ``Bootstrap``
     - Defines the source of the base image to build your container from. Many bootstrap agents are supported, e.g. ``library``, ``docker``, ``http``, ``shub``, ``yum``, ``debootstrap``.
     - \-
     - Can only bootstrap from Docker Hub.
   * - ``From:``
     - Specifies the base image from which to build the container.
     - ``FROM``
     - Creates a layer from the specified Docker image.
   * - ``%setup``
     - Run setup commands outside of the container (on the host system) after the base image bootstrap.
     - \-
     - Not supported.
   * - ``%files``
     - Copy files from your host to the container, or between build stages.
     - ``COPY``
     - Copy files from your host to the container, or between build stages.
   * - ``%environment``
     - Declare and set container environment variables.
     - ``ENV``
     - Declare and set a container environment variable.
   * - ``%help``
     - Provide a help section for your container image.
     - \-
     - Not supported.
   * - ``%post``
     - Commands that will be run at build-time.
     - ``RUN``
     - Commands that will be run at build-time.
   * - ``%runscript``
     - Commands that will be run when you ``singularity run`` the container image.
     - ``ENTRYPOINT`` / ``CMD``
     - Commands / arguments that will run in the container image.
   * - ``%startscript``
     - Commands that will be run when an instance is started.
     - \-
     - Not applicable.
   * - ``%test``
     - Commands that run at the very end of the build process to validate the container using a method of your choice (e.g. to verify the distribution or software versions installed inside the container).
     - ``HEALTHCHECK``
     - Commands that verify the health status of the container.
   * - ``%apps``
     - Allows you to install internal modules based on the concept of SCIF-apps.
     - \-
     - Not supported.
   * - ``%labels``
     - Section to add and define metadata describing your container.
     - ``LABEL``
     - Declare container metadata as a key-value pair.