SingularityCE 3.9.0 Release Candidate 1
This is the first release candidate for the upcoming SingularityCE 3.9.0. We'd be grateful for all testing, bug reports, and comments as we look forward to a stable 3.9.0 release.
Various behavior changes and new features have been introduced. Please carefully review the release notes below, and refer to the 'master branch (unreleased)' documentation at https://sylabs.io/docs/
Changed defaults / behaviours
- Building SingularityCE 3.9.0 requires Go >= 1.16. We now aim to support the two most recent stable versions of Go. This corresponds to the Go Release Maintenance Policy and Security Policy, ensuring critical bug fixes and security patches are available for all supported language versions.
- LABELs from Docker/OCI images are now inherited. This fixes a longstanding regression from Singularity 2.x. Note that you will now need to use `--force` in a build to override a label that already exists in the source Docker/OCI container.
- The source paths for `%files` lines in a definition file are no longer interpreted by a shell. This means that environment variable substitution is not performed. Previously, environment variables were substituted for source paths, but not destination paths, leading to unexpected copy behaviour. Globbing for source files now follows the Go `filepath.Match` pattern syntax (see the example after this list).
- Removed the `--nonet` flag, which was intended to disable networking for in-VM execution, but has no effect.
- The `--nohttps` flag has been deprecated in favour of `--no-https`. The old flag is still accepted, but will display a deprecation warning.
- Paths for `cryptsetup`, `go`, `ldconfig`, `mksquashfs`, `nvidia-container-cli`, and `unsquashfs` are now found at build time by `mconfig` and written into `singularity.conf`. The path to these executables can be overridden by changing the value in `singularity.conf`. If the path is not set in `singularity.conf` then the executable will be found by searching `$PATH` (see the example after this list).
- When calling `ldconfig` to find GPU libraries, singularity will not fall back to `/sbin/ldconfig` if the `ldconfig` on `$PATH` errors. If installing in a Guix/Nix environment on top of a standard host distribution you must set `ldconfig path = /sbin/ldconfig` to use the host distribution `ldconfig` to find GPU libraries.
- `--nv` will not call `nvidia-container-cli` to find host libraries, unless the new experimental GPU setup flow that employs `nvidia-container-cli` for all GPU related operations is enabled (see below).
- If a container is run with `--nvccli` and `--contain`, only GPU devices specified via the `NVIDIA_VISIBLE_DEVICES` environment variable will be exposed within the container. Use `NVIDIA_VISIBLE_DEVICES=all` to access all GPUs inside a container run with `--nvccli` (see the example after this list).
- Example log-plugin rewritten as a CLI callback that can log all commands executed, instead of only container execution, and has access to command arguments.
- An invalid remote build source (bootstrap) will be identified before attempting to submit the build.
- The bundled reference CNI plugins are updated to v1.0.1. The `flannel` plugin is no longer included, as it is maintained as a separate plugin at https://github.com/flannel-io/cni-plugin. If you use the flannel CNI plugin you should install it from this repository.
- Instances are no longer created with an IPC namespace by default. An IPC namespace can be specified with the `-i|--ipc` flag.
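For illustration, a minimal sketch of the new `%files` globbing behaviour; the definition file and paths here are hypothetical:

```sh
# Hypothetical files to stage into the container.
mkdir -p data && touch data/sample-1.txt data/sample-2.txt

# Source globs now use Go filepath.Match syntax, and environment
# variables in source paths are no longer substituted by a shell.
cat > glob.def <<'EOF'
Bootstrap: docker
From: alpine:3.14

%files
    data/sample-[0-9].txt /opt/data/
EOF

singularity build --fakeroot glob.sif glob.def
```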
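And a sketch of inspecting and overriding the discovered tool paths in `singularity.conf`; the file location shown is typical for a source install, so check your own installation:

```sh
# Default configuration location for a source install; packaged
# installs may place singularity.conf elsewhere.
CONF=/usr/local/etc/singularity/singularity.conf

# Inspect the executable paths that mconfig wrote at build time.
grep ' path =' "$CONF"

# On a Guix/Nix environment layered over a standard distribution, point
# ldconfig at the host distribution's copy so GPU libraries are found:
#   ldconfig path = /sbin/ldconfig
```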
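Finally, a sketch of GPU visibility under `--nvccli` with `--contain`; the image name is hypothetical:

```sh
# With --contain, only GPUs named in NVIDIA_VISIBLE_DEVICES are exposed.
# Expose every GPU:
NVIDIA_VISIBLE_DEVICES=all singularity run --nvccli --contain mygpu.sif

# Expose only the first GPU:
NVIDIA_VISIBLE_DEVICES=0 singularity run --nvccli --contain mygpu.sif
```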
New features / functionalities
- `--writable-tmpfs` can be used with `singularity build` to run the `%test` section of the build with an ephemeral tmpfs overlay, permitting tests that write to the container filesystem (see the example after this list).
- The `--compat` flag for actions is a new short-hand to enable a number of options that increase OCI/Docker compatibility. It infers `--containall, --no-init, --no-umask, --writable-tmpfs`. It does not use user, uts, or network namespaces, as these may not be supported on many installations (see the example after this list).
- `--no-https` now applies to connections made to library services specified in `library://<hostname>/...` URIs. `remote add --insecure` may be used to configure endpoints that are only accessible via http (see the example after this list).
- The experimental `--nvccli` flag will use `nvidia-container-cli` to set up the container for Nvidia GPU operation. SingularityCE will not bind GPU libraries itself. Environment variables that are used with Nvidia's `nvidia-docker` runtime to configure GPU visibility / driver capabilities & requirements are parsed by the `--nvccli` flag from the environment of the calling user. By default, the `compute` and `utility` GPU capabilities are configured. The `use nvidia-container-cli` option in `singularity.conf` can be set to `yes` to always use `nvidia-container-cli` when supported. Note that in a setuid install, `nvidia-container-cli` will be run as root with required ambient capabilities. `--nvccli` is not currently supported in the hybrid fakeroot (setuid install + `--fakeroot`) workflow. Please see documentation for more details.
- The `--apply-cgroups` flag can be used to apply cgroups resource and device restrictions on a system using the v2 unified cgroups hierarchy. The resource restrictions must still be specified in the v1 / OCI format, which will be translated into v2 cgroups resource restrictions and eBPF device restrictions (see the example after this list).
- A new `--mount` flag and `SINGULARITY_MOUNT` environment variable can be used to specify bind mounts in `type=bind,source=<src>,destination=<dst>[,options...]` format. This improves CLI compatibility with other runtimes, and allows binding paths containing `:` and `,` characters (using CSV style escaping; see the example after this list).
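A minimal sketch of a `%test` section that writes to the container filesystem during a `--writable-tmpfs` build; the definition file is hypothetical:

```sh
cat > test.def <<'EOF'
Bootstrap: docker
From: alpine:3.14

%test
    # Succeeds only because the tmpfs overlay makes / writable here.
    touch /writable-check
EOF

singularity build --writable-tmpfs --fakeroot test.sif test.def
```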
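An illustrative use of `--compat` (the image URI is just an example):

```sh
# Shorthand for --containall --no-init --no-umask --writable-tmpfs.
singularity run --compat docker://alpine:3.14
```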
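A sketch of configuring an http-only endpoint; the hostname is hypothetical:

```sh
# Register a library endpoint reachable only over http.
singularity remote add --insecure myremote cloud.example.com

# --no-https also applies to library:// URIs that name a host explicitly.
singularity pull --no-https library://cloud.example.com/user/alpine:latest
```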
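A sketch of `--apply-cgroups` with restrictions written in the v1 / OCI style TOML format (file contents and image name are illustrative):

```sh
cat > cgroups.toml <<'EOF'
# v1 / OCI style limits; translated to v2 controls on a unified hierarchy.
[memory]
    limit = 536870912
EOF

singularity run --apply-cgroups cgroups.toml mycontainer.sif
```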
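And a sketch of `--mount` CSV escaping for paths containing `,` or `:` (paths are hypothetical):

```sh
# A plain bind mount, made read-only with the ro option.
singularity run --mount type=bind,source=/data/input,destination=/input,ro \
    mycontainer.sif

# Quote a field CSV-style when the path itself contains a comma.
singularity run --mount 'type=bind,"source=/data/odd,name",destination=/odd' \
    mycontainer.sif
```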
Bug fixes
- The `oci` commands will operate on systems that use the v2 unified cgroups hierarchy.
Thanks / Reporting Bugs
Thanks to our contributors for code, feedback, and testing efforts!
As always, please report any bugs to: https://github.com/sylabs/singularity/issues/new
If you think that you've discovered a security vulnerability please report it to: [email protected]
Have fun!
Downloads
Please use the singularity-ce-3.9.0-rc.1.tar.gz download below to obtain and install SingularityCE 3.9.0-rc.1. The GitHub auto-generated 'Source Code' downloads do not include the required dependencies.