This repository contains the source for the `docker` snap package, which provides a distribution of Docker Community Edition (CE) for Ubuntu Core 16 and other snap-compatible systems. It is built from an upstream Docker CE release tag with some patches to fit the snap format, and is available for the armhf, arm64, amd64, i386, and ppc64el architectures. The rest of this page describes installation, usage, and development.
NOTE: Docker's official documentation (https://docs.docker.com) does not yet discuss the `docker` snap package.
To install the latest stable release of Docker CE using `snap`:
sudo snap install docker
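As a quick sanity check (the `hello-world` image is just an illustrative choice, and the daemon may take a few seconds to come up after installation), you can confirm the service is running and start a throwaway container:

snap services docker
sudo docker run --rm hello-world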
If you are using Ubuntu Core 16:
- Connect the `docker:home` plug, as it's not auto-connected by default:
sudo snap connect docker:home
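If you want to double-check that the plug is now connected, the same interface listing used later on this page works here as well (the `:home` slot should show `docker` as a connected plug):

sudo snap interfaces docker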
If you are using an alternative snap-compatible Linux distribution ("classic" in snap lingo) and would like to run `docker` as a normal user:
- Create and join the `docker` group:
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
- If you added the group while the `docker` snap was running, you will also need to disable and re-enable it (a quick verification example follows this list):
sudo snap disable docker
sudo snap enable docker
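As a minimal check that the group setup took effect (the `hello-world` image is only an example, and you may need to log out and back in before other shells pick up the new group membership):

groups | grep docker
docker run --rm hello-world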
Docker should function normally, with the following caveats:
- All files that `docker` needs access to should live within your `$HOME` folder.
  - If you are using Ubuntu Core 16, you'll need to work within a subfolder of `$HOME` that is readable by root. See docker-archive/docker-snap#8.
- Additional certificates used by the Docker daemon to authenticate with registries need to be located in `/var/snap/docker/common/etc/certs.d` instead of `/etc/docker/certs.d` (see the example after this list).
- Specifying the option `--security-opt="no-new-privileges=true"` with the `docker run` command (or the equivalent in docker-compose) will cause the container to fail to start. This is due to an underlying external constraint on AppArmor (see https://bugs.launchpad.net/snappy/+bug/1908448 for details).
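As an illustration of the certificates caveat above, a registry CA certificate that would normally go under `/etc/docker/certs.d` is placed under the snap's path instead. The registry address `myregistry.example.com:5000` and the `ca.crt` file are placeholders here, and the per-registry directory layout is assumed to follow Docker's usual `certs.d` convention:

sudo mkdir -p /var/snap/docker/common/etc/certs.d/myregistry.example.com:5000
sudo cp ca.crt /var/snap/docker/common/etc/certs.d/myregistry.example.com:5000/ca.crt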
If the system is found to have an nvidia graphics card available, and the host has the required nvidia libraries installed, the nvidia container toolkit will be set up and configured to enable use of the local GPU from docker. This can be used to enable use of CUDA from a docker container, for instance.
To enable proper use of the GPU within docker, the nvidia runtime must be used. By default, the nvidia runtime will be configured to use CDI mode, and the appropriate nvidia CDI config will be automatically created for the system. You just need to specify the nvidia runtime when running a container.
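To check whether the nvidia runtime has been registered with the Docker daemon, you can query the daemon's runtime list (the Go template below is just one way to filter the output, and `sudo` may not be needed depending on your group setup):

sudo docker info --format '{{json .Runtimes}}'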
On Ubuntu Core, the required nvidia libraries are available in the nvidia-core22 snap. This requires connection of the graphics-core22 content interface provided by the nvidia-core22 snap, which should be automatically connected once installed.
On classic systems, the required nvidia libraries are available in the nvidia container toolkit packages. Instructions on how to install them can be found in NVIDIA's documentation.
If you want to make some adjustments to the automatically generated runtime config, you can use the `nvidia-support.runtime.config-override` snap config to completely replace it:
snap set docker nvidia-support.runtime.config-override="$(cat custom-nvidia-config.toml)"
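If you later want to inspect or drop the override, the value can be read back and removed with the standard snap configuration commands (this assumes the snap regenerates its default config once the override is unset):

snap get docker nvidia-support.runtime.config-override
snap unset docker nvidia-support.runtime.config-override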
By default, the `device-name-strategy` for the CDI config will use `index`. Optionally, you can specify an alternative from the currently supported values:
- `index`
- `uuid`
- `type-index`
snap set docker nvidia-support.cdi.device-name-strategy=uuid
Setting up the nvidia support should be automatic if the hardware is present, but you may wish to specifically disable it so that setup is not even attempted. You can do so via the following snap config:
snap set docker nvidia-support.disabled=true
Generic example usage would look something like:
docker run --rm --runtime nvidia --gpus all {cuda-container-image-name}
or
docker run --rm --runtime nvidia --env NVIDIA_VISIBLE_DEVICES=all {cuda-container-image-name}
If your container image already has the appropriate environment variables set, you may be able to just specify the nvidia runtime with no additional arguments required.
Please refer to this guide for more detail regarding environment variables that can be used.
NOTE: library paths and discovery are handled automatically, but binary paths are not, so if you wish to test using something like the `nvidia-smi` binary passed into the container from the host, you could either specify the full path or set the PATH environment variable.
e.g.
docker run --rm --runtime=nvidia --gpus all --env PATH="${PATH}:/var/lib/snapd/hostfs/usr/bin" ubuntu nvidia-smi
Developing the `docker` snap package is typically performed on a "classic" Ubuntu distribution. The instructions here are written for Ubuntu 16.04 "Xenial".
- Install the snap tooling (requires `snapd>2.21` and `snapcraft>=2.26`):
sudo apt-get install snapd snapcraft
sudo snap install core
- Check out this repository and build the `docker` snap package:
git clone https://github.com/docker/docker-snap
cd docker-snap
sudo snapcraft
- Install the newly-created snap package:
sudo snap install --dangerous docker_[VER]_[ARCH].snap
- Manually connect the relevant plugs and slots which are not auto-connected:
sudo snap connect docker:privileged :docker-support
sudo snap connect docker:support :docker-support
sudo snap connect docker:firewall-control :firewall-control
sudo snap connect docker:docker-cli docker:docker-daemon
sudo snap disable docker
sudo snap enable docker
You should end up with output similar to:
sudo snap interfaces docker
Slot Plug
:docker-support docker:privileged,docker:support
:firewall-control docker
:home docker
:network docker
:network-bind docker
docker:docker-daemon docker:docker-cli
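At this point, a quick smoke test of the locally built snap is to run a throwaway container against the freshly connected daemon (the `hello-world` image is just an example and requires network access to pull):

sudo docker run --rm hello-world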
We rely on spread (https://github.com/snapcore/spread) to run full-system tests on Ubuntu Core 16. We also provide a utility script (run-spread-tests.sh) to launch the spread tests. It will:
- Fetch the primary snaps (kernel, core, gadget) and build a custom Ubuntu Core image with them
- Boot the image in the qemu emulator
- Deploy the test suites in the emulation environment
- Execute full-system testing
First, install the ubuntu-image tool, since we need to create a custom Ubuntu Core image during test preparation.
sudo snap install --beta --classic ubuntu-image
Second, install the qemu-kvm package, since we use it as the backend to run the spread tests.
sudo apt install qemu-kvm
You also need a spread binary with classic-mode support in order to launch kvm from its context. You can either build spread from this branch or download the spread snap package here.
sudo snap install --classic --dangerous spread_2017.05.24_amd64.snap
You may build the docker snap locally in advance and then execute the spread tests with the following commands:
snapcraft
./run-spread-tests.sh
Instead of testing a locally built snap, you can also specify `--test-from-channel` to fetch the snap under test from a specific channel of the store. The snap from the `candidate` channel is used as the test target by default if the `--channel` option is not specified.
./run-spread-tests.sh --test-from-channel --channel=stable
In order to run an individual spread test, please run the following command:
spread spread/main/installation
This will run the test case under the spread/main/installation folder.
You can specify the `SNAP_CHANNEL` environment variable to install a snap from a specific channel for the testing as well.
SNAP_CHANNEL=candidate spread spread/main/update_policy