Docker - Build, Ship, and Run any App, Anywhere


Docker and AppArmor

Looks like both would fit pretty well with each other. So what's the problem ?

Containers are NOT VMs : the kernel is shared between the container and its host.
This means :
  • kernel options enabled on the host are (or may be) available in the container as well
  • you cannot enable a kernel option in a container if it's not already enabled on the host
Generally speaking, fiddling with kernel options in the context of containers goes beyond what containers are for.
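The shared-kernel point is easy to verify (a sketch : the `docker run` half assumes a working Docker install and is skipped otherwise) :

```shell
# The host and any container report the same kernel release,
# because there is only one kernel.
hostKernel=$(uname -r)
echo "host kernel      : $hostKernel"
if command -v docker >/dev/null 2>&1; then
    containerKernel=$(docker run --rm alpine uname -r 2>/dev/null || true)
    echo "container kernel : $containerKernel"    # same value as above
fi
```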

Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Setup :

  1. make sure you have installed the Docker engine
  2. As root :
    localDockerComposeBinary='/usr/local/bin/docker-compose'; curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o "$localDockerComposeBinary" && chmod +x "$localDockerComposeBinary"; curl -L https://raw.githubusercontent.com/docker/compose/1.24.1/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
  3. check installation as a non-root user :
    docker-compose --version
    docker-compose version 1.24.1, build 4667896b

Let's build things with it (source) :

Target : have 4 containers (based on this) : 2 web and 2 DB. We'll use these with Ansible later.

Our working environment (source) :

workDir='./dockerCompose'; mkdir -p "$workDir" && cd "$workDir"

The Dockerfile : "$workDir/Dockerfile" (source) :

We'll actually re-use this one.

The Compose file : "$workDir/docker-compose.yml" (source) :

version: '3.7'
services:

  ubuntu_web1:
    hostname: web1
    build: .				# this service uses the image built from the Dockerfile found in this directory (details)
    container_name: c_ubuntu_web1
    tty: true

  ubuntu_web2:
    hostname: web2
    build: .
    container_name: c_ubuntu_web2
    tty: true

  ubuntu_db1:
    hostname: db1
    build: .
    container_name: c_ubuntu_db1
    tty: true

  ubuntu_db2:
    hostname: db2
    build: .
    container_name: c_ubuntu_db2
    tty: true

Build and run (source) :

docker-compose up
These 11 steps are actually repeated for each configured service :
Building ubuntu_db2
Step 1/11 : FROM ubuntu:16.04
 ---> 5e13f8dd4c1a
Step 2/11 : ARG userName='ansible'
 ---> Using cache
 ---> 7f163145edfd
Step 3/11 : ARG homeDir="/home/$userName"
 ---> Using cache
 ---> 9025b899c1df
Step 4/11 : ARG sshDir="$homeDir/.ssh"
 ---> Using cache
 ---> b32d4b7d1abf
Step 5/11 : ARG authorizedKeysFile="$sshDir/authorized_keys"
 ---> Using cache
 ---> 8477ab5bcac4
Step 6/11 : ARG publicSshKey='./ansible.pub'
 ---> Using cache
 ---> 5b2235eb9fbf
Step 7/11 : RUN apt-get update &&       apt-get install -y iproute2 iputils-ping openssh-server &&      apt-get clean &&        useradd -d "$homeDir" -s /bin/bash -m "$userName" &&    mkdir -p "$sshDir"
 ---> Using cache
 ---> 41cfeaaa7dda
Step 8/11 : COPY "$publicSshKey" "$authorizedKeysFile"		this step doesn't like it when source files are symlinks
 ---> Using cache
 ---> 3dcab2010a2b
Step 9/11 : RUN chown -R "$userName":"$userName" "$sshDir" &&   chmod 700 "$sshDir" &&  chmod 600 "$authorizedKeysFile"
 ---> Using cache
 ---> 1f511354cd72
Step 10/11 : EXPOSE 22
 ---> Using cache
 ---> deb5199624a4
Step 11/11 : CMD [ "sh", "-c", "service ssh start; bash"]
 ---> Using cache
 ---> 3bb266505c4c

Successfully built 3bb266505c4c
Successfully tagged dockercompose_ubuntu_db2:latest
WARNING: Image for service ubuntu_db2 was built because it did not already exist. To rebuild this image you must use docker-compose build or docker-compose up --build.
Then :
Creating c_ubuntu_web1 ... done
Creating c_ubuntu_db1  ... done
Creating c_ubuntu_web2 ... done
Creating c_ubuntu_db2  ... done
Attaching to c_ubuntu_db1, c_ubuntu_web1, c_ubuntu_web2, c_ubuntu_db2
c_ubuntu_db1   |  * Starting OpenBSD Secure Shell server sshd            [ OK ]
c_ubuntu_web1  |  * Starting OpenBSD Secure Shell server sshd            [ OK ]
c_ubuntu_web2  |  * Starting OpenBSD Secure Shell server sshd            [ OK ]
c_ubuntu_db2   |  * Starting OpenBSD Secure Shell server sshd            [ OK ]
(No prompt given at that time, so Ctrl-z + bg to get it back)
docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS               NAMES
76f413236d70        dockercompose_ubuntu_db2    "sh -c 'service ssh …"   3 minutes ago      Up 3 minutes       22/tcp              c_ubuntu_db2
240e1505313e        dockercompose_ubuntu_web2   "sh -c 'service ssh …"   3 minutes ago      Up 3 minutes       22/tcp              c_ubuntu_web2
b5b6887d2b02        dockercompose_ubuntu_db1    "sh -c 'service ssh …"   3 minutes ago      Up 3 minutes       22/tcp              c_ubuntu_db1
95f988a02ff2        dockercompose_ubuntu_web1   "sh -c 'service ssh …"   3 minutes ago      Up 3 minutes       22/tcp              c_ubuntu_web1
docker image ls
REPOSITORY                  TAG                 IMAGE ID            CREATED            SIZE
dockercompose_ubuntu_db1    latest              3bb266505c4c        4 minutes ago      205MB
dockercompose_ubuntu_db2    latest              3bb266505c4c        4 minutes ago      205MB
dockercompose_ubuntu_web1   latest              3bb266505c4c        4 minutes ago      205MB
dockercompose_ubuntu_web2   latest              3bb266505c4c        4 minutes ago      205MB
Maybe it was not necessary to build 4 distinct images that are actually identical... Note they all share the same image ID (3bb266505c4c) : thanks to the build cache, these are 4 tags pointing at a single image, so only the tags are redundant, not the storage.
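One way to avoid the duplicated build passes (a sketch, untested here) : build and tag the image once, then point every service at that tag with the `image:` key instead of `build:` :

```yaml
# docker build -t myubuntu .        <- build once, beforehand
# then, in docker-compose.yml :
version: '3.7'
services:
  ubuntu_web1:
    hostname: web1
    image: myubuntu       # reuse the pre-built image instead of rebuilding
    container_name: c_ubuntu_web1
    tty: true
  # ... same `image: myubuntu` line for the 3 other services
```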

Further steps :

  • To stop them all :
    docker-compose stop
  • To remove all containers and all images (source) :
    docker rm $(docker ps -a -q) && docker rmi $(docker images -q) --force
    Beware : this removes ALL containers and ALL images on the host, including images that are not related to your current work ; --force even removes tagged or otherwise in-use images.
  • You may now alter the Dockerfile or the Compose file and rebuild :
    docker-compose up --build
  • To start everything again AND still get the shell prompt back :
    docker-compose up -d
    Creating c_ubuntu_db1  ... done
    Creating c_ubuntu_web1 ... done
    Creating c_ubuntu_web2 ... done
    Creating c_ubuntu_db2  ... done
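A more targeted cleanup than the `docker rm` + `docker rmi` combo above (a sketch, guarded so it's a no-op outside a Compose project) : `docker-compose down` only removes what Compose itself created.

```shell
# Removes the containers and the default network created by `up`;
# --rmi local also removes the images Compose built (and nothing else).
composeBin=$(command -v docker-compose || true)
if [ -n "$composeBin" ] && [ -f docker-compose.yml ]; then
    docker-compose down --rmi local
fi
```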

Now let's use this "virtualized infrastructure" :


How to get a container's IP address ?
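One way (a sketch : the helper name `containerIp` is mine, but it wraps the same `docker inspect` Go template used in the SSH-keys procedure below) :

```shell
# Hypothetical helper : print a container's IP address by container name.
# Requires a running container to return anything.
containerIp() {
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$1"
}
# usage : containerIp myubuntu1    -> e.g. 172.17.0.3
```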


How to simulate a VM with a container ?

  • This may not be the best (or even a good) approach : I'm a Docker newbie. Feel free to send feedback or advice using the envelope on the top-right corner of this article.
  • Everything below is done without security in mind : this shouldn't go further than dev/testing environments.

Target : an Ubuntu machine on which I can log in via ssh to play with Ansible.

There's an advanced version of this procedure describing the setup with SSH keys.

Build a custom Docker image (source) :

  1. cd my/working/directory; touch Dockerfile
  2. edit Dockerfile :
    FROM ubuntu:16.04
    
    RUN apt-get update && \
    	apt-get install -y iproute2 iputils-ping openssh-server && \
    	apt-get clean && \
    	useradd -d /home/ansible -s /bin/bash -m ansible && \
    	echo ansible:elbisna | chpasswd
    
    EXPOSE 22
    
    CMD [ "sh", "-c", "service ssh start; bash"]
  3. build the image (source) :
    • docker build -t imagename .
      imagename must be all lowercase
    • docker build -t myubuntu .
      Sending build context to Docker daemon   2.56kB
      Step 1/4 : FROM ubuntu:16.04
       ---> 5e13f8dd4c1a
      Step 2/4 : RUN apt-get update &&     apt-get install -y iproute2 iputils-ping openssh-server &&     apt-get clean &&     useradd -d /home/ansible -s /bin/bash -m ansible &&    echo ansible:elbisna | chpasswd
       ---> Running in 9f0ef385f89b
      Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
      Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
      ...
      ...
      ...
      Step 4/4 : CMD [ "sh", "-c", "service ssh start; bash"]
       ---> Running in a377195f387b
      Removing intermediate container a377195f387b
       ---> 65887834663f
      Successfully built 65887834663f
      Successfully tagged myubuntu:latest
  4. check it :
    docker image ls
    REPOSITORY	TAG	IMAGE ID	CREATED		SIZE
    myubuntu	latest	65887834663f	2 minutes ago	205MB

Run this image (i.e. create a container) :

A few things I learnt while experimenting with this part :

  • You can't create an image with an already running process (such as sshd) since an image is just a static file. Instead, you'll have to "instantiate" that image (i.e. create a container) and start a process within this container.
  • You could do so with :
    docker run [options] imagename service ssh start
    but ...
  • A Docker container exits when its main process finishes (details). So if you're running something like :
    docker run [options] imagename command
    the container will start, run command and exit. (This is why I used this hack)

Actually run the image :

  1. create and start a container :
    docker run -dit --name myubuntu1 myubuntu
    docker run -dit --name myubuntu1 --hostname hostname myubuntu
    a914ca30d7181e7d39a272f633a8e180151c2b441845d0eae882bd7d8b16fdff
  2. log into it :
    docker attach myubuntu1
    root@a914ca30d718:/#
    root@hostname:/#
  3. get the container's IP address (there is a simpler solution) :
    root@a914ca30d718:/# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    	link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    	inet 127.0.0.1/8 scope host lo
    	   valid_lft forever preferred_lft forever
    141: eth0@if142: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    	link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    	inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
    	   valid_lft forever preferred_lft forever
  4. make sure sshd is running :
    root@a914ca30d718:/# service ssh status
     * sshd is running
    root@a914ca30d718:/# ss -punta
    Netid	State	Recv-Q	Send-Q	Local Address:Port	Peer Address:Port
    tcp	LISTEN	0	128		    *:22		   *:*		users:(("sshd",pid=31,fd=3))
    tcp	LISTEN	0	128		   :::22		  :::*		users:(("sshd",pid=31,fd=4))
  5. disconnect from the container while keeping it alive : as seen above, ending the main process of the container also "exits" the container. So, once logged in, exit (or any equivalent shortcut such as Ctrl-d) ends Bash, which ends the container.
    • Do : press Ctrl-p then Ctrl-q (source)
      root@a914ca30d718:/# read escape sequence
      then back to your system prompt
    • If you did the wrong thing :
      1. see the mess you've created :
        docker ps -a
        CONTAINER ID	IMAGE		COMMAND			CREATED		STATUS				PORTS	NAMES
        a914ca30d718	myubuntu	"sh -c 'service ssh …"	43 minutes ago	Exited (0) 6 seconds ago		myubuntu1
      2. start it again :
        docker start myubuntu1
        myubuntu1
      3. docker ps
        CONTAINER ID	IMAGE		COMMAND			CREATED		STATUS		PORTS	NAMES
        a914ca30d718	myubuntu	"sh -c 'service ssh …"	45 minutes ago	Up 24 seconds	22/tcp	myubuntu1
      4. re-attach to it :
        docker attach myubuntu1
        root@a914ca30d718:/#
      5. you can now leave nicely
  6. connect via SSH into the container :
    ssh ansible@172.17.0.3
    ansible@a914ca30d718:~$
    This time, since you're connecting through SSH, you can leave with exit.

Advanced version : let's use SSH keys rather than passwords :

Since the general procedure is pretty similar, I won't give full details. Please refer to the steps above.
  1. edit Dockerfile :
    FROM ubuntu:16.04
    
    ARG userName='ansible'
    ARG homeDir="/home/$userName"
    ARG sshDir="$homeDir/.ssh"
    ARG authorizedKeysFile="$sshDir/authorized_keys"
    ARG publicSshKey='./ansible.pub'
    
    RUN apt-get update && \
    	apt-get install -y iproute2 iputils-ping openssh-server && \
    	apt-get clean && \
    	useradd -d "$homeDir" -s /bin/bash -m "$userName" && \
    	mkdir -p "$sshDir"
    
    COPY "$publicSshKey" "$authorizedKeysFile"
    
    RUN chown -R "$userName":"$userName" "$sshDir" && \
    	chmod 700 "$sshDir" && \
    	chmod 600 "$authorizedKeysFile"
    
    EXPOSE 22
    
    CMD [ "sh", "-c", "service ssh start; bash"]
  2. build the image
  3. start a container
  4. log in via SSH :
    ssh -i ansible ansible@$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myubuntu1)

Install Docker CE (Community Edition) on Debian

Procedure (source) :

As root :
  1. apt update && apt upgrade
  2. curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    OK
  3. apt-key fingerprint 0EBFCD88
    pub   rsa4096 2017-02-22 [SCEA]
          9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
    uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
    sub   rsa4096 2017-02-22 [S]
  4. add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
  5. apt update && apt install docker-ce docker-ce-cli containerd.io
  6. check installation :
    docker run hello-world
    Unable to find image 'hello-world:latest' locally		you can safely ignore this line since it's the 1st launch
    latest: Pulling from library/hello-world
    1b930d010525: Pull complete
    Digest: sha256:451ce787d12369c5df2a32c85e5a03d52cbcef6eb3586dd03075f3034f10adcd
    Status: Downloaded newer image for hello-world:latest
    
    Hello from Docker!
    This message shows that your installation appears to be working correctly.		
    
    To generate this message, Docker took the following steps:
     1. The Docker client contacted the Docker daemon.
     2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
        (amd64)
     3. The Docker daemon created a new container from that image which runs the
        executable that produces the output you are currently reading.
     4. The Docker daemon streamed that output to the Docker client, which sent it
        to your terminal.
    
    ...
    (some extra lines not interesting here)
    ...
    

Post-install steps (source) :

  1. As root (this is the "for the impatient" version. See the source documentation for details) :
    dockerGroupName='docker'; nonRootUser='bob'; getent group "$dockerGroupName" || groupadd "$dockerGroupName"; usermod -aG "$dockerGroupName" "$nonRootUser"; echo "User '$nonRootUser' must log out+in for changes to take effect."
  2. check installation, as the non-root user defined above :
    docker run hello-world
    Hello from Docker!
    This message shows that your installation appears to be working correctly.
    
    

How to log (i.e. open a shell) into a container ?

All methods below describe root login to a container.

  • assuming there is only one running container :
    docker exec -it $(docker ps | awk '!/^CONTAINER ID/ {print $1}') bash
  • referring to the last created container :
    docker exec -it $(docker ps -q -l) bash
  • referring to the container by name :
    1. Add to ~/.bash_aliases :
      # "Log As root In Container"
      laric() {
      	containerName=$1
      	[ -z "$containerName" ] && {
      		cat <<-EOF
      			No container name specified. Must be one of :
      			$(docker ps | awk '!/NAMES/ {print "\t"$NF}' | sort)
      			EOF
      		return
      		}
      	containerId=$(docker ps | awk -v containerName="$containerName" '$NF==containerName {print $1}')
      	docker exec -it "$containerId" bash
      	}
    2. reload aliases :
      source ~/.bash_aliases
    3. log into myContainer :
      laric myContainer

Detailed procedure :

  1. get the container ID :
    docker ps
    CONTAINER ID	IMAGE				COMMAND			CREATED		STATUS			PORTS		NAMES
    4bac82b3ba41	molecule_local/ubuntu:18.04	"bash -c 'while true…"	28 minutes ago	Up 28 minutes				ubuntu1804
  2. run a single command inside the container :
    docker exec -it 4bac82b3ba41 hostname
    ubuntu1804
  3. to actually log into the container, just run a shell :
    docker exec -it 4bac82b3ba41 bash
    root@ubuntu1804:/# 

How to start / stop / restart / pause / ... a container ?

  1. list running containers
    docker ps
    CONTAINER ID	IMAGE				COMMAND			CREATED		STATUS			PORTS		NAMES
    4bac82b3ba41	molecule_local/ubuntu:18.04	"bash -c 'while true…"	28 minutes ago	Up 28 minutes				ubuntu1804
    51a02953b18a	molecule_local/centos:7		"bash -c 'while true…"	4 hours ago	Up 4 hours				instance
  2. pause a container
    docker pause 51a02953b18a
    51a02953b18a
    docker ps
    CONTAINER ID	IMAGE				COMMAND			CREATED		STATUS			PORTS		NAMES
    4bac82b3ba41	molecule_local/ubuntu:18.04	"bash -c 'while true…"	28 minutes ago	Up 28 minutes				ubuntu1804
    51a02953b18a	molecule_local/centos:7		"bash -c 'while true…"	4 hours ago	Up 4 hours (Paused)			instance
  3. resume it
    docker unpause 51a02953b18a
    51a02953b18a
    docker ps
    CONTAINER ID	IMAGE				COMMAND			CREATED		STATUS			PORTS		NAMES
    4bac82b3ba41	molecule_local/ubuntu:18.04	"bash -c 'while true…"	28 minutes ago	Up 28 minutes				ubuntu1804
    51a02953b18a	molecule_local/centos:7		"bash -c 'while true…"	4 hours ago	Up 4 hours				instance
  4. stop it
    docker stop 51a02953b18a
    51a02953b18a
    docker ps
    CONTAINER ID	IMAGE				COMMAND			CREATED		STATUS			PORTS		NAMES
    4bac82b3ba41	molecule_local/ubuntu:18.04	"bash -c 'while true…"	28 minutes ago	Up 28 minutes				ubuntu1804
    Not listed anymore because docker ps only lists running containers.
    docker ps -a
    CONTAINER ID	IMAGE				COMMAND			CREATED		STATUS			PORTS		NAMES
    4bac82b3ba41	molecule_local/ubuntu:18.04	"bash -c 'while true…"	28 minutes ago	Up 28 minutes				ubuntu1804
    51a02953b18a	molecule_local/centos:7		"bash -c 'while true…"	4 hours ago	Exited (137) 13 minutes ago			instance

Command "daemon" is deprecated, and will be removed in Docker 17.12. Please run `dockerd` directly

Situation :

  1. Everything looks fine when starting Docker :
    systemctl status docker.service
     docker.service - Docker Application Container Engine
       Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
      Drop-In: /etc/systemd/system/docker.service.d
               └─proxy.conf
       Active: active (running) since Wed 2018-01-24 12:46:59 CET; 11s ago		use this with --since below
         Docs: https://docs.docker.com
     Main PID: 4777 (dockerd)
        Tasks: 17
       Memory: 30.6M
          CPU: 362ms
       CGroup: /system.slice/docker.service
               ├─4777 dockerd -H fd://
               └─4786 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/container
    
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.199165764+01:00" level=warning msg="Your kernel does not support swap memory limit"
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.199209411+01:00" level=warning msg="Your kernel does not support cgroup rt period"
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.199219535+01:00" level=warning msg="Your kernel does not support cgroup rt runtime"
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.199544739+01:00" level=info msg="Loading containers: start."
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.378657775+01:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip ca
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.443060341+01:00" level=info msg="Loading containers: done."
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.453716413+01:00" level=info msg="Daemon has completed initialization"
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.453767567+01:00" level=info msg="Docker daemon" commit=89658be graphdriver=overlay version=17.05.0-ce
    Jan 24 12:46:59 myServer docker[4777]: time="2018-01-24T12:46:59.462163955+01:00" level=info msg="API listen on /var/run/docker.sock"
    Jan 24 12:46:59 myServer systemd[1]: Started Docker Application Container Engine.
  2. ... but :
    journalctl -u docker --since '2018-01-24 12:46'
    Jan 24 12:46:58 myServer systemd[1]: Starting Docker Application Container Engine...
    Jan 24 12:46:58 myServer docker[4777]: Command "daemon" is deprecated, and will be removed in Docker 17.12. Please run `dockerd` directly.
    

Solution :

  1. You'll have to edit a configuration file read by systemd when starting Docker. On my side this was /etc/systemd/system/docker.service.d/proxy.conf (not installed / configured by myself), but any /etc/systemd/system/docker.service.d/whatever.conf should do the trick
  2. Change the line :
    ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS
    into :
    ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
  3. Make systemd aware of the change :
    systemctl daemon-reload
  4. Restart Docker (date is there to filter logs at the next step) :
    date; systemctl restart docker.service
    Wed Jan 24 12:49:53 CET 2018
    Nothing more, meaning Docker started successfully
  5. journalctl -u docker --since '2018-01-24 12:49' | grep deprecated
    Nothing found : the warning has disappeared.
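For reference, a sketch of such a drop-in (assuming the file name whatever.conf used above ; the empty `ExecStart=` line is only needed if systemd complains about multiple ExecStart definitions, as it resets the one inherited from the main unit) :

```ini
# /etc/systemd/system/docker.service.d/whatever.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
```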

Error starting daemon: error initializing graphdriver: driver not supported

Situation :

systemctl start docker.service
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
systemctl status docker.service
 docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─proxy.conf
   Active: failed (Result: exit-code) since Wed 2018-01-24 12:09:21 CET; 15s ago		use this with --since below
     Docs: https://docs.docker.com
  Process: 4216 ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE)
 Main PID: 4216 (code=exited, status=1/FAILURE)
      CPU: 86ms

Jan 24 12:09:20 myServer systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 12:09:20 myServer systemd[1]: Failed to start Docker Application Container Engine.
Jan 24 12:09:20 myServer systemd[1]: docker.service: Unit entered failed state.
Jan 24 12:09:20 myServer systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 24 12:09:21 myServer systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jan 24 12:09:21 myServer systemd[1]: Stopped Docker Application Container Engine.
Jan 24 12:09:21 myServer systemd[1]: docker.service: Start request repeated too quickly.
Jan 24 12:09:21 myServer systemd[1]: Failed to start Docker Application Container Engine.
Jan 24 12:09:21 myServer systemd[1]: docker.service: Unit entered failed state.
Jan 24 12:09:21 myServer systemd[1]: docker.service: Failed with result 'exit-code'.
journalctl -u docker --since '2018-01-24 12:09'
Jan 24 12:09:17 myServer systemd[1]: Starting Docker Application Container Engine...
Jan 24 12:09:17 myServer dockerd[4176]: time="2018-01-24T12:09:17.260224422+01:00" level=info msg="libcontainerd: new containerd process, pid: 4183"
Jan 24 12:09:18 myServer dockerd[4176]: time="2018-01-24T12:09:18.264676211+01:00" level=warning msg="failed to rename /var/lib/docker/tmp for background deletion: %!s(<nil>). Deleting synchr
Jan 24 12:09:18 myServer dockerd[4176]: time="2018-01-24T12:09:18.278198186+01:00" level=error msg="[graphdriver] prior storage driver aufs failed: driver not supported"
Jan 24 12:09:18 myServer dockerd[4176]: Error starting daemon: error initializing graphdriver: driver not supported
Jan 24 12:09:18 myServer systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 12:09:18 myServer systemd[1]: Failed to start Docker Application Container Engine.
Jan 24 12:09:18 myServer systemd[1]: docker.service: Unit entered failed state.
Jan 24 12:09:18 myServer systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 24 12:09:18 myServer systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jan 24 12:09:18 myServer systemd[1]: Stopped Docker Application Container Engine.
(same block repeated 3 times, then :)
Jan 24 12:09:21 myServer systemd[1]: docker.service: Start request repeated too quickly.
Jan 24 12:09:21 myServer systemd[1]: Failed to start Docker Application Container Engine.
Jan 24 12:09:21 myServer systemd[1]: docker.service: Unit entered failed state.
Jan 24 12:09:21 myServer systemd[1]: docker.service: Failed with result 'exit-code'.

Details :

Looks like a Debian bug (source) :

Unfortunately, this is because as soon as the kernel went to 4.0, overlayfs was merged upstream, and the Debian kernel team dropped the AUFS patches, so 4.0+ kernels in Debian no longer support AUFS and there's not a lot we can do to change or fix that (nor much upstream can do), and probably not much they should do, arguably (since the AUFS patches aren't exactly a source of warm fuzzies with upstream kernel developers). Even migration of the data would be nearly impossible without access to a kernel that did have AUFS long enough to transfer the data into a different storage driver.

Solution :

  1. Add to /etc/docker/daemon.json (create it if missing) (source (with obsolete method), up-to-date method) :
    {
        "storage-driver": "overlay"
    }
  2. date; systemctl start docker.service
    date will help filter logs hereafter
    Wed Jan 24 12:46:58 CET 2018
  3. see also
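On more recent kernels (4.x+) and Docker versions, `overlay2` is the recommended successor to `overlay` ; the same fix then reads :

```json
{
    "storage-driver": "overlay2"
}
```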

How to backup + restore all Docker data ?

Situation :

Docker is running fine, except that _AFTER_ putting it in production (i.e. once there's data on it), we've detected that the LVM configuration has problems (something Red Hat-specific related to Docker and its storage driver). We had other hosts configured the same (wrong) way, but since those had no Docker data yet, we were able to simply delete + re-create the storage with the right settings.
As for hosts already having Docker data, there is no simple "export + restore" functionality, hence this procedure.
  • Docker newbie here so there _may_ be errors in this procedure or in the way I use some commands
  • Some settings / usages _may_ be specific to my company / tech. leader "suggestions" (I'm open to comments, use the envelope icon above )
  • The steps describing how to properly configure the storage to make Docker happy are not covered here.

Solution :

Preliminary steps :

  1. If running a virtual machine, take a snapshot, just in case...
  2. Fix NFS issues :
    Not enough local storage, so I wanted to copy data to a network share, but had some difficulties :

Backup data :

Steps below are run as root.

  1. declare some shell variables once and for all (so that you'll be able to copy-paste further steps as-is) :

    nfsServer='10.27.25.5'; nfsExportPath="/filer/projects/$HOSTNAME"; nfsMountPoint='/mnt/nfs'; dockerDataDir='/var/lib/docker'; dockerDataDir_backupDir="$nfsMountPoint/varLibDocker"; backupDirVolumes='backupDockerVolumes'; volumesToBackup='volume1Name;volume1Dir volume2Name;volume2Dir volume3Name;volume3Dir'; backupDirContainers='backupDockerContainers'; containerData='container1Name;container1ServiceName container2Name;container2ServiceName'

    • volumesToBackup and containerData are actually shell "tuples"
    • what volume<n>Dir relates to is not (yet) clear to me. Looks like it's something "inside" the volume itself, or some kind of mount point...
    • container<n>ServiceName will be used to build commands such as : systemctl start container<n>ServiceName
  2. declare a utility function :

    getContainerIdByName() { imageName=$1; docker ps -a | awk -v needle="$imageName" '$0 ~ needle {print $1}'; }

  3. make a backup of dockerDataDir (/var/lib/docker) via rsync :

    systemctl stop docker.service; mkdir -p "$dockerDataDir_backupDir"; umount "$nfsMountPoint"; mount -t nfs -o v3,async "$nfsServer:$nfsExportPath" "$nfsMountPoint" && time rsync -avz --delete "$dockerDataDir/" "$dockerDataDir_backupDir"; systemctl start docker.service

    This step provides double security for the data (snapshot + file copy), but is not mandatory. It can take a long time, depending on the amount of data and the network/disks performance.
    Restarting Docker causes some files to change (containers/<hash>/..., devicemapper/devicemapper/data/...) and rsync will need a looong time to resynchronize files again. Don't run this command twice, it will only stress disks and network.
  4. backup Docker volumes :

    cd "$nfsMountPoint"; mkdir -p "$backupDirVolumes"; for tuple in $volumesToBackup; do volumeName=$(echo $tuple | cut -d ';' -f 1); volumeDir=$(echo $tuple | cut -d ';' -f 2); archiveName="$backupDirVolumes/$volumeName.tar"; echo -e "\n######## WORKING ON '$volumeName' '$volumeDir' ########"; docker run -it --rm -v "$volumeName":"$volumeDir" -v "$PWD/$backupDirVolumes":"/$backupDirVolumes" alpine tar -cf "$archiveName" "$volumeDir"; ls -lh "$archiveName"; done

  5. backup containers (images, actually) :

    absolutePathToBackupDir="$nfsMountPoint/$backupDirContainers"; mkdir -p "$absolutePathToBackupDir"; for tuple in $containerData; do containerName=$(echo $tuple | cut -d ';' -f 1); containerId=$(getContainerIdByName "$containerName"); echo -e "\n######## WORKING ON '$containerName' ########"; imageName="$containerName:$(date +%F_%H-%M-%S)"; archiveName="$absolutePathToBackupDir/$containerName.tar"; echo ' stopping...'; docker stop "$containerId"; echo ' committing...'; docker commit "$containerId" "$imageName"; echo ' saving...'; docker save "$containerName" -o "$archiveName"; ls -lh "$archiveName"; done
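The "tuple" trick and the getContainerIdByName matching can both be sketched in isolation, against canned data (no Docker daemon needed ; the container IDs below are made up) :

```shell
# 1) each space-separated item is a "tuple"; fields are split on ';' with cut
volumesToBackup='volume1Name;volume1Dir volume2Name;volume2Dir'
for tuple in $volumesToBackup; do
    volumeName=$(echo "$tuple" | cut -d ';' -f 1)
    volumeDir=$(echo "$tuple" | cut -d ';' -f 2)
    echo "name=$volumeName dir=$volumeDir"
done

# 2) getContainerIdByName greps the `docker ps -a` output for a needle and
#    prints the 1st column; demonstrated here on a fake listing
fakePs='CONTAINER ID   IMAGE       NAMES
76f413236d70   myubuntu    c_ubuntu_db2
240e1505313e   myubuntu    c_ubuntu_web2'
containerId=$(printf '%s\n' "$fakePs" | awk -v needle='c_ubuntu_db2' '$0 ~ needle {print $1}')
echo "$containerId"    # -> 76f413236d70
```

Note that the needle matches anywhere in the line (image name included), so a partial name matching several containers would print several IDs.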

Configure the storage :

Not described here, but we did it with Ansible.

Restore the data :

  • Steps below are run as root.
  • If you left the shell session where you defined the variables before the backup, load those variables again.
  1. re-create the Docker volumes (more on the 'tuple' hack) :

    for tuple in $volumesToBackup; do volumeName=$(echo $tuple | cut -d ';' -f 1); echo -e "\nCreating volume '$volumeName' : "; docker volume create "$volumeName" && echo 'OK' || echo 'KO'; done

  2. extract the Docker volume archives we created earlier :

    cd "$nfsMountPoint"; for tuple in $volumesToBackup; do volumeName=$(echo $tuple | cut -d ';' -f 1); volumeDir=$(echo $tuple | cut -d ';' -f 2); echo -e "\n######## RESTORING '$volumeName' '$volumeDir' ########"; archiveName="$backupDirVolumes/$volumeName.tar"; docker run -it --rm -v "$volumeName":"$volumeDir" -v $PWD"/$backupDirVolumes":"/$backupDirVolumes" alpine tar -xf "$archiveName"; done

  3. restore containers (images, actually) :

    for tuple in $containerData; do containerName=$(echo $tuple | cut -d ';' -f 1); echo -e "\n######## RESTORING CONTAINER '$containerName' ########"; containerToRestore="$nfsMountPoint/$backupDirContainers/$containerName.tar"; [ -f "$containerToRestore" ] && docker load --input "$containerToRestore" || echo "file '$containerToRestore' not found"; done

  4. start the containers :

    for tuple in $containerData; do containerName=$(echo $tuple | cut -d ';' -f 1); containerServiceName=$(echo $tuple | cut -d ';' -f 2); echo -e "\nStarting container '$containerName' ('$containerServiceName') :"; systemctl start "$containerServiceName" && echo 'OK' || echo 'KO'; done

  5. Enjoy !!!

Docker glossary

container (source)
  • runtime instance of an image (i.e. what the image becomes in memory when actually executed).
  • It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so via a Dockerfile.
  • Containers run apps natively on the host machine's kernel. They have better performance characteristics than virtual machines that only get virtual access to host resources through a hypervisor. Containers can get native access, each one running in a discrete process, taking no more memory than any other executable.
  • Container != VM
Dockerfile (source, directives reference)
text file containing all the commands you would normally execute manually in order to build a Docker image :
  • what's inside / outside of the image
  • from what + how the image is built
  • how the app inside the image interacts with the rest of the world (i.e. access to resources such as network, storage, etc)
  • some environment variables
  • what to do when the image launches
  • ...
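As an illustration, a minimal Dockerfile covering the points above could look like this (base image, packages and commands are made up for the example ; this is not a Dockerfile used elsewhere in this document) :

```dockerfile
# from what the image is built
FROM ubuntu:18.04

# some environment variables
ENV APP_ENV=production

# what's inside the image : the application code
COPY ./app /app
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends python3

# how the app interacts with the rest of the world : a network port
EXPOSE 8080

# what to do when the image launches
CMD ["python3", "server.py"]
```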
image
  • An image is static and lives only on disk (source).
  • An image is an executable package that includes everything needed to run an application (source) :
    • the code of the application
    • a runtime
    • libraries
    • environment variables
    • config files
  • An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes.
Union file system (source)
  • Docker images are stored as series of read-only layers. When we start a container, Docker takes the read-only image and adds a read-write layer on top. If the running container modifies an existing file, the file is copied out of the underlying read-only layer and into the top-most read-write layer where the changes are applied. The version in the read-write layer hides the underlying file, but does not destroy it : it still exists in the underlying layer.
  • When a Docker container is deleted, relaunching the image will start a fresh container without any of the changes made in the previously running container. Those changes are lost.
  • Docker calls this combination of read-only layers with a read-write layer on top a Union File System. (more about union mount)
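The copy-on-write mechanism described above can be sketched in plain shell, with directories standing in for layers (this is an analogy of the behaviour, not how overlay2 is actually implemented) :

```shell
# work in a throw-away directory
tmpDir=$(mktemp -d) && cd "$tmpDir"

# 'lower' plays the read-only image layer, 'upper' the container's read-write layer
mkdir lower upper
echo 'original content' > lower/etc_config

# a read falls through to the lower layer when the upper layer has no copy of the file
readFile() { if [ -f "upper/$1" ]; then cat "upper/$1"; else cat "lower/$1"; fi; }

# a write first copies the file up into the read-write layer, then modifies the copy
writeFile() { cp "lower/$1" "upper/$1" 2>/dev/null; echo "$2" > "upper/$1"; }

readFile etc_config		# 'original content' : served from the lower layer
writeFile etc_config 'modified content'
readFile etc_config		# 'modified content' : the upper copy now hides the lower one
cat lower/etc_config		# 'original content' : the lower layer was never touched
```

Deleting the 'upper' directory is the analog of deleting the container : the changes are lost, and a fresh container starts again from the pristine lower layers.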
volume
  • persistent data storage managed by Docker, living outside of the container's union file system (by default under /var/lib/docker/volumes/)
  • data written to a volume survives the deletion of the containers using it, which is why volumes are the preferred way to persist container data

Container != VM (source) :

The underlying kernel is shared by all containers on the same host. The container-specific stuff is implemented via kernel namespaces (isolation) and cgroups (resource limits). Each container has its own :
  • process tree (pid namespace)
  • filesystem (mnt namespace)
  • network namespace
  • RAM allocation + CPU time (enforced by cgroups, not namespaces)
Everything else is shared with the host : in general the host machine sees / contains everything inside the container, from file system to processes, etc. You can issue a ps on the host and see processes running inside the container (source).
Docker containers are not VMs, hence everything actually runs natively on the host and uses the host kernel directly. Each container gets its own set of namespaces (like good ol' chroot jails, but stronger). There are tools / features which make sure containers only see their own processes, have their own file system layered onto the host file system, and a networking stack which pipes to the host networking stack.

Docker random notes

About Docker itself :

  • Since March 2014, Docker no longer relies on LXC but on libcontainer as its default execution driver (source).
  • libcontainer wraps around existing Linux container APIs, particularly cgroups and namespaces.

Handle images :

List images :
  • docker images
  • docker image ls
Both commands are equivalent : docker image ls is the newer "management command" syntax (source).
Delete the image imageId :
docker rmi imageId
Where are images stored on the filesystem of the machine running Docker ?
docker info | grep 'Docker Root Dir'
Docker Root Dir: /var/lib/docker
  • This is the default path.
  • There is not a lot —if anything— that can be done directly at the filesystem level with images, this is mostly FYI.
Full details about a given image (source) :
  1. docker images
    REPOSITORY		TAG		IMAGE ID		CREATED			SIZE
    whatever/myImage	latest		e00a21e210f9		22 months ago		19.2MB
  2. docker image inspect e00a21e210f9 | grep -E '(Lower|Merged|Upper|Work)Dir' | sed 's|/var|\n/var|g'
                    "LowerDir": "
    /var/lib/docker/overlay2/9ceb16316d05119812856edda1c772b3680ff20336859cb79e8d58df8abf787a/diff:
    /var/lib/docker/overlay2/ac535e1ef985bec3d5bc90a5124f4ca14a610b9f007966f7521496aa6b6866ac/diff:
    /var/lib/docker/overlay2/4eeff3b3fc7a14b197827ffae0cab33c0df6a15b08b2f45895c6e987a6f3013a/diff",
                    "MergedDir": "
    /var/lib/docker/overlay2/6923b6f343bd98e0a05826de6794dcb1756b4eb5fad9811fcad773e76b66a737/merged",
                    "UpperDir": "
    /var/lib/docker/overlay2/6923b6f343bd98e0a05826de6794dcb1756b4eb5fad9811fcad773e76b66a737/diff",
                    "WorkDir": "
    /var/lib/docker/overlay2/6923b6f343bd98e0a05826de6794dcb1756b4eb5fad9811fcad773e76b66a737/work"
  3. These are the different layers an image is made of :
    • LowerDir : the read-only layers
    • UpperDir : the top-most layer (for a running container, this is the read-write layer where changes are written)
    • MergedDir : the unified view of LowerDir + UpperDir, i.e. what the container actually sees and runs from
    • WorkDir : internal directory used by the overlay2 storage driver, should be empty

Handle containers :

List running containers :
docker ps
List all containers (running + non-running) :
docker ps -a
Delete the container containerId :
docker rm containerId

Miscellaneous :

Start the Docker daemon (see also) :
systemctl start docker.service
Get system-wide information :
docker info
Main configuration file (source) :
/etc/docker/daemon.json. This file is optional : it does not exist by default, and every option can also be passed to the daemon on the command line.
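As an illustration, a minimal /etc/docker/daemon.json could look like this (the keys below are standard dockerd options ; the values are examples, not recommendations). data-root is the 'Docker Root Dir' reported by docker info :

```json
{
    "data-root": "/var/lib/docker",
    "storage-driver": "overlay2",
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "3"
    }
}
```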