Docker - Build, Ship, and Run any App, Anywhere




This article is about docker-compose CLI flags. For details / context / examples, see my Docker Compose article.


Flag		Usage
-d, --detach	run containers in the background and give the shell prompt back

Where is the Docker container data stored on the host machine ?

  1. get the container ID :
    docker ps
    c031860c837e	sonatype/nexus:2.14.4-03	
  2. inspect it :
    docker inspect c031860c837e
    (returns a wealth of JSON data)
  3. for details about the host-side storage, search these keys :
    • MergedDir
    • LowerDir
    • UpperDir

docker inspect c031860c837e | grep -E '(Merged|Lower|Upper)Dir' | sed -r -e 's/[a-f0-9]{64}/__ID__/g' -e 's|:/|\n\t     /|g' -e 's/^ {16}//'
This command gets the values of the keys listed above and formats the results :
  • replaces the 64-character hexadecimal IDs with __ID__
  • displays the multiple values of LowerDir one per line
  • strips leading spaces and indentation for readability
"LowerDir": "/var/lib/docker/overlay2/__ID__-init/diff
"MergedDir": "/var/lib/docker/overlay2/__ID__/merged",
"UpperDir": "/var/lib/docker/overlay2/__ID__/diff",

Let's clarify bind mounts, volumes, --mount and --volume


Overview (source)

  • Docker containers are used to run applications in isolated environments. By default, all the changes inside the container are lost when the container stops. If we want to keep data between runs, Docker bind mounts and volumes can help.
  • Attaching a volume to a container creates a long-term connection between the container and the volume. Even when the container has exited, the relationship still exists (details).

What are these : bind mounts, volumes, --mount and --volume ?

  • The Docker documentation is particularly misleading, as it introduces the notions of bind mounts and volumes as well as the --mount and --volume flags, one after the other, without clearly defining them.
  • IT professionals often use the term volume to refer both to volumes and to bind mounts, which makes things even harder for newbies.
So let's sort this out :
bind mount
  • a file or directory of the host is mounted into a container
  • relies on the host's filesystem + directory structure, whereas containers are supposed to be environment-independent : get the image, start it, enjoy
  • bind mounts are sometimes called Host volumes
volume
  • a volume is a directory that is
    • within Docker's storage directory on the host machine (typically /var/lib/docker/volumes)
    • created by Docker, and whose contents are managed by Docker
  • there are different kinds of volumes :
    • anonymous volumes :
      • if you don't need to control how a volume is named, let Docker handle that for you too : it will get a 64-character hexadecimal ID
      • after docker-compose down && docker-compose up -d a new empty anonymous volume gets attached to the container. The old one doesn't disappear : Docker doesn't delete volumes unless you tell it to. (source)
      • even though volumes were introduced as storage for data to persist between runs, anonymous volumes are not suited for this purpose.
    • named volumes :
      • have a name that suggests which project / application they belong to
      • after docker-compose down && docker-compose up -d, you'll get the volume that was attached to the container before docker-compose down (source)
flags : --mount and --volume
  • do not assume the 1st is for bind mounts and the 2nd is for volumes : both flags apply to both uses, in different contexts
  • long story short : both flags can do similar things, but with a different syntax
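As an illustration, here is the same read-only named volume attached once per flag style ("myvol", "/app/data" and "nginx" are made-up example names). The commands are printed rather than executed, so the 2 syntaxes can be compared side by side :

```shell
# --volume packs source:target:options into one field ;
# --mount spells them out as key=value pairs
v_style='docker run --volume myvol:/app/data:ro nginx'
m_style='docker run --mount type=volume,source=myvol,target=/app/data,readonly nginx'
printf '%s\n%s\n' "$v_style" "$m_style"
```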

What do bind mounts and volumes have in common ?

  • both are designed to keep data between runs
  • both "connect" something outside the container with something inside it
    • this "something" can be a directory or a single file
  • both end up as data written to the host's filesystem
    • the "where" and "how" are part of their differences (read below)
  • both can be declared with the --mount and --volume flags
  • both can be declared
    • from the CLI
    • in a Dockerfile
    • or in a docker-compose.yml file

What differentiates bind mounts and volumes ?

The main difference is the "outside" part stated above : the storage on the host side :
  • with bind mounts : that "outside" thing is a directory or file that already exists and which I (or other programs) already interact with (examples : /etc/whatever/thing.conf or /data/)
  • with volumes :
    • Docker manages most of this (create volume, read/write data, delete volume) automatically and you needn't worry about it
      • depending on options, there may be leftover volumes using storage space, so regular wipes may be necessary
    • volumes are stored locally on the host filesystem in /var/lib/docker/volumes

What's the difference between --mount and --volume ?

  • The documentation says : As opposed to bind mounts, all options for volumes are available for both --mount and -v flags., which translates to :
    • all options for volumes are available for both --mount and -v flags : you may define a volume using either --mount or --volume, you'll have access to the same options
    • As opposed to bind mounts, : it's the contrary for bind mounts : "not all options are available for both --mount and -v flags" (... and they let us guess ??? )
  • --mount is more verbose (multiple key-value pairs) and explicit while --volume combines all the options together in one field
  • The documentation favors neither one over the other. As far as I can tell, most examples found on the internet use --volume.

How to declare a bind mount ? A volume ?

  • CLI (-v / --volume) :
    • bind mount :
      -v /path/on/host:/path/in/container
      If the 1st parameter has one or more / (e.g. /path/on/host) :
        • it's a path, and the -v statement is about a bind mount
        • otherwise, it's a volume name
    • anonymous volume :
      -v /path/in/container
    • named volume :
      -v volumeName:/path/in/container
      • volumeName must be unique within the host
      • there is a 3rd parameter for "options", allowing to declare a read-only volume (:ro) ; it defaults to :rw, which is why it's frequently omitted
  • Dockerfile :
    • bind mount : you can't (details) :
      • containers instantiated from the same image would (try to) have concurrent access to the same resources (denied by design)
      • it would be a security issue if any Docker image, once instantiated, had read/write access to the host filesystem
    • anonymous volume :
      VOLUME /path/in/container
    • named volume : you can't. Otherwise, 2 containers instantiated from the same image would try to create 2 volumes with the same name, which is not possible (details : 1, 2).
  • docker-compose.yml : see the examples below :
version: '3.7'
services:			service names below are just examples
  db:
    volumes:
      - ./db:/var/lib/mysql
  web:
    volumes:
      - ./web:/var/www/html
version: '3.1'
services:			service name below is just an example
  drupal:
    image: drupal:8.2-apache
    ports:
      - 8080:80
version: "3"

    image: db
    volumes:			#1 use the named volume
      - myvol:/var/lib/db	myvol created with #3

    image: backup-service
    volumes:			#2 use the named volume
      - myvol:/var/lib/backup	myvol created with #3

volumes:			#3 create the named
  myvol:			volume myvol
In this example, the database's data directory is shared with another service as a volume so that it can be periodically backed up.
(sources : 1, 2)
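The path-vs-name rule for -v's 1st field can be sketched in shell (a simplified mirror of Docker's behaviour, not Docker code) :

```shell
# Classify the 1st field of a -v argument :
# a field containing "/" is a host path (bind mount), otherwise a volume name
classify() {
  first="${1%%:*}"                  # keep everything before the 1st ":"
  case "$first" in
    */*) echo 'bind mount' ;;       # contains a "/" : it's a host path
    *)   echo 'volume' ;;           # otherwise : it's a volume name
  esac
}
classify '/path/on/host:/path/in/container'   # prints: bind mount
classify 'myvol:/path/in/container'           # prints: volume
```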

docker-compose.yml directives

build
configuration options that are applied at build time
version: '3.8'
services:			service name below is just an example
  myService:
    build:
      context: './path/to/myService-7.3.1/'
    image: myService:7.3.1
other example
When using both build and image: imageName:imageTag, imageName:imageTag refers to the image built by Compose, not to the base image.
env_file (details : 1, 2)
  • in docker-compose.yml :
    services:			service name below is just an example
      elastic:
        stop_grace_period: '5m'
        ports:
          - '9207:9200'
        volumes:
          - 'elastic_data:/usr/share/elasticsearch/data:rw'
          - './logs/elastic:/usr/share/elasticsearch/logs:rw'
        env_file:
          - './env/elastic.env'
  • in ./env/elastic.env :
    The expected format is : NAME=value, e.g. :
    _JAVA_OPTIONS='-Xmx4g -Xms4g'
  • which gives a file tree like :
    .				project root
    ├── docker-compose.yml
    └── env
        └── elastic.env
    Whatever the location, environment variables will be loaded when running docker-compose up (details).

    difference between .env and env/serviceName.env files (source) :

    • .env :
      • default file to store environment variables + values
      • found in the same directory as docker-compose.yml
    • env/serviceName.env :
      • path must be specified :
        • in docker-compose.yml : with env_file
        • in CLI : with --env-file
      • path + file name env/serviceName.env is purely conventional
      • allows storing the environment file anywhere and naming it freely
environment
  • dictionary format :
      environment:
        RACK_ENV: development
        SHOW: 'true'		quotes are necessary around boolean-likes in YAML files because of this and that
  • array format :
      environment:
        - RACK_ENV=development
        - SHOW=true
ports
publish container ports on the host :
  ports:
    - "hostPort:containerPort"
volumes
  • short syntax :
    • source :
      • volume name
      • path on the host side
        • absolute path : (explicit)
        • relative path : relative to the directory containing the docker-compose.yml file
    • target : path in the container (i.e. mount point)
    • mode :
      • ro
      • rw (default)
  • long syntax
  • driver : you may also see code like (volume name below is just an example) :
      volumes:
        myVolume:
          driver: local
    • local :
      • means the corresponding volume will be created on the same host where the container is running. This is the default setting and can be omitted in most cases (sources : 1, 2).
      • accepts options similar to the mount command when running on GNU/Linux (details)
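Putting the pieces together, a minimal sketch of a named volume declared with the default local driver (service and volume names are made up) :

```yaml
version: '3.8'
services:
  app:
    image: nginx
    volumes:
      - appData:/usr/share/nginx/html:ro   # named volume, read-only in this service
volumes:
  appData:
    driver: local   # default driver, may be omitted
```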

Dockerfile directives


ADD [--chown=kevin:developers] source destination
  • copy source into the filesystem of the image at the destination path
  • destination can either be :
    • an absolute path
    • a path relative to WORKDIR
  • About .zip files (sources : 1, 2 ) :
    • looks like there is no —and never will be— a solution to copy+extract .zip files with ADD, like it actually works for .tar.gz & al
    • workarounds :
      • unpack + repack the .zip archive into .tar.gz format, then ADD it
      • COPY + RUN unzip + RUN rm
  • ADD or COPY ?
ARG
  • What's the difference between ARG and ENV-defined variables ?
  • defines a variable that users can pass at build-time (example) :
    • Dockerfile :
      ARG myVariable
    • build command :
      docker build --build-arg myVariable=value
  • it is possible to define an optional default value, which will be used when no value is provided at build-time :
    ARG myVariable=defaultValue
  • the variable name is often written uppercase, but I've found no rule enforcing this.
  • the defined variable lives (details) :
    • from the line it is defined (it can override an existing variable)
    • until the end of the build stage (end of Dockerfile or subsequent FROM)
  • since every build stage :
    • starts with FROM
    • ends with
      • the next FROM (if any)
      • the end of the Dockerfile
    an ARG declared before the first FROM is outside of any build stage : it can only be used in FROM lines, unless re-declared within the stage
  • the value of a variable can be used with the $ prefix (like shell variables, source) :
    ARG myVariable=42
    RUN echo $myVariable
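Putting it together, a minimal Dockerfile sketch (variable names and values are made up) :

```dockerfile
ARG baseTag=16.04               # before FROM : only usable in FROM lines
FROM ubuntu:$baseTag
ARG myVariable=42               # re-declared inside the build stage, with a default
RUN echo "myVariable=$myVariable"
```

Building with docker build --build-arg myVariable=7 . overrides the default value.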
CMD
sets the default command to be executed when running a container from an image ; can be overridden by arguments passed to docker run
COPY
When (trying to) COPY files into a directory that doesn't exist yet :
COPY someFile /path/to/destinationDir
  • will not result in /path/to/destinationDir/someFile
  • will result in /path/to/destinationDir being a regular file : this is someFile copied there + renamed into destinationDir
Workaround :
COPY someFile /path/to/destinationDir/
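The same gotcha can be reproduced with plain cp, no Docker needed (file names are made up) :

```shell
# Copying a file to a non-existent "directory" without a trailing slash
# just creates a regular file by that name, exactly like COPY does
cd "$(mktemp -d)"
echo 'hello' > someFile
cp someFile destinationDir      # destinationDir did not exist beforehand...
[ -f destinationDir ] && echo 'destinationDir is a regular file'
```

The workaround differs, though : cp someFile destinationDir/ fails when the directory is missing, whereas Docker's COPY creates it.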
ENTRYPOINT
defines the executable that gets run when starting a container ; docker run command-line arguments are appended to it
ENV
The difference between ARG and ENV-defined variables is that (source) :
  • ARG is for build-time variables. These are used during the docker build phase —i.e. building the image— and won't last after it.
  • whereas ENV is for run-time variables : those that have to exist while a container runs.
EXPOSE
  • informs Docker that the container listens on the specified network ports at runtime. More explicitly, EXPOSE portNumber states that something inside the container is listening on port portNumber.
  • how this internal port is plugged into the external world is defined when actually running the container, via docker run flags (details)
  • you can specify whether the port listens on TCP (default) or UDP
  • EXPOSE :
    • does not enable communication on the specified port(s) (aka it does not publish the port(s))
    • is just a kind of "documentation" between the person who builds the image and the person who runs the container about which ports are intended to be published
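A minimal sketch (port numbers are made up) :

```dockerfile
EXPOSE 80/tcp        # documents that the app listens on port 80 ; publishes nothing
```

Publishing still happens at run time, e.g. docker run -p 8080:80 someImage maps host port 8080 to the EXPOSE'd port 80.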
FROM
  • initializes a new build stage on top of baseImage for extra instructions
  • ARG is the only instruction that may precede FROM (Understand how ARG and FROM interact)
  • AS newName can be specified to refer to this new build stage in a subsequent FROM
  • examples :
    • FROM baseImage [AS newName]
    • FROM baseImage:tag [AS newName]
USER
sets the user name (or UID) and optionally the user group (or GID) to use
  • when running the image
  • and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile
VOLUME
  • there are 2 syntaxes :
    • "string" syntax :
      • VOLUME /data
      • VOLUME /volume1 /volume2
    • JSON "array" syntax :
      • VOLUME [ "/data" ]
      • VOLUME [ "/volume1", "/volume2" ]
  • whatever syntax is used, specifying 2 directories must not be mistaken for some kind of "mount" operation : VOLUME /dir1 /dir2 :
    • does not "mount" /dir1 into /dir2 (or vice-versa)
    • is just a list of 2 directories (inside the container) to be used as 2 anonymous volumes
  • paths specified here are all inside the container
  • since WORKDIR defaults to /, VOLUME /data and VOLUME data may/may not refer to the same thing depending on circumstances
WORKDIR
  • sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile
  • If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.
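A minimal sketch (paths are made up) :

```dockerfile
FROM alpine
WORKDIR /app         # created on the fly if it doesn't exist in the base image
COPY someFile .      # lands in /app/someFile
RUN pwd              # the working directory is now /app
```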

Docker and AppArmor

Looks like both would fit pretty well with each other. So what's the problem ?

Containers are NOT VMs : the kernel is shared between the container and its host.
This means :
  • kernel options enabled on the host are/may be enabled on the container as well
  • you can not enable a kernel option in a container if it's not already enabled in the host
Generally speaking, fiddling with kernel options in the context of containers goes beyond what containers are for.

Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Setup :

  1. make sure you have installed the Docker engine
  2. As root :
    • find the latest Docker Compose version number
    • edit and run :
      dockerComposeVersion='1.27.4'; localDockerComposeBinary='/usr/local/bin/docker-compose'; curl -L "https://github.com/docker/compose/releases/download/$dockerComposeVersion/docker-compose-$(uname -s)-$(uname -m)" -o "$localDockerComposeBinary" && chmod +x "$localDockerComposeBinary"; curl -L "https://raw.githubusercontent.com/docker/compose/$dockerComposeVersion/contrib/completion/bash/docker-compose" -o /etc/bash_completion.d/docker-compose
    • if you plan to run Docker Compose as a non-root user :
      chmod 755 "$localDockerComposeBinary"
  3. check installation :
    docker-compose --version
    docker-compose version 1.27.4, build 40524192

Let's build things with it (source) :

Target : have 4 containers (based on this) : 2 web and 2 DB. We'll use these with Ansible later.

Our working environment (source) :

workDir='./dockerCompose'; mkdir -p "$workDir" && cd "$workDir"

The Dockerfile : ./workDir/Dockerfile (source) :

We'll actually re-use this one.

The Compose file : ./workDir/docker-compose.yml (source) :

version: '3.7'

services:
  ubuntu_web1:
    hostname: web1
    build: .				this service uses the image built from the Dockerfile found in this directory (details)
    container_name: c_ubuntu_web1
    tty: true

  ubuntu_web2:
    hostname: web2
    build: .
    container_name: c_ubuntu_web2
    tty: true

  ubuntu_db1:
    hostname: db1
    build: .
    container_name: c_ubuntu_db1
    tty: true

  ubuntu_db2:
    hostname: db2
    build: .
    container_name: c_ubuntu_db2
    tty: true

Build and run (source) :

docker-compose up
These 11 steps are actually repeated for each configured service :
Building ubuntu_db2
Step 1/11 : FROM ubuntu:16.04
 ---> 5e13f8dd4c1a
Step 2/11 : ARG userName='ansible'
 ---> Using cache
 ---> 7f163145edfd
Step 3/11 : ARG homeDir="/home/$userName"
 ---> Using cache
 ---> 9025b899c1df
Step 4/11 : ARG sshDir="$homeDir/.ssh"
 ---> Using cache
 ---> b32d4b7d1abf
Step 5/11 : ARG authorizedKeysFile="$sshDir/authorized_keys"
 ---> Using cache
 ---> 8477ab5bcac4
Step 6/11 : ARG publicSshKey='./'
 ---> Using cache
 ---> 5b2235eb9fbf
Step 7/11 : RUN apt-get update &&       apt-get install -y iproute2 iputils-ping openssh-server &&      apt-get clean &&        useradd -d "$homeDir" -s /bin/bash -m "$userName" &&    mkdir -p "$sshDir"
 ---> Using cache
 ---> 41cfeaaa7dda
Step 8/11 : COPY "$publicSshKey" "$authorizedKeysFile"		this step doesn't like it when source files are symlinks
 ---> Using cache
 ---> 3dcab2010a2b
Step 9/11 : RUN chown -R "$userName":"$userName" "$sshDir" &&   chmod 700 "$sshDir" &&  chmod 600 "$authorizedKeysFile"
 ---> Using cache
 ---> 1f511354cd72
Step 10/11 : EXPOSE 22
 ---> Using cache
 ---> deb5199624a4
Step 11/11 : CMD [ "sh", "-c", "service ssh start; bash"]
 ---> Using cache
 ---> 3bb266505c4c

Successfully built 3bb266505c4c
Successfully tagged dockercompose_ubuntu_db2:latest
WARNING: Image for service ubuntu_db2 was built because it did not already exist. To rebuild this image you must use docker-compose build or docker-compose up --build.
Then :
Creating c_ubuntu_web1 ... done
Creating c_ubuntu_db1  ... done
Creating c_ubuntu_web2 ... done
Creating c_ubuntu_db2  ... done
Attaching to c_ubuntu_db1, c_ubuntu_web1, c_ubuntu_web2, c_ubuntu_db2
c_ubuntu_db1   |  * Starting OpenBSD Secure Shell server sshd            [ OK ]
c_ubuntu_web1  |  * Starting OpenBSD Secure Shell server sshd            [ OK ]
c_ubuntu_web2  |  * Starting OpenBSD Secure Shell server sshd            [ OK ]
c_ubuntu_db2   |  * Starting OpenBSD Secure Shell server sshd            [ OK ]
(No prompt given at that time, so Ctrl-z + bg to get it back)
docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS               NAMES
76f413236d70        dockercompose_ubuntu_db2    "sh -c 'service ssh …"   3 minutes ago      Up 3 minutes       22/tcp              c_ubuntu_db2
240e1505313e        dockercompose_ubuntu_web2   "sh -c 'service ssh …"   3 minutes ago      Up 3 minutes       22/tcp              c_ubuntu_web2
b5b6887d2b02        dockercompose_ubuntu_db1    "sh -c 'service ssh …"   3 minutes ago      Up 3 minutes       22/tcp              c_ubuntu_db1
95f988a02ff2        dockercompose_ubuntu_web1   "sh -c 'service ssh …"   3 minutes ago      Up 3 minutes       22/tcp              c_ubuntu_web1
docker image ls
REPOSITORY                  TAG                 IMAGE ID            CREATED            SIZE
dockercompose_ubuntu_db1    latest              3bb266505c4c        4 minutes ago      205MB
dockercompose_ubuntu_db2    latest              3bb266505c4c        4 minutes ago      205MB
dockercompose_ubuntu_web1   latest              3bb266505c4c        4 minutes ago      205MB
dockercompose_ubuntu_web2   latest              3bb266505c4c        4 minutes ago      205MB
Maybe it was not necessary to build 4 "distinct" images : they all share the same ID, so it's actually a single image with 4 tags...

Further steps :

  • To stop them all :
    docker-compose stop
  • To remove all containers and all images (source) :
    docker rm $(docker ps -a -q) && docker rmi $(docker images -q) --force
    --force will remove everything, including images that may not be related to your current work.
  • You may now alter the Dockerfile or the Compose file and rebuild :
    docker-compose up --build
  • To start everything again AND still get the shell prompt back :
    docker-compose up -d
    Creating c_ubuntu_db1  ... done
    Creating c_ubuntu_web1 ... done
    Creating c_ubuntu_web2 ... done
    Creating c_ubuntu_db2  ... done

Now let's use this "virtualized infrastructure" :


Install Docker CE (Community Edition) on Debian

Procedure (source) :

As root :
  1. apt update && apt upgrade
  2. The official install procedure lists prerequisite packages that are dummy / transitional ; the Debian packages site suggests replacements for some of these.
  3. Add Docker's official GPG key:
    curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    Check :
    apt-key fingerprint 0EBFCD88
    pub   rsa4096 2017-02-22 [SCEA]
          9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
    uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
    sub   rsa4096 2017-02-22 [S]
  4. Add the stable repository :
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
  5. install Docker Engine :
    apt update && apt install docker-ce docker-ce-cli
  6. At this step :
    • Docker Engine is installed and running.
    • The docker group is created but still empty.
    • You'll need to use sudo to run Docker commands.
    • You may configure this (and other things) right now by jumping to the postinstall steps or continue and check your installation below.
  7. check installation :
    docker run hello-world
    • if you get this error message :
      docker: Error response from daemon: Get proxyconnect tcp: dial tcp connect: connection refused
      read this article
    • otherwise, you should get :
      Unable to find image 'hello-world:latest' locally		you can safely ignore this line since it's the 1st launch
      latest: Pulling from library/hello-world
      0e03bdcc26d7: Pull complete
      Digest: sha256:e7c70bb24b462baa86c102610182e3efcb12a04854e8c582838d92970a09f323
      Status: Downloaded newer image for hello-world:latest
      Hello from Docker!
      This message shows that your installation appears to be working correctly.		
      To generate this message, Docker took the following steps:
       1. The Docker client contacted the Docker daemon.
       2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
       3. The Docker daemon created a new container from that image which runs the
          executable that produces the output you are currently reading.
       4. The Docker daemon streamed that output to the Docker client, which sent it
          to your terminal.

Post-install steps :

  • below is the "For the impatient" procedure ; see the source documentation for details
  • don't forget to set the nonRootUser value below
  1. As root :
    nonRootUser='bob'; dockerGroupName='docker'; getent group "$dockerGroupName" || groupadd "$dockerGroupName"; usermod -aG "$dockerGroupName" "$nonRootUser"; echo "User '$nonRootUser' must log out+in for changes to take effect."
  2. as the non-root user defined above, check installation :
    docker run hello-world
    You should get the "Hello from Docker" message.

Docker glossary

container (source)
  • runtime instance of an image (i.e. what the image becomes in memory when actually executed).
  • It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so via a Dockerfile.
  • Containers run apps natively on the host machine's kernel. They have better performance characteristics than virtual machines that only get virtual access to host resources through a hypervisor. Containers can get native access, each one running in a discrete process, taking no more memory than any other executable.
  • Container != VM
Dockerfile (source, directives reference)
text file containing all the commands you would normally execute manually in order to build a Docker image :
  • what's inside / outside of the image
  • from what + how the image is built
  • how the app inside the image interacts with the rest of the world (i.e. access to resources such as network, storage, etc)
  • some environment variables
  • what to do when the image launches
  • ...
image
  • An image is static and lives only on disk (source).
  • An image is an executable package that includes everything needed to run an application (source).
  • An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes.
Union file system (source)
  • Docker images are stored as series of read-only layers. When we start a container, Docker takes the read-only image and adds a read-write layer on top. If the running container modifies an existing file, the file is copied out of the underlying read-only layer and into the top-most read-write layer where the changes are applied. The version in the read-write layer hides the underlying file, but does not destroy it : it still exists in the underlying layer.
  • When a Docker container is deleted, relaunching the image will start a fresh container without any of the changes made in the previously running container. Those changes are lost.
  • Docker calls this combination of read-only layers with a read-write layer on top a Union File System. (more about union mount)
volume
  • a special directory within one or more containers that bypasses the Union File System
  • a volume :
    • is designed to persist data, independent of the container's life cycle
    • is a directory of the host filesystem that is mounted into a container
    • may be anonymous or named
  • since :
    • the host path may not exist on all systems
    • Docker images are expected to be portable
    the host path for a volume cannot be specified from a Dockerfile. Instead, it can be specified :
    • as a CLI argument when playing with Docker alone
    • within docker-compose.yml otherwise
  • context and keywords :
  • More explanations :
  • Let's play with volumes :
    • List the running containers :
      docker ps
      f64391d2ae1b	atlassian_confluence	"./"	5 days ago	Up 4 days>8090-8091/tcp	atlassian_confluence_1
    • Get details about their volumes :
      docker inspect -f "" atlassian_confluence_1 | grep -A7 '"Type": "volume"'
      	"Type": "volume",
      	"Name": "atlassian_confluence_tmp",
      	"Source": "/var/lib/docker/volumes/atlassian_confluence_tmp/_data",
      	"Destination": "/tmp",
      	"Driver": "local",
      	"Mode": "rw",
      	"RW": true,
      	"Propagation": ""
      	"Type": "volume",
      	"Name": "atlassian_confluence_data",
      	"Source": "/var/lib/docker/volumes/atlassian_confluence_data/_data",	the Source is inside the container (see Mountpoint below)
      	"Destination": "/var/atlassian/application-data/confluence",		the Destination is on the host
      	"Driver": "local",
      	"Mode": "rw",
      	"RW": true,
      	"Propagation": ""
    • More details about a single volume :
      docker volume inspect atlassian_confluence_data
      		"CreatedAt": "2020-10-08T23:09:51+02:00",
      		"Driver": "local",
      		"Labels": {
      			"com.docker.compose.project": "atlassian",
      			"com.docker.compose.version": "1.21.2",
      			"com.docker.compose.volume": "confluence_data"
      		"Mountpoint": "/var/lib/docker/volumes/atlassian_confluence_data/_data",
      		"Name": "atlassian_confluence_data",
      		"Options": null,
      		"Scope": "local"

Container != VM (source) :

The underlying kernel is shared by all containers on the same host. The container-specific stuff is implemented via kernel namespaces. Each container has its own :
  • process tree (pid namespace)
  • filesystem (mnt namespace)
  • network namespace
  • RAM allocation + CPU time
Everything else is shared with the host : in general the host machine sees / contains everything inside the container from file system to processes etc. You can issue a ps on the host and see processes running inside the container (source).
Docker containers are not VMs, hence everything is actually running natively on the host and is using the host kernel directly. Each container has its own user namespace (like good ol' chroot jails). There are tools / features which make sure containers only see their own processes, have their own file-system layered onto the host file-system, and a networking stack which pipes to the host networking stack.

Docker random notes

What's the purpose / benefit of using Docker ?

  • Docker allows running —on the same host— applications that have concurrent / conflicting requirements
    • if you have application_A (which requires lib_x in version 1) and application_B (which requires the same lib_x in version 2), you may run each application in its own container with its own independent set of libs
    • same goes on with applications that require conflicting libs : application_A requires lib_x while application_B requires lib_y, and lib_x and lib_y can't be installed on the same host : running each app in its own container solves the problem
  • you can run an application anywhere in the same environment as in production :
    • OS, libs, 3rd-party software,
    • this is convenient for dev + test + debug
  • makes it easier to :
    • deploy :
      • since you've already tested your development in a production-like environment
    • rollback :
      • basically just a git revert + deploy

About Docker itself :

  • Since March 2014, Docker does not rely anymore on LXC but on libcontainer as its default execution driver (source).
  • libcontainer wraps around existing Linux container APIs, particularly cgroups and namespaces.

Handle images :

List images :
docker images
docker image ls
Both commands look equivalent (source).
Delete the image imageId :
docker rmi imageId
Where are images stored on the filesystem of the machine running Docker ?
docker info | grep 'Docker Root Dir'
Docker Root Dir: /var/lib/docker
  • This is the default path.
  • There is not a lot —if anything— that can be done directly at the filesystem level with images, this is mostly FYI.
Full details about a given image (source) :
  1. docker images
    whatever/myImage	latest		e00a21e210f9		22 months ago		19.2MB
  2. docker image inspect e00a21e210f9 | grep -E '(Lower|Merged|Upper|Work)Dir' | sed 's|/var|\n/var|g'
    			"LowerDir": "
    			"MergedDir": "
    			"UpperDir": "
    			"WorkDir": "
  3. These are the different layers an image is made of :
    • LowerDir : read-only layers of the image
    • UpperDir : read-write layer that represents changes
    • MergedDir : result of LowerDir + UpperDir that is used by Docker to run the container
    • WorkDir : internal directory used by the overlay2 storage driver, should be empty

Handle containers :

List running containers :
docker ps
List all containers (running + non-running) :
docker ps -a
Delete the container containerId :
docker rm containerId

Miscellaneous :

Start the Docker daemon (see also) :
systemctl start docker.service
Get system-wide information :
docker info
Main configuration file (source) :
/etc/docker/daemon.json, but since options can be passed from the command line, this file looks optional.
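For reference, a minimal /etc/docker/daemon.json (the values are illustrative examples, not recommendations) :

```json
{
  "data-root": "/var/lib/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```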