Surviving Docker: Packaging penguins into boxes

Published on 2021-03-11

Docker is typically known as the tool for managing and running containers on Linux.

However, a lot of people seem to have misunderstood what it is used for and why it has seemingly taken center stage in the Linux world.

Those coming from the world of BSD, Solaris, or OpenVZ might see the word ‘container’ and get confused once they try Docker. It looks very familiar but somewhat foreign at the same time.

Those who are creating web applications might have issues with the development-to-deployment cycle and would like a more straightforward method.

So, what is Docker exactly, what problem does it solve, and how can it help you?

Difficulty of delivering software

The problem Docker has set out to solve since the beginning of the project is a rather tricky one: how do you deliver software and have it running in production?

The problem might seem easy: you compile the software, package the binary, and install it on the server.

Easy, right?

But in reality, there’s always this lingering problem: How do you make sure that everything works?

Here’s a common scenario:

  • The server runs RHEL.
  • RHEL ships version 1.0 of a library.
  • Meanwhile, Sam, who’s using Arch Linux, has version 1.5 installed on his machine.
  • Turns out there’s a change between 1.0 and 1.5 that causes a nasty, yet subtle bug.

This is the infamous “it works on my machine” issue. Bugs of this kind are typically very hard to find, as the developers cannot reproduce them locally and only see them happening once the software is deployed.

The typical solution was to use virtual machines. Instead of just delivering packages or binaries, why not deliver virtual machine images, like an appliance?

The virtual machine image will be pre-configured with everything you need to run the software: the Linux distribution, the libraries and the software itself.

The server operator only needs to run the virtual machine on the server with a couple of configuration tweaks and the app will simply run.

And now the developers can test by building the virtual machine image on their machines and running it.

Solved, right? Well…

Janice is using a laptop running Fedora and she can’t run virtual machines because she doesn’t have a lot of RAM, and the server operator found out that the staging server is incapable of running virtual machines for the same reason.

Damn. Well, we can always just ask Sam to do the final tests locally on his machine, but that’s bad for the bus factor.

Ideally, all developers should be able to test their changes locally on their machines to determine if everything is running correctly.

Containers as a medium to deliver items

Shipping companies solve this issue by using ISO containers. ISO containers are fixed and standardized units. The standard itself defines the size of these containers, the maximum weight, and many other intricate details.

As the sender, all you need to do is put your stuff inside of it and ask the shipping company to deliver it to the receiver.

The shipping company doesn’t have to care about what’s inside. As long as you put it in a standard ISO container, they’ll be able to ship it on boats, trucks, and cargo planes.

These boats, trucks, and cargo planes are constructed to support these standard containers and are designed around them. They typically have bays or racks for the containers and have a known maximum capacity based on the number of containers they can carry.

Thanks to the ISO container revolution, the entire shipping infrastructure has improved and been simplified drastically, as it only has to worry about delivering the ISO container, not its contents.

So, why don’t we have something like containers to deliver software?

The container can simply contain a lightweight Linux installation, the required libraries along with the software itself.

And we can easily run it on all the developers’ machines and servers without having to worry about which flavour of Linux they’re running.

Docker images as a medium to deliver software

Docker images contain the software preinstalled, along with the libraries that the software requires.

With the image, you can launch a container which will run the image and the software within it.

With Docker installed, you can try it now:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:308866a43596e83578c7dfa15e27a73011bdd402185a84c5cd7f32a88b501a24
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

You just launched a Docker container with the hello-world Docker image which executes the hello-world binary.

If you want to explore what the environment within the container is like, you can do the following:

$ docker run -it --rm ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
5d3b2c2d21bb: Pull complete
3fc2062ea667: Pull complete
75adf526d75b: Pull complete
Digest: sha256:b4f9e18267eb98998f6130342baacaeb9553f136142d40959a1b46d6401f0f2b
Status: Downloaded newer image for ubuntu:latest
root@c7556396d607:/#

Notes:

  • The -i flag indicates interactive. In simple terms, this allows you to send input to the running program.

  • The -t flag indicates that a TTY needs to be allocated for the container.

    • If you don’t understand what “TTY” means, don’t worry; it is simply required by a lot of interactive command line applications. In practice, you almost always pass -it together whenever you run a container interactively.
    • If you really want to know what this “TTY” business is about, see virtual console.
  • The --rm flag indicates that the container should be removed once it’s stopped.
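
To see the difference these flags make, here’s a quick sketch (real commands; the output is simply whatever the programs print):

$ # Non-interactive one-off command: no -it needed, --rm cleans up afterwards
$ docker run --rm ubuntu cat /etc/os-release
$ # Interactive shell: -it is required so you can type into it
$ docker run -it --rm ubuntu bash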

Docker in practice

Before we dig any deeper, let’s make a simple web application and make a Docker image for it.

This application will use Python and the Flask micro web framework.

First, make an empty folder and place app.py with the following content:

import socket
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return f'Hello from {socket.gethostname()}!'

Don’t forget the requirements.txt which lists the required dependencies. Place this in the same folder as app.py.

flask==1.1.2

You can try running the application locally with flask run (assuming you’ve installed the dependencies on your machine!) but it’s not that important.
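
If you do want to run it locally first, here’s a minimal sketch, assuming you have Python 3 with venv and pip available (Flask’s CLI picks up app.py automatically):

$ python3 -m venv venv
$ . venv/bin/activate
$ pip install -r requirements.txt
$ flask run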

Docker images are created with the help of a Dockerfile. A Dockerfile is simply a recipe to build a Docker image.

Here’s the Dockerfile for our small project:

# Use python:3 as the base of our image
FROM python:3

# Expose port 5000, used by flask as a port to listen
EXPOSE 5000

# Create an APP_DIR environment variable
ENV APP_DIR=/opt/app

# Set our current working directory to $APP_DIR
WORKDIR $APP_DIR

# Copy only requirements.txt and install the dependencies
# This is a common trick to take advantage of layer caching:
# dependencies are only reinstalled when requirements.txt changes!
COPY requirements.txt $APP_DIR
RUN pip install -r requirements.txt

# Copy the entire project
COPY . $APP_DIR

# Execute `flask run --host=0.0.0.0` when you run the container
CMD ["flask", "run", "--host=0.0.0.0"]

You can see that the image is based on an existing Docker image which has Python installed.

There are also plenty of other “base” images for various languages/frameworks, such as node, go, and rust, as well as for Linux distributions, like debian, ubuntu, and opensuse/leap.

After that, we set the working directory to /opt/app where our application will live, install the dependencies with pip, copy the project to the image and set a command to run.

Before we continue, here’s the directory tree to ensure that everything is set up correctly.

my_first_docker
├── app.py
├── Dockerfile
└── requirements.txt

Now we can just run docker build to build the image.

$ docker build -t my_first_docker .
Sending build context to Docker daemon  4.608kB
Step 1/8 : FROM python:3
3: Pulling from library/python
0ecb575e629c: Already exists
7467d1831b69: Already exists
feab2c490a3c: Already exists
f15a0f46f8c3: Already exists
937782447ff6: Already exists
e78b7aaaab2c: Already exists
b68a1c52a41c: Pull complete
ddcd772f47ec: Pull complete
aef84dafa567: Pull complete
Digest: sha256:2c9e0841ab570f51f28891513c4d9b02e13954fa2453df909e0a6bbfbbaaaad3
Status: Downloaded newer image for python:3
 ---> 254d4a8a8f31
Step 2/8 : EXPOSE 5000
 ---> Running in 8bed4e787093
Removing intermediate container 8bed4e787093
 ---> 160558bc30bc
Step 3/8 : ENV APP_DIR=/opt/app
 ---> Running in fa0520760ded
Removing intermediate container fa0520760ded
 ---> 44e688dfb794
Step 4/8 : WORKDIR $APP_DIR
 ---> Running in 9e9b1f7ffd1a
Removing intermediate container 9e9b1f7ffd1a
 ---> 54658db77948
Step 5/8 : COPY requirements.txt $APP_DIR
 ---> 5a16c297f492
Step 6/8 : RUN pip install -r requirements.txt
 ---> Running in f4820ea014d0
Collecting flask==1.1.2
  Downloading Flask-1.1.2-py2.py3-none-any.whl (94 kB)
Collecting Jinja2>=2.10.1
  Downloading Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)
Collecting Werkzeug>=0.15
  Downloading Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
Collecting itsdangerous>=0.24
  Downloading itsdangerous-1.1.0-py2.py3-none-any.whl (16 kB)
Collecting click>=5.1
  Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting MarkupSafe>=0.23
  Downloading MarkupSafe-1.1.1-cp39-cp39-manylinux2010_x86_64.whl (32 kB)
Installing collected packages: MarkupSafe, Werkzeug, Jinja2, itsdangerous, click, flask
Successfully installed Jinja2-2.11.3 MarkupSafe-1.1.1 Werkzeug-1.0.1 click-7.1.2 flask-1.1.2 itsdangerous-1.1.0
Removing intermediate container f4820ea014d0
 ---> d704783885b1
Step 7/8 : COPY . $APP_DIR
 ---> c6a6b299293a
Step 8/8 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Running in 299dd52ed30a
Removing intermediate container 299dd52ed30a
 ---> 4fa03655dab1
Successfully built 4fa03655dab1
Successfully tagged my_first_docker:latest

Notes:

  • The -t flag signifies the image tag we want, essentially a name for the image we’re creating.
  • The . just says to build the image using the files of the current working directory.

You can see that it runs each step of the Dockerfile inside intermediate containers. In each step, the action is performed in an intermediate container, which produces an image layer that is used by the next step. This cycle continues until the last step, whose layer becomes the final image. After the image is built, it is tagged with the name we asked for.

You can confirm this by running docker history:

$ docker history my_first_docker
IMAGE          CREATED         CREATED BY                           SIZE
4fa03655dab1   2 minutes ago   #(nop)  CMD ["flask" "run" "--hos…   0B
c6a6b299293a   2 minutes ago   #(nop) COPY dir:81a839249fd7a77dd…   673B
d704783885b1   2 minutes ago   pip install -r requirements.txt      9.83MB
5a16c297f492   2 minutes ago   #(nop) COPY file:118b963683835467…   13B
54658db77948   2 minutes ago   #(nop) WORKDIR /opt/app              0B
44e688dfb794   2 minutes ago   #(nop)  ENV APP_DIR=/opt/app         0B
160558bc30bc   2 minutes ago   #(nop)  EXPOSE 5000                  0B
254d4a8a8f31   2 weeks ago     #(nop)  CMD ["python3"]              0B
<missing>      2 weeks ago     set -ex;   wget -O get-pip.py "$P…   8.01MB
<missing>      2 weeks ago     #(nop)  ENV PYTHON_GET_PIP_SHA256…   0B
<missing>      2 weeks ago     #(nop)  ENV PYTHON_GET_PIP_URL=ht…   0B
<missing>      2 weeks ago     #(nop)  ENV PYTHON_PIP_VERSION=21…   0B
<missing>      2 weeks ago     cd /usr/local/bin  && ln -s idle3…   32B
<missing>      2 weeks ago     set -ex   && wget -O python.tar.x…   55.8MB
<missing>      2 weeks ago     #(nop)  ENV PYTHON_VERSION=3.9.2     0B
<missing>      4 weeks ago     #(nop)  ENV GPG_KEY=E3FF2839C048B…   0B
<missing>      4 weeks ago     apt-get update && apt-get install…   18MB
<missing>      4 weeks ago     #(nop)  ENV LANG=C.UTF-8             0B
<missing>      4 weeks ago     #(nop)  ENV PATH=/usr/local/bin:/…   0B
<missing>      4 weeks ago     set -ex;  apt-get update;  apt-ge…   510MB
<missing>      4 weeks ago     apt-get update && apt-get install…   146MB
<missing>      4 weeks ago     set -ex;  if ! command -v gpg > /…   17.5MB
<missing>      4 weeks ago     set -eux;  apt-get update;  apt-g…   16.5MB
<missing>      4 weeks ago     #(nop)  CMD ["bash"]                 0B
<missing>      4 weeks ago     #(nop) ADD file:8f75f11b2bd2d50e5…   114MB

You can see how much space each layer takes by looking at the size. This is also how Docker saves on storage space: if a layer already exists on the machine, it is simply reused.

This means if you already pulled python:3 and want to pull another Docker image which uses python:3 as a base, it will only pull the extra layers introduced by the new image instead of everything.
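
You can see the caching in action by rebuilding the image. As long as requirements.txt hasn’t changed, Docker reuses the cached layers (it prints ---> Using cache for those steps) and only re-runs the steps from COPY . $APP_DIR onwards. This is exactly why the Dockerfile copies requirements.txt before the rest of the project:

$ # Change app.py, then rebuild: the pip install step is served from cache
$ docker build -t my_first_docker .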

The layers that are indicated as <missing> are simply the layers from the python:3 image we used as the base. You can also see how python:3 is made and notice that it uses debian as its base.

Now, let’s run our image as a container:

$ docker run -p 8000:5000 my_first_docker
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

This will run the my_first_docker image that you’ve created and performs port forwarding from port 8000 of your machine to port 5000 of the container.

Now, you should be able to visit the running website by going to http://127.0.0.1:8000 and see Hello from <container hash>! in your web browser.
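
You can also check from the terminal with curl. The hostname will be your container’s ID, so your exact output will differ:

$ curl http://127.0.0.1:8000
Hello from 6ded79639a17!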

On another terminal, you can see the running Docker containers with docker ps.

$ docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                    NAMES
6ded79639a17   my_first_docker   "flask run --host=0.…"   5 seconds ago   Up 4 seconds   0.0.0.0:8000->5000/tcp   nice_bassi

From here, you can see essential info such as container ID, image, status, ports and names.

If a container name is not given, Docker will generate a name typically composed of two random words.

You can also see that Docker performs port forwarding from 0.0.0.0:8000 to port 5000 of the container.

What if you want to run something inside the running container, like a shell to debug something? You can use docker exec to do this.

$ docker exec -it nice_bassi bash
root@6ded79639a17:/opt/app# ls
Dockerfile  __pycache__  app.py  requirements.txt
root@6ded79639a17:/opt/app#

To stop the container, you can use docker stop or press CTRL-C on the terminal with the container running.

$ docker stop nice_bassi
nice_bassi

If you check docker ps now, you will see that the list is empty, since docker ps doesn’t include stopped containers by default. To include them, you need to add the -a flag, which means “all”.

$ docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS                     PORTS     NAMES
6ded79639a17   my_first_docker   "flask run --host=0.…"   6 minutes ago   Exited (0) 3 seconds ago             nice_bassi

Now you can see that it is stopped. When stopped, the container is still resident on storage; you can start it again with docker start and check with docker ps to see that the container is running.
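
For example:

$ docker start nice_bassi
nice_bassi
$ # docker ps should now list nice_bassi as "Up" again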

To actually remove the container, you need to use docker rm. However, you need to stop it first.

$ docker rm nice_bassi
nice_bassi

You can export your created image to a tar archive with docker save.

$ docker save -o exported_my_first_docker.tar my_first_docker
$ ls -lh exported_my_first_docker.tar
-rw------- 1 yuki yuki 877M Mar 11 19:00 exported_my_first_docker.tar
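
The counterpart of docker save is docker load, which imports the archive on the other side. A sketch, using the tarball we just created:

$ docker load -i exported_my_first_docker.tar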

However, the common way to deliver Docker images is by using registries. There are various online registry services such as Docker Hub, Quay, GitHub Packages, and GitLab Registry.

This is an example of me pushing the image to my personal Docker registry.

$ # First, I need to login to my registry
$ docker login registry.yukiisbo.red
Username: yuki
Password:

Login Succeeded
$ # We need to tag the image to match the repository in the registry
$ docker image tag my_first_docker registry.yukiisbo.red/yuki/my_first_docker
$ # Now we can push it to the registry :-)
$ docker push registry.yukiisbo.red/yuki/my_first_docker

Congratulations, you have learned the basics of Docker and most of the commands that you will use!

In practice, the entire image creation process is automated using continuous integration. See Use Docker to build Docker images from GitLab for an example.

Another helpful command that I’ll introduce to you is docker inspect. As the name implies, it is used to inspect things in Docker, which means you can use it on images, containers, networks, and everything else.

Try running docker inspect my_first_docker to inspect the image, or pass a container name to inspect a container.
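
docker inspect prints a large JSON document. The --format flag takes a Go template, letting you pull out a single field, for example:

$ # The command the image runs by default
$ docker inspect --format '{{.Config.Cmd}}' my_first_docker
$ # The IP address of a running container (e.g. nice_bassi)
$ docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nice_bassi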

Stateless containers

Docker containers are treated as stateless. This means that any data created or modified inside a container is not permanent: it is tied to that container and is deleted along with it. All Docker containers are expected to be stateless.

This is in contrast to stateful, where all of the data is expected to be saved and permanent (persistent). With Docker containers, this is not the case: new or modified data is wiped clean once the container is removed, so you shouldn’t expect the data to be safe.

You might ask: “Well, how do I use Docker to run databases?”

The answer to this problem is to use Docker volumes or bind mounts.

Storing stateful data with bind mounts

Bind mounts allow “binding” a part of the host filesystem to the container’s filesystem. In simple terms, you give the container access to a folder on the host machine.

If you’ve used “shared folders” in virtual machines, you can think of them as effectively the same thing.

$ mkdir white_hole
$ docker run -it --rm -v "$(pwd)/white_hole:/opt/black_hole" ubuntu bash

You use the -v flag to perform a bind mount. The example above binds the white_hole folder created on the host to /opt/black_hole in the container. Note that -v expects an absolute host path, which is why we use $(pwd).

Once you stop the container, you can see that all of your data is safe, as it is stored in the white_hole folder instead of the container.
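
A quick way to convince yourself (real commands; message.txt is just an arbitrary example file):

$ docker run -it --rm -v "$(pwd)/white_hole:/opt/black_hole" ubuntu bash
root@<container id>:/# echo 'hello from the container' > /opt/black_hole/message.txt
root@<container id>:/# exit
$ cat white_hole/message.txt
hello from the container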

Storing stateful data with Docker volumes

Docker volumes are similar to bind mounts, but the folder is managed by Docker.

$ docker volume create another_white_hole
$ docker run -it --rm -v another_white_hole:/opt/black_hole ubuntu bash

You can see that it’s very similar to bind mounts, but instead of creating a folder in the filesystem ourselves, we ask Docker to create it.

In my experience, this is preferable to plain bind mounts because of its lower maintenance overhead; everything is handled through Docker.

To remove the volume, it’s a simple docker volume rm command away:

$ docker volume rm another_white_hole
another_white_hole

If you want to manipulate the contents (files, permissions, etc), you would need to launch a container which mounts the volume.
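
For example, to check a volume’s metadata and peek at its contents through a throwaway container:

$ docker volume inspect another_white_hole
$ docker run --rm -v another_white_hole:/data ubuntu ls -la /data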

The general rule of thumb I have is to use Docker volumes whenever possible, unless I actually need a bind mount (for reasons such as using another drive, or programs on the host which require manipulating the files, etc.).

Container networking

We can now run containers and store data persistently, but of course, a lot of modern server software requires other software to work (e.g. Wordpress requires a database).

You might think of creating a mega container with an init system which launches a web server, a database server, an SSH server, etc., but this is not how you should use Docker.

This is the major difference in methodology compared to other container technologies, where containers are used to separate environments for resource isolation or security.

In Docker, containers are used as a platform to deliver and run applications. While it is possible to use Docker containers for resource isolation and security, it is not the main issue which it aims to solve.

So, instead of the traditional approach, in Docker you typically run one piece of software per container.

Let’s run Wordpress as an example.

To run Wordpress, you need a MySQL/MariaDB database server and a place to store data for both the database server and Wordpress itself.

So, here’s the bill of materials:

  1. A container which runs Wordpress.
  2. Another container which runs a MySQL/MariaDB database server.
  3. A volume for the database to store the data.
  4. Another volume for Wordpress to store media.

Here, we’re missing a way for both containers to communicate with each other. In order to do just that, we will create a Docker network, specifically a bridge network.

Docker networks are bridge networks by default, which behave similarly to a router with your containers connected to it.

There are other types of Docker networks (typically referred to as drivers), such as host, overlay, etc., but we’ll only focus on bridge in this article.
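
You can list the existing networks and look at one in detail; a fresh Docker installation ships with bridge, host, and none:

$ docker network ls
$ docker network inspect bridge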

Now, let’s add a network to our bill of materials.

  1. A container which runs Wordpress.
  2. Another container which runs a MySQL/MariaDB database server.
  3. A volume for the database to store the data.
  4. Another volume for Wordpress to store media.
  5. A network for the containers to use for communication.

Perfect, let’s create the network and volumes.

$ docker network create cool_wordpress
cool_wordpress
$ docker volume create cool_wordpress_db_data
cool_wordpress_db_data
$ docker volume create cool_wordpress_site_data
cool_wordpress_site_data

Awesome, let’s start our database server. We’ll be using MariaDB which already has an official image on Docker Hub: https://hub.docker.com/_/mariadb.

Looking at the docs on the Docker Hub page, we need to declare settings via environment variables.

$ docker run -e MYSQL_DATABASE=wordpress \
           -e MYSQL_USER=wordpress \
           -e MYSQL_PASSWORD=wordpress \
           -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
           --network cool_wordpress \
           -v cool_wordpress_db_data:/var/lib/mysql \
           --name cool_wordpress_db \
           -d \
           mariadb

Notes:

  • The -e flags are used to define environment variables in the following form VARIABLE_NAME=VALUE.
  • The --network flag is used to define which network the container should attach to.
  • The --name flag is used to name the container instead of using a randomly generated name.
  • The -d flag is used to run the container in the background instead of “attaching” to it.

You can check whether the database is running with docker logs.

$ docker logs cool_wordpress_db
...
2021-03-15 16:23:08 0 [Note] Reading of all Master_info entries succeeded
2021-03-15 16:23:08 0 [Note] Added new Master_info '' to hash table
2021-03-15 16:23:08 0 [Note] mysqld: ready for connections.
Version: '10.5.9-MariaDB-1:10.5.9+maria~focal'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution

You can add the -f flag if you want to “follow” the logs instead of exiting. If it’s too long, you can also add --tail=10 to only show the last 10 lines.
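
For example:

$ docker logs -f --tail=10 cool_wordpress_db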

Wordpress also has an official image on Docker Hub: https://hub.docker.com/_/wordpress

Let’s start it up!

$ docker run -e WORDPRESS_DB_HOST=cool_wordpress_db \
           -e WORDPRESS_DB_USER=wordpress \
           -e WORDPRESS_DB_PASSWORD=wordpress \
           -e WORDPRESS_DB_NAME=wordpress \
           --network cool_wordpress \
           -v cool_wordpress_site_data:/var/www/html \
           -p 8081:80 \
           --name cool_wordpress_app \
           -d \
           wordpress

An important thing to note is that other containers can be reached via their names (see WORDPRESS_DB_HOST) as long as they’re within the same network.

Wordpress should now be running on port 8081: http://127.0.0.1:8081 and you should be greeted with Wordpress’ setup wizard.

Congratulations, you just launched an entire website with a database and stateful storage with Docker.
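
When you’re done experimenting, tearing it all down by hand takes a handful of commands (using the names we picked above). This is worth remembering for the next section:

$ docker stop cool_wordpress_app cool_wordpress_db
$ docker rm cool_wordpress_app cool_wordpress_db
$ docker network rm cool_wordpress
$ docker volume rm cool_wordpress_db_data cool_wordpress_site_data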

I’ve mentioned before that other container technologies use containers to create separate environments.

In Docker, we achieve a similar effect with Docker networks to isolate different resources. With Docker networks, you can group containers together within the same network so they’re able to reach other containers as resources (i.e database servers, object storage servers, etc).

A common use for this is having different networks for different environments when developing an application. In this case, you would have three networks: one each for development, staging, and production.

Another common use is to isolate different “services” such as your Wordpress website, your Nextcloud server, and your Minecraft server. You don’t want your Wordpress website nor Minecraft server to communicate with the database server of your Nextcloud. So, you isolate them by having three networks: one for Wordpress, another one for Nextcloud, and the last one for Minecraft.

Orchestrating Docker containers

At this point, you might find the entire docker volume create, docker network create, and docker run process to be very tedious and annoying. Not to mention you have to stop, restart, and delete containers one by one.

Well, there’s a tool called Docker compose which aims to make it less annoying by using YAML files to declare a “stack”.

You can think of a stack as the set of containers, volumes, and networks that are required to run your application.

Reusing our Wordpress example, this is what a docker-compose.yml file looks like:

version: "3"

services:
  db:
    image: mariadb
    environment:
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
    volumes:
      - db_data:/var/lib/mysql
  app:
    image: wordpress
    ports:
      - 8082:80
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - site_data:/var/www/html

volumes:
  db_data: {}
  site_data: {}

Simply place the example above inside a folder named awesome_wordpress as a file named docker-compose.yml.

To launch the site, you run docker-compose up:

$ docker-compose up -d
Creating network "awesome_wordpress_default" with the default driver
Creating volume "awesome_wordpress_db_data" with default driver
Creating volume "awesome_wordpress_site_data" with default driver
Creating awesome_wordpress_app_1 ... done
Creating awesome_wordpress_db_1  ... done

The -d flag is used here so it doesn’t attach my terminal to the containers’ logs. You can view the logs with docker-compose logs.

You should be able to access the site on port 8082: http://127.0.0.1:8082.

To stop the site, you use docker-compose stop:

$ docker-compose stop
Stopping awesome_wordpress_db_1  ... done
Stopping awesome_wordpress_app_1 ... done

To remove the containers and network, you use docker-compose down:

$ docker-compose down
Removing awesome_wordpress_db_1  ... done
Removing awesome_wordpress_app_1 ... done
Removing network awesome_wordpress_default

If you want to remove the volumes as well, simply add the -v flag.

$ docker-compose down -v
Removing network awesome_wordpress_default
WARNING: Network awesome_wordpress_default not found.
Removing volume awesome_wordpress_db_data
Removing volume awesome_wordpress_site_data

There you go, much simpler, right?

There are a couple details that I’d like to point out quickly:

  • All resources (containers, volumes, and networks) have awesome_wordpress at the beginning of their names.

    • This prefix is called a project name, and Docker compose uses the name of the folder where docker-compose.yml resides by default.
  • Without an explicit declaration of the network, Docker compose creates a default network for you and attaches all of the containers to it.

  • Both volume declarations are defined as {}. This means the defaults are used, which works fine for our needs; we don’t need any specific config.

    • You can leave them empty but I prefer having {} so it’ll be explicit to people reading the file.
  • You don’t need to use the entire container name (awesome_wordpress_db_1) when communicating with another container. You can use the service name instead.

    • This is possible because containers can have aliases within a network, and Docker compose registers the service name as one.

If you look at the container names, they have _1 at the end (a suffix). This is because Docker compose uses the notion of services instead of addressing containers directly.

“Why is this important?”, you might ask.

This is because you can “scale” services, meaning launching multiple containers for a service. This is called “horizontal scaling”, as you scale by having multiple instances of a component. It is very helpful when you want to share the load across multiple web servers.

While I’m not certain how well Wordpress scales horizontally, we can do it with docker-compose scale.

$ docker-compose scale app=4

This will fail, as host port 8082 can only be mapped to a single container, so further configuration is required (in this case, setting up a reverse proxy/load balancer) which we will not cover here.

While we’re here, let’s say you want to change the port for Wordpress from 8082 to 80 (assuming port 80 is free).

Simply change 8082:80 to 80:80:

version: '3'

services:
  db:
    image: mariadb
    environment:
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
    volumes:
      - db_data:/var/lib/mysql
  app:
    image: wordpress
    ports:
      - 80:80
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - site_data:/var/www/html

volumes:
  db_data: {}
  site_data: {}

Update it with docker-compose up.

$ docker-compose up -d
Recreating awesome_wordpress_app_1 ... done

You can see that it only updates app and leaves db alone. Cool, right?

Congratulations, you just orchestrated an entire Wordpress website with the power of Docker compose!

You have seen two methods to deploy things with Docker: Docker compose and docker run.

Personally, I typically define stacks and deploy things using Docker compose and only use docker run when I need to make one-off containers for testing, debugging, etc.

I like Docker compose more mainly because it’s declarative as I simply need to define what I want instead of writing long commands. It’s just much simpler and fits the task of long running services.

Closing

This is the end of this article.

While I could go on about Docker’s internals among other things, I think that would be better as a separate post, or something you can simply explore on your own; besides, practice is the best teacher.

There are a couple of small things which I haven’t covered but I think it’s better for you to find those things on your own as you need them.

I recommend checking out Compose file version 3 reference for various other things that you can do with Docker.

Remarks

This is an addition I made on the 13th of August while writing an article on my series about Nix. I should’ve done it earlier but only realised it now.

I would like to thank those who have reviewed this article and given me their input:

  • Lucy Firman (@lxcyp on GitHub)

If you’re one of those who have reviewed this article and would like to be credited, please let me know. :)