Deploying a Dockerized app with Docker Compose

September 05, 2020

Tags: docker, docker compose

When developing a Dockerized app, Docker Compose is a useful tool that takes the hassle out of orchestrating the different micro-services. Here I outline the basics of using Docker Compose to develop an app as a bundled collection of Docker services, and how to take this collection and deploy it to a remote Docker host.

Docker Compose makes it easy to write an orchestration configuration. Once this file has been created, it can be used to package up a set of virtually bundled Docker containers that can be launched with a single command. With the orchestration written, it would be nice to also use it to manage production deployments. Until recently, external tools like Docker Machine were needed to do this, but further development on Docker Compose has integrated much of the Docker Machine functionality directly into Compose itself. We’ll look at how to use that functionality below.

First up, though, we have to create the compose file docker-compose.yml, which sits at the root of the app directory:

services:
  app:
    container_name: app
    ports:
      - 5000:5000
    environment:
      PYTHONUNBUFFERED: 1
    build:
      context: ./app
      dockerfile: Dockerfile

Here, we describe a package that contains a single service called app.

  • The first attribute is container_name, which sets the internal name that we use to refer to this particular container inside the virtual Docker network.
  • Below that we have the ports attribute, which defines the host-to-container port mapping for that particular container.
  • The environment section lists the environment variables that are going to be set in the container (PYTHONUNBUFFERED: 1 is a nice one to have if we are running multiple containers in parallel, since it makes Python flush its output immediately instead of buffering it).
  • Finally, the build section describes where to find the Dockerfile that is used to build the container image; a minimal sketch of such a Dockerfile follows below.
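
For completeness, here is a minimal sketch of what the Dockerfile in ./app might look like. The base image, requirements.txt and app.py are assumptions for illustration, not the actual app:

FROM python:3.8-slim

WORKDIR /app

# Install dependencies first, so this layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code; app.py is assumed to serve on port 5000
COPY . .
CMD ["python", "app.py"]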

The point of Docker Compose is that you can orchestrate several containers from a single compose file. This way they can be launched and managed as a single application, instead of spinning up and shutting down each micro-service individually. Here, I leave out the secondary containers to avoid this turning into a code dump; a sketch of what a multi-service file might look like follows below. The number of containers launched from a single compose file does not otherwise make any difference to the Docker CLI.
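
To give an idea, a multi-service version of the file above might look something like this. The worker service and its directory are assumptions for illustration, while the Redis image matches the docker ps output shown later in this article:

services:
  app:
    container_name: app
    ports:
      - 5000:5000
    environment:
      PYTHONUNBUFFERED: 1
    build:
      context: ./app
      dockerfile: Dockerfile
  worker:
    container_name: worker
    environment:
      PYTHONUNBUFFERED: 1
    build:
      context: ./worker
      dockerfile: Dockerfile
  redisinstance:
    container_name: redisinstance
    image: redis:5.0.6-alpine
    ports:
      - 6379:6379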

When the compose file has been created, launching the app for debugging is as simple as:

$ docker-compose up

This single command takes care of all the setup, like launching the individual containers and the routing between them, so we can focus on coding the application. In this particular case, my web app will be available on localhost port 5000, courtesy of the port mapping above.
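
As a quick sanity check, we can hit the endpoint from another terminal (assuming the app serves an /index route, as its logs suggest later in this article):

$ curl http://localhost:5000/index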

To take this a step further, we need to delineate our environments. For this purpose, the Docker CLI includes Docker contexts, a feature that allows a single Docker CLI to manage multiple Docker nodes or clusters. When running docker-compose up above, we launched the app locally, exposing the endpoint on localhost:5000. This makes development a breeze, but Docker contexts supercharge this functionality by allowing us to launch apps on a remote node instead of localhost.

A Docker context needs to be set up, and some form of authorization is required before this will actually work. But before we talk about that, I want to show how easy deployments are once this is all in place. Basically, it’s a two-step process: first change the context, then deploy. Like this:

$ docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *           Current DOCKER_HOST based configuration    unix:///var/run/docker.sock                         swarm
remote                                                        ssh://root@167.71.37.221
$ docker context use remote
remote
Current context is now "remote"
$ docker-compose up
Starting app           ... done
Attaching to app
app       | Running on http://0.0.0.0:5000 (CTRL + C to quit)
app       | [2020-09-05 14:21:47,004] Running on 0.0.0.0:5000 over http (CTRL + C to quit)
app       | INFO:quart.serving:Running on 0.0.0.0:5000 over http (CTRL + C to quit)
app       | [2020-09-05 14:21:56,448] xxx.xxx.xxx.xxx:xxxxx GET /index 1.1 200 612 18871
app       | INFO:quart.serving:xxx.xxx.xxx.xxx:xxxxx GET /index 1.1 200 612 18871

The logging of app is passed on to the local terminal, and as we can see, Quart (a Python web server) still thinks it’s running on 0.0.0.0:5000. Which, of course, it is, if only virtually. Thanks to the port mapping from the compose file, if we now visit the URL of our remote host on port 5000, the request will be passed through to port 5000 on the container.
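
In other words, with the IP of the remote host stored in $REMOTEIP (a placeholder variable, used again below), the same sanity check now works against the remote deployment:

$ curl http://$REMOTEIP:5000/index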

When deploying, it might make more sense to launch the containers in detached mode. This can be done by adding the -d flag:

$ docker context use remote
remote
Current context is now "remote"
$ docker-compose up -d
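
In detached mode the logs are no longer streamed to the terminal, but they can still be tailed through the same context with docker-compose logs:

$ docker-compose logs -f app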

Taking a step back: before being able to switch contexts like this, the contexts themselves need to be created. This requires a Docker host to be running remotely, and access to this remote host needs to be authenticated in some way. I prefer having SSH keys set up, so I don’t need to type my password every time I deploy. We’ll look at that in another article; for now, we’ll assume these credentials are already in place.

To create a remote Docker context, get the IP of the remote server (here stored in the variable $REMOTEIP) and run the command below:

$ docker context create remote --docker "host=ssh://username@$REMOTEIP"

Docker contexts are a general concept that is not only useful with regard to Docker Compose. It is also possible to run other Docker commands directly against a remote server, like this:

$ docker --context=remote ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                    NAMES
e72f8c46601a        dockerdo_worker      "python worker.py"       About an hour ago   Up 26 minutes       0.0.0.0:5001->5000/tcp   worker
d5015561e41f        dockerdo_api         "python app.py"          About an hour ago   Up 26 minutes       0.0.0.0:80->5000/tcp     app
a4335460e5b2        redis:5.0.6-alpine   "docker-entrypoint.s…"   About an hour ago   Up 26 minutes       0.0.0.0:6379->6379/tcp   redisinstance

One very important caveat (as of the time of writing) is that you need to make sure that the SSH configuration on your remote VM allows for a larger MaxSessions than the default of 10, since Docker Compose opens a number of SSH sessions in parallel. According to this GitHub issue, the following is a temporary fix that allows Docker Compose to do its job (the steps are condensed into a one-liner after the list):

  • Remote into the VM
  • Open the file /etc/ssh/sshd_config on the host
  • Find the line that starts with #MaxSessions. Remove the pound character and set it to MaxSessions 500.
  • Finally run: sudo service ssh restart
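
For convenience, the same steps condensed into a single one-liner to run on the remote host. It assumes GNU sed and that the MaxSessions line looks like it does in a default sshd_config:

$ sudo sed -i 's/^#\?MaxSessions.*/MaxSessions 500/' /etc/ssh/sshd_config && sudo service ssh restart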
