Continuous Blog Delivery Part 1

There's no such thing as over-engineering.

Posted on 10 February 2017

This blog was started two years ago and during that time it has been running on the exact same configuration: Nginx serving static content, running directly on a TransIP VPS. While updating the blog itself is really straightforward (SSH into the machine and run an update script), it is now 2017 and having to do anything manually is of course completely unacceptable! In this series I am going to document the process of setting up a true CD workflow for my blog using Docker, Traefik and Jenkins.

In this first episode I will show you how to get an Nginx instance running behind a Traefik reverse proxy, all inside docker containers. In the next episodes I will show you how to add Jenkins in a docker container to the set-up and automatically trigger builds. The second, third and fourth episodes in this series are also online.

Introduction

So let’s start with a plan. What are the requirements? Well, first of all I want to have everything on my host running in docker containers. This way I can create, update or destroy something much more easily than before. Secondly I want builds to be triggered automatically whenever I push an update to Bitbucket. I also want to be able to easily deploy 'test' projects on my system and have them available via a subdomain. Oh, and having it all behind HTTPS is a must as well, since Google will soon penalize sites that only serve HTTP and mark them as not secure.

So the steps will be:

  • Dockerize my blog

  • Set up the Traefik reverse proxy as a docker container

  • Have it forward traffic to my blog’s container

  • Move this setup to a fresh VPS

  • Set up Jenkins so it can build and deploy docker containers

  • Trigger builds from Bitbucket pipelines

  • Add a build for a Java service and have it served under a subdomain

  • Configure Traefik for LetsEncrypt

That should keep me busy for a while!

Dockerizing My Blog

So to be able to deploy my blog I first need to bake a Docker image of it. For now I will create the container image manually; later I will automate this process. To be able to test the next steps it is convenient to have a ready-made working image of my blog to run as a docker container.

My blog is generated from AsciiDoc using JBake. Baking the blog is done by simply running jbake, which outputs the static content to the output directory. Packaging this into a runnable docker container is simple with a ready-made nginx base image. So I create a Dockerfile in the root of my blog project:

FROM nginx
COPY output /usr/share/nginx/html

And then I run docker build -t nielsutrecht/app-blog:latest . in that directory. Docker reports that it has built the image and I can now start it with docker run --name app-blog -d -p 80:80 nielsutrecht/app-blog, exposing the container's internal port 80 externally on port 80. Browsing to http://localhost shows the index page of my blog. Neat!

Now I also want to push it to the cloud so I can easily get it onto my VPS later:

docker push nielsutrecht/app-blog:latest
Note
If you are getting an authorization error make sure you created the repository on docker.io and are logged in (using docker login).

Now I’m going to stop it again since we will be deploying it in a slightly different manner via docker-compose next:

docker stop app-blog

Traefik Reverse Proxy

I don’t use my VPS just for my blog though; it gets used for a lot of experiments and temporary projects, most of which are Java-based REST services with a simple AngularJS front-end. Currently I’m using a single Nginx both for serving static content and as a reverse proxy for these services. This means that for every one of these services I have to manually create a config file, create a symlink for it and then reload Nginx. Much too cumbersome! So can we automate this?
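For context, each of those manual Nginx configs looked roughly like this (a hypothetical sketch; the actual server names and upstream ports varied per service):

```nginx
# /etc/nginx/sites-available/some-service.conf (hypothetical example)
server {
    listen 80;
    server_name some-service.niels.nu;  # placeholder subdomain

    location / {
        # Forward requests to the Java service listening on a local port
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After writing such a file you symlink it into sites-enabled and reload Nginx, for every single service. That is exactly the manual work I want to get rid of.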

Of course we can! I recently stumbled upon Traefik. It is "a modern HTTP reverse proxy and load balancer made to deploy microservices with ease". It can automatically pick up Docker (or Kubernetes) deployments and create proxy rules for them. Awesome! It can also automatically refresh LetsEncrypt certificates; something I want to add in the near future.
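As a preview of that future step, Traefik 1.x enables LetsEncrypt through an [acme] section in its configuration. A hypothetical sketch (the e-mail address is a placeholder, and none of this is active in this part's setup):

```toml
# Hypothetical Traefik 1.x ACME configuration (not used in this part)
[acme]
email = "you@example.com"   # placeholder contact address
storage = "acme.json"       # where Traefik persists the certificates
entryPoint = "https"        # terminate TLS on the HTTPS entry point
```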

I originally started with the tutorial on katacoda.com but unfortunately it seems to be rather outdated. The examples here follow a somewhat different format, using separate docker-compose files for the different parts of the application, and also use the newer version 2 of the compose file format. Since that has my preference I’m going to follow that format. So let’s start with a docker-compose.yml file for Traefik itself:

Setting up Traefik

version: '2'

services:
  proxy:
    image: traefik
    command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
    restart: always

networks:
  webgateway:
    driver: bridge
Note
The docker.sock mount is how Traefik connects to Docker and sees new or changed deployments.
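For reference, the command-line flags above map onto a traefik.toml roughly like the following (a sketch based on the Traefik 1.x configuration format; we mount /dev/null over /traefik.toml precisely so that the flags are the only configuration):

```toml
# Rough equivalent of: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
logLevel = "DEBUG"

# The 'web' provider serves the dashboard/API on port 8080
[web]
address = ":8080"

# The Docker back-end: watch the daemon and generate proxy rules from labels
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "docker.localhost"
watch = true
```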

We can now start Traefik itself with docker-compose up -d:

$ docker-compose up -d
Creating network "traefik_webgateway" with driver "bridge"
Creating traefik_proxy_1

On http://localhost:8080/dashboard/ we can see the Traefik dashboard. It currently should show only the Traefik front-end and back-end.

Note
In Traefik terms a 'front-end' is the load balancer that receives traffic based on certain rules (like Host: niels.nu) and then forwards it to one or more 'back-ends': the docker containers handling the requests.

If I curl localhost I get a 404 response because we don’t have anything serving content yet.

$ curl http://localhost
404 page not found

Testing and scaling

My next step is to test the creation and scaling of services. To do that I’m going to create a new docker-compose.yml file in a separate "whoami" folder. Here I’m going to create a deployment for a simple echo server that just responds with the header and its hostname:

version: '2'

services:
  whoami:
    image: emilevauge/whoami
    networks:
      - web
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.rule=Host:whoami.docker.localhost"

networks:
  web:
    external:
      name: traefik_webgateway

Again I’m going to start it with docker-compose:

$ docker-compose up -d
Creating whoami_whoami_1

In the Traefik dashboard I now also automatically see this service pop up. Great! So what happens is that Traefik is keeping an eye on the Docker daemon and whenever it sees a container started with the correct "traefik.*" labels it interprets them as a container that needs to be added to the list of applications it proxies. The "Host:" rule instructs Traefik that the whoami back-end should receive all requests for the 'whoami.docker.localhost' host. So let’s see if we can talk to it:

$ curl -H Host:whoami.docker.localhost http://127.0.0.1
Hostname: c09333ce5532
IP: 127.0.0.1

Awesome! It works! Currently it only has one container though; let’s scale it up to two with docker-compose scale:

$ docker-compose scale whoami=2
Creating and starting whoami_whoami_2 ... done

If we now call the service twice it will round-robin over the two 'whoami' back-ends:

$ curl -H Host:whoami.docker.localhost http://127.0.0.1
Hostname: 1cb20925032b

$ curl -H Host:whoami.docker.localhost http://127.0.0.1
Hostname: c09333ce5532
Tip
Instead of using curl you can also do this from your browser by using for example Requestly to spoof your Host header!

Adding my blog

To add my blog as a service I need to create another docker-compose.yml file:

version: '2'

services:
  blog:
    image: nielsutrecht/app-blog
    networks:
      - web
    labels:
      - "traefik.backend=blog"
      - "traefik.frontend.rule=Host: localhost, niels.nu, www.niels.nu, nibado.com, www.nibado.com"
      - "traefik.port=80"
    restart: always

networks:
  web:
    external:
      name: traefik_webgateway

It’s almost identical to the whoami compose file, except that we’re using a different name and image and have a different Host rule. We also need to instruct Traefik to connect to port 80, because Nginx listens on both the HTTP and HTTPS ports and Traefik would otherwise favour the highest port.

Note
Host:{containerName}.{domain} is the default rule, so you can’t just leave it out!

Let’s start it:

$ docker-compose up -d
Creating blog_blog_1

When I now curl -H Host:niels.nu http://127.0.0.1 I see my blog homepage. Neat! We can even scale it up if we want:

$ docker-compose scale blog=2
Creating and starting blog_blog_2 ... done

A new home

We have the configuration for the proxy and the blog image ready, so now it’s time to move it to the web! I ordered a new VPS with TransIP, which gets delivered immediately. I briefly considered (and have used) CoreOS, but it seems to favor a set-up where you use external network-attached storage for persistent data. So I’ve opted to again go for good ol' trusted Ubuntu 16.04 LTS.

Warning
Ubuntu does not come with Docker pre-installed. Make sure you use this guide to install docker-compose: APT installs an old version that doesn’t understand version 2 docker-compose.yml files!

Deploying containers

So I’ve copied the Traefik docker-compose.yml file over to my VPS so I can now start it:

$ cd traefik/
$ docker-compose up -d
Creating traefik_proxy_1

curl on localhost:8080 indeed shows it running and if I go to http://vps2.niels.nu:8080/ I get the Traefik dashboard. Neat!

Let’s now grab my blog as well:

$ cd blog/
$ docker-compose up -d
Creating blog_blog_1

Visiting http://vps2.niels.nu I immediately see my blog up and running. Comparing it to the loading speed of http://niels.nu (the current version), both appear to be exactly the same speed. By far the slowest to respond is the external Google Analytics script, which should not hinder the user anyway.

Conclusion

This concludes the first part of this series. I’ve completed the first four goals and now have my blog up and running, fully dockerized, on a new host! In the next episode I am going to automate the builds and deployments with Jenkins and Bitbucket pipelines.