Introduction
Pretty, isn't it? Last week my birthday present arrived: a Raspberry Pi 3 with two Raspberry Pi Zeros (v1.3) and a ClusterHat. The Pi 3, the ClusterHat and one Zero I got from the Pimoroni shop, while the second Zero was supplied by The Pi Hut: both shops only allow one Zero per customer due to the huge demand. Fortunately one of my colleagues volunteered to also order two Zeros for me, bringing the total to four!
So yes, that's five computers. Why? To build myself a cluster of course! Or in Raspberry terms: a Bramble! The ClusterHat shown acts like a USB hub that lets me plug up to four Zeros into my Pi 3. With OTG networking set up on the custom ClusterHat images, the Zeros are accessible over Ethernet (they each get their own IP address) via the Pi 3's Ethernet connection.
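The ClusterHat images come with OTG networking pre-configured, but for reference, on a stock Raspbian image the USB gadget Ethernet mode is typically enabled with two boot-partition tweaks like these (illustrative only; the actual ClusterHat setup may differ):

```
# In config.txt on the Zero's boot partition:
dtoverlay=dwc2

# In cmdline.txt, appended after rootwait (cmdline.txt is a single line):
modules-load=dwc2,g_ether
```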
It just looks awesome to have a Bramble of five Pis in such a small package! In this post I'll show you how I set up Docker Swarm on the Bramble.
What didn’t work
So first of all: I got the hardware on Friday, it's now Sunday, and I'm writing this relatively short guide. So what happened in between? Well, I mainly got stuck trying to get Kubernetes to work. Let me first say that I totally love Kubernetes; we use it at work and it's awesome, when someone has set it up for you. Setting it up yourself is quite a frustrating endeavour, and while I haven't given up on it, I am first going to focus on Swarm.
When you want to get Kubernetes working on ARM, the easiest method is to use the Hypriot images. They come with Docker 1.12 installed and there are guides available on how to set up Kubernetes. The problem is, I'm not using standard Ethernet connections; I use OTG. The Hypriot images seem to be missing 'stuff' (what exactly, I do not know) that would let me enable it.
The other route, getting it installed on the ClusterHat images (which are basically just pre-made Debian Jessie images), didn't work either. I first got unlucky by running into this guide on how to install Docker. It's complex and unnecessary; installing Docker is much easier now. It took me a while to figure that out though!
Note: You really should check out this blog by Alex Ellis! It has tons of up-to-date information!
So while I 'sort of' got Kubernetes installed, kubeadm unfortunately seems to refuse to work with Docker 1.13. It also seems to 'need' things on my Pi that aren't enabled in the ClusterHat image. Why Kubernetes needs these while Docker Swarm doesn't, I do not know. Later I am going to attempt to install Kubernetes manually instead of using kubeadm. If you don't see a blog post about it appear, it didn't work ;)
So enough about what I'm not going to do; let's move on to the fun stuff!
ClusterHat
To get things up and running you should first write the ClusterHat images to your micro SD cards. The ClusterHat site has an excellent guide on this which I'm not going to repeat. I personally just downloaded the prefab images for the controller and Zeros 1-4. So the steps I took were:

- Write the images to the Raspberry Pi 3 / Zeros
- Write an empty file named ssh to the boot volume (SSH is disabled by default)
- Boot up (the Raspberry updates itself)
- Log in with ssh pi@<ip> and password raspberry
- Configure the Pi (sudo raspi-config) to: use the entire disk, change the password, change the video memory to 16MB (the default sets aside 64MB) and change the hostname
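The 'write an ssh file' step can be scripted. Here's a minimal sketch, assuming your OS mounts the card's boot partition somewhere like /Volumes/boot (macOS) or /media/$USER/boot (Linux); the mount path is whatever your system uses, not something fixed:

```shell
#!/bin/sh
# enable_ssh: drop an empty file named `ssh` into the card's boot
# partition; Raspbian sees it on first boot and enables the SSH daemon.
enable_ssh() {
  boot_mount="$1"
  if [ ! -d "$boot_mount" ]; then
    echo "no boot partition mounted at $boot_mount" >&2
    return 1
  fi
  touch "$boot_mount/ssh"
  echo "SSH enabled on $boot_mount"
}
```

Run it once per flashed card, e.g. `enable_ssh /Volumes/boot`.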
Keep in mind that the ClusterHat by default has the Zeros in the OFF state. This is to prevent them from creating a dip in your power supply and causing instability. The steps above work for all the Pis, but you'll need to do the controller first to be able to enable and reach the Zeros:
$ clusterhat on
Turning on P1
Turning on P2
Turning on P3
Turning on P4
The Pis all acquire IP addresses via DHCP. I can easily see the assigned IP addresses in my router because it also shows the hostnames. If you don't have this option you either need to guess or angrily scan for IPs. Also keep in mind that your router might reassign IPs. This is especially important later when you start working with Docker Swarm; a swarm worker won't be able to find the manager if its IP changed!
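One way to stop the router from reassigning the controller's IP is to pin a static address on the Pi 3 itself. On Raspbian Jessie this is usually done in /etc/dhcpcd.conf; the addresses below are examples, substitute your own network:

```
# /etc/dhcpcd.conf on the controller (example addresses)
interface eth0
static ip_address=192.168.1.10/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```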
Docker
Note: Before you start, you might want to read 5 things about Docker on Raspberry Pi.
Fortunately installing Docker is really easy now that Docker officially supports ARM. You can install it with a simple curl -sSL https://get.docker.com | sh on your Raspberry. The process is the same for the controller and the Zeros. The installation script will also tell you to run sudo usermod -aG docker pi afterwards so that the 'pi' user can run Docker without needing sudo.
Note: This won't take effect until you've logged in again.
docker version should now show something like this:

Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 07:25:30 2017
 OS/Arch:      linux/arm

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 07:25:30 2017
 OS/Arch:      linux/arm
 Experimental: false
Once you have installed Docker on your controller you will probably notice that you can't reach your Zeros anymore. As a security measure, Docker 1.13 sets the IPTables FORWARD policy to DROP by default so that Docker containers aren't reachable from the outside world. Running sudo iptables -P FORWARD ACCEPT sets the policy back to ACCEPT. This would be unwise in a production cluster, but for a Bramble that's mainly used for demo purposes it's fine. Keep in mind that Docker re-applies the DROP policy every time your Pi 3 boots, so you'll have to run this command again after a reboot!
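To avoid typing the iptables command after every boot you can re-apply it automatically. One minimal approach (a sketch; there are cleaner options such as iptables-persistent) is to add the rule to /etc/rc.local before the exit 0 line:

```
# /etc/rc.local (excerpt) -- runs at the end of boot on Raspbian
# Allow forwarding again after Docker sets the policy to DROP:
iptables -P FORWARD ACCEPT

exit 0
```

Note that if the Docker daemon happens to start after rc.local it may flip the policy back, in which case a cron @reboot entry with a short sleep is a common workaround.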
Docker Swarm
Note: This is based on another wonderful blog post by Alex Ellis.
What's really nice is that Docker 1.12 and up come with Swarm included. You don't have to install anything else on your Raspberry; it's already there. So first we'll check if your Pi is already part of a swarm (it shouldn't be) with docker info | grep Swarm; this should report "Swarm: inactive". If it says 'active', the Pi is already part of a swarm and you should type docker swarm leave to remove it.
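If you want to script this check across all five nodes, a tiny helper that extracts the Swarm line from the docker info output could look like this (a sketch; the function only parses text on stdin, so you pipe docker info into it):

```shell
#!/bin/sh
# swarm_state: read `docker info` output on stdin and print the value of
# the "Swarm:" line, i.e. "active" or "inactive".
swarm_state() {
  grep 'Swarm:' | head -n 1 | awk '{print $2}'
}
```

Usage on a node: `docker info | swarm_state`.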
On the controller we are going to initialize the swarm so we can let our workers connect to it. This is done with docker swarm init. You are probably going to get a message that there are multiple IP addresses available, so you need to tell it which IP address to advertise with docker swarm init --advertise-addr <ip>. After a while it should report that the swarm is initialized and that you can connect workers; it will even print the command for you:
docker swarm join \
    --token <random token docker generated> \
    <controller ip>:<port>
Note: If you cleared the message or need the token again later, you can get it with docker swarm join-token worker on the manager.
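With four Zeros it quickly gets tedious to paste the join command by hand. Here's a sketch of a loop that does it over SSH; the hostnames, token and manager address are placeholders for your own values, and the actual SSH loop is left commented out so you can review the command first:

```shell
#!/bin/sh
# Build the join command once, then run it on every worker over SSH.
TOKEN="${TOKEN:-SWMTKN-1-placeholder}"    # from `docker swarm join-token worker`
MANAGER="${MANAGER:-192.168.1.10:2377}"   # the controller's advertised address

join_cmd() {
  echo "docker swarm join --token $TOKEN $MANAGER"
}

# Uncomment to run against your Zeros (hostnames are illustrative):
# for host in p1.local p2.local p3.local p4.local; do
#   ssh "pi@$host" "$(join_cmd)"
# done
join_cmd
```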
So now SSH into your Zeros and just copy-paste the command. Assuming you also installed Docker there correctly, it should report after a while that it is connected to the leader (the Pi 3). If I type docker node ls on the controller it should report all the nodes as Ready after a while:
$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
08iueg6qv6h4964wf7ahvc4ov    donny     Ready   Active
4ktue0dbx9quqjpb7sucskg1f    raph      Ready   Active
6xb470pw6slr6maxzcusbsx7d *  splinter  Ready   Active        Leader
f4n5p4b5omih0wvlm5qiuswrs    leo       Ready   Active
pvwt9odee16ctl2ui4fhiribo    mikey     Ready   Active
I now have a Bramble running a five-node Docker Swarm! So with all this 'massive' computing power at our disposal, what are we going to do next?
Deploying containers
I'm going to deploy the last example on this page on my setup. It shows a 'real' application with a database (Redis) that is shared by two simple web applications. In that example we first need to create an overlay network that is used between the web app and the Redis database. We can then create the Redis and web app services.

These can take quite a long time to start, so don't worry if they seem 'stuck' for a while. Especially the first time you deploy something it will take quite a bit of time to download and unpack the Docker images. We need to issue three commands:
$ docker network create --driver overlay --subnet 20.0.14.0/24 armnet
$ docker service create --name redis --replicas=1 --network=armnet alexellis2/redis-arm:v6
$ docker service create --name counter --replicas=2 --network=armnet --publish 3000:3000 alexellis2/arm_redis_counter
In the counter service we use two replicas for now.
Warning: You need ARM-specific images to deploy on your Raspberry Pi. Deploying official non-ARM Docker images on your Raspberry won't work!
After a while both services should be up and running:
$ docker service ls
ID            NAME     MODE        REPLICAS  IMAGE
3k3rcebdxpxp  counter  replicated  2/2       alexellis2/arm_redis_counter:latest
melg4lzwro15  redis    replicated  1/1       alexellis2/redis-arm:v6
Docker has, as we instructed, created two instances of the 'counter' service (a Node.js web application) and one Redis instance. We can check which nodes a service is running on:
$ docker service ps counter
ID            NAME       IMAGE                                NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
dmnw6re5h31v  counter.1  alexellis2/arm_redis_counter:latest  mikey  Running        Running 15 seconds ago
s7b7h12uruzy  counter.2  alexellis2/arm_redis_counter:latest  donny  Running        Running 12 seconds ago
We can now curl the web application, and each request should increment and report the counter:
$ curl -4 localhost:3000/incr
{"count":13}
$ curl -4 localhost:3000/incr
{"count":14}
We can also scale the service to more workers if we want:
$ docker service scale counter=4
counter scaled to 4
$ docker service ls
ID            NAME     MODE        REPLICAS  IMAGE
3k3rcebdxpxp  counter  replicated  4/4       alexellis2/arm_redis_counter:latest
melg4lzwro15  redis    replicated  1/1       alexellis2/redis-arm:v6
We can also check on which nodes the counter service runs:
$ docker service ps counter
ID            NAME       IMAGE                                NODE   DESIRED STATE  CURRENT STATE           ERROR  PORTS
dmnw6re5h31v  counter.1  alexellis2/arm_redis_counter:latest  mikey  Running        Running 2 minutes ago
s7b7h12uruzy  counter.2  alexellis2/arm_redis_counter:latest  donny  Running        Running 2 minutes ago
pap6i8ufk5dm  counter.3  alexellis2/arm_redis_counter:latest  leo    Running        Running 17 seconds ago
8x89pmd5epty  counter.4  alexellis2/arm_redis_counter:latest  raph   Running        Running 18 seconds ago
Nice! That they all happen to run on the four Zeros (leo, donny, mikey and raph) is simply because of the order they were started in; Docker Swarm will by default schedule a new task on the node with the fewest running tasks. If you want to control on which nodes a service gets started, you can do so using, for example, labels.
Tip: An easy way to restart all instances of a service is to scale them to 0 and then back to the desired number of instances.
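The tip above can be wrapped in a small script. By default this sketch only prints the two commands it would run so you can check them first; set RUN=1 to execute them for real. The service name and replica count are parameters:

```shell
#!/bin/sh
# bounce_service: "restart" every task of a swarm service by scaling it
# to 0 and then back up to the desired replica count.
bounce_service() {
  service="$1"
  replicas="$2"
  maybe docker service scale "$service=0"
  maybe docker service scale "$service=$replicas"
}

# maybe: run the command if RUN=1, otherwise just print it (dry run).
maybe() {
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi
}

bounce_service counter 4
```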
Of course we can also completely delete a service:
$ docker service rm counter
counter
$ docker service ls
ID            NAME   MODE        REPLICAS  IMAGE
tw57a8c0ixk4  redis  replicated  1/1       alexellis2/redis-arm:v6
So this is all you need to deploy a database-backed web application on your Raspberry Pi! If you want to deploy a Java application you can use this image as a base, and you can use this one for Python applications.
Bonus: Failover
So what happens when a node crashes? With a ClusterHat this is really easy and fun to demonstrate: instead of yanking out a network cable I can just power down a Zero on the command line. To demo this I created the counter service on three nodes:
$ docker service ps counter
ID            NAME       IMAGE                                NODE   DESIRED STATE  CURRENT STATE          ERROR  PORTS
0aaom01ttl0l  counter.1  alexellis2/arm_redis_counter:latest  raph   Running        Running 4 seconds ago
gjgnm04mcewg  counter.2  alexellis2/arm_redis_counter:latest  leo    Running        Running 1 second ago
q5dbgcc6x0c7  counter.3  alexellis2/arm_redis_counter:latest  mikey  Running        Running 6 seconds ago
It's active on three of the four Zeros: Leo, Raph and Mikey. What happens if I kill Leo?
$ clusterhat off p1
Turning off P1
$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
08iueg6qv6h4964wf7ahvc4ov    donny     Ready   Active
4ktue0dbx9quqjpb7sucskg1f    raph      Ready   Active
6xb470pw6slr6maxzcusbsx7d *  splinter  Ready   Active        Leader
f4n5p4b5omih0wvlm5qiuswrs    leo       Down    Active
pvwt9odee16ctl2ui4fhiribo    mikey     Ready   Active
$ docker service ps counter
ID            NAME           IMAGE                                NODE   DESIRED STATE  CURRENT STATE               ERROR  PORTS
0aaom01ttl0l  counter.1      alexellis2/arm_redis_counter:latest  raph   Running        Running about a minute ago
5e52pnlmlygg  counter.2      alexellis2/arm_redis_counter:latest  donny  Running        Running 15 seconds ago
gjgnm04mcewg   \_ counter.2  alexellis2/arm_redis_counter:latest  leo    Shutdown       Running about a minute ago
q5dbgcc6x0c7  counter.3      alexellis2/arm_redis_counter:latest  mikey  Running        Running about a minute ago
So Leo goes down (as shown in the node list) and the counter.2 task gets moved from Leo to Donny automatically! Now let's turn it back on:
$ clusterhat on p1
Turning on P1
$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
08iueg6qv6h4964wf7ahvc4ov    donny     Ready   Active
4ktue0dbx9quqjpb7sucskg1f    raph      Ready   Active
6xb470pw6slr6maxzcusbsx7d *  splinter  Ready   Active        Leader
f4n5p4b5omih0wvlm5qiuswrs    leo       Ready   Active
pvwt9odee16ctl2ui4fhiribo    mikey     Ready   Active
After a while Leo is back to Ready / Active again, fully automatically. The Docker daemon starts on boot, remembers the swarm it was part of and reports itself as active (assuming it can find a manager).
Conclusion
Getting Docker Swarm running on my Pis was incredibly straightforward. While I am not giving up on getting Kubernetes running on my Bramble, for now I am going to shift my focus to Docker Swarm. I really hope the Kubernetes team takes a look at how easy this is; if they want developers to use their technology, having a straightforward install is a must.
I’m really grateful for all the hard work Alex Ellis did on writing clear and succinct guides on so many of these topics.