Wednesday, September 16, 2015

Running Kubernetes on a Raspberry PI

Running the Docker engine on a Raspberry Pi is a breeze thanks to the Docker pirates from Hypriot: just download the image, flash it to your Pi, and you are off to the races. I am not going to cover the installation process; it is well documented on the Hypriot website, and I also wrote a recipe in the Docker cookbook. Roughly: download the .img file, dd it to your SD card, then boot your Pi.
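As a sketch, with hypriot.img standing in for the real image file name and /dev/sdX for whatever device your SD card shows up as:

# flash the Hypriot image onto the SD card, then boot the Pi from it
sudo dd if=hypriot.img of=/dev/sdX bs=1M
sync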
Having Docker on a Raspberry Pi offers tons of possibilities for hobbyists and home devices. It also triggered my interest because Kubernetes, one of the Docker orchestrators, can run standalone on a single node using Docker containers; I wrote a post several months ago about doing exactly that with docker-compose. So last weekend I decided to give it a try: running Kubernetes on a Pi using the Hypriot image that ships the Docker engine.

Getting etcd to run

The first issue is that Kubernetes uses etcd, which you need to run on ARM. I decided to get the etcd source directly on the Pi and update the Dockerfile to build it there. The upstream etcd Dockerfile uses a Golang ONBUILD image, which was causing me grief, so I copied the content of the ONBUILD image into a new Dockerfile based on hypriot/rpi-golang and built it directly. You can see the Dockerfile in the GitHub repository linked at the end. With that you have a Docker container running etcd on ARM.
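For illustration, a minimal sketch of what that Dockerfile can look like, assuming the base image ships git like the official golang image does (the clone URL, build script and binary path follow the etcd repository layout of the time; the real file is in my repository):

FROM hypriot/rpi-golang

# do what the golang ONBUILD image would have done:
# fetch the etcd source and build it on the Pi itself
RUN git clone https://github.com/coreos/etcd /go/src/github.com/coreos/etcd
WORKDIR /go/src/github.com/coreos/etcd
RUN ./build

EXPOSE 2379 2380 4001 7001
ENTRYPOINT ["/go/src/github.com/coreos/etcd/bin/etcd"]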

Getting the Hyperkube to run on ARM

Now, I needed the hyperkube binary to run on ARM. Hyperkube is a single binary that allows you to start all the Kubernetes components. Thankfully there are some binaries already available for ARM. That was handy, because I struggled to compile Kubernetes directly on the Pi.
With that hyperkube binary on hand, I built an image based on the resin/rpi-raspbian:wheezy image. Quite straightforward:
FROM resin/rpi-raspbian:wheezy

RUN apt-get update
RUN apt-get -y -q install iptables ca-certificates

COPY hyperkube /hyperkube
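
Build it on the Pi, tagging it with the name that shows up in the docker images listing later:

docker build -t hyperkube .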

The Kubelet systemd unit

The Kubernetes agent running on every node in a cluster is called the Kubelet. The Kubelet is in charge of making sure that all containers that are supposed to run on the node actually do run. It can also be pointed at a manifest to start specific containers at startup. There is a good post from Kelsey Hightower about it. Since the Hypriot image uses systemd, I took the systemd unit that creates a Kubelet service directly from Kelsey's post:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kubelet \
--api-servers=http://127.0.0.1:8080 \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
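
With the unit saved as kubelet.service (the file name is my choice here), installing and starting it is the usual systemd dance:

# register the service and start the Kubelet at boot
cp kubelet.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet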
The kubelet binary is downloaded directly from the same location as hyperkube. The manifest is a Kubernetes Pod definition that starts all the containers needed for a Kubernetes controller: etcd, the API server, the scheduler, the controller manager and the service proxy, all using the hyperkube image built above.
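The real manifest is in the repository; here is a rough sketch of what it can look like (the container names, flag values and etcd binary path are illustrative, not the exact file):

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: etcd:latest
    command:
    - /go/src/github.com/coreos/etcd/bin/etcd
    - --listen-client-urls=http://127.0.0.1:4001
    - --advertise-client-urls=http://127.0.0.1:4001
  - name: apiserver
    image: hyperkube:latest
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://127.0.0.1:4001
    - --service-cluster-ip-range=10.0.0.0/24
    - --insecure-bind-address=127.0.0.1
    - --allow-privileged=true
  - name: controller-manager
    image: hyperkube:latest
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
  - name: scheduler
    image: hyperkube:latest
    command:
    - /hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
  - name: proxy
    image: hyperkube:latest
    securityContext:
      privileged: true
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080

Because this file sits in the --config directory given in the unit file above, the Kubelet creates these containers automatically and keeps them running.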

Now the dirty hack

Kubernetes does something interesting: all containers in a Pod share the same IP address. This is done by running a fake container that just does nothing; the other containers in the Pod share its network namespace. This is called the pause container. I did not find a way to specify a different image for the pause container in Kubernetes; it seems hard-coded to gcr.io/google_containers/pause:0.8.0, which of course is built for x86_64.
So the dirty trick consisted of taking the pause Go code from the Kubernetes source, compiling it on the Pi using hypriot/rpi-golang, sticking the binary in a scratch image, and tagging that image locally as gcr.io/google_containers/pause:0.8.0 to avoid the download of the real image that only runs on x86_64. Yeah... right... I told you it was dirty, but it was the quickest way I could think of.
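Sketched out, the hack looks roughly like this; the location of pause.go in the Kubernetes tree and the exact go build invocation are from memory, so treat them as assumptions:

# compile the pause program (copied out of the Kubernetes source) on the Pi
docker run --rm -v "$PWD":/pause -w /pause hypriot/rpi-golang go build pause.go

# wrap the ARM binary in a scratch image
cat > Dockerfile <<'EOF'
FROM scratch
COPY pause /pause
ENTRYPOINT ["/pause"]
EOF

# tag it with the exact name the Kubelet expects, so it never pulls the x86_64 image
docker build -t gcr.io/google_containers/pause:0.8.0 .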

Putting it all together

Now that you have all the images ready directly on the Pi, plus a Kubelet service, you can start the service. The containers will be created and you will have a single-node Kubernetes cluster on the Pi. All that is left is to use the kubectl CLI to talk to it. You can download an ARM version of kubectl from the official Kubernetes releases.
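For example, something along these lines; the version here is a placeholder, so check which releases actually publish a linux/arm build:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.0.6/bin/linux/arm/kubectl
chmod +x kubectl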
HypriotOS: root@black-pearl in ~
$ docker images
REPOSITORY                       TAG         
hyperkube                        latest
gcr.io/google_containers/pause   0.8.0
etcd                             latest
resin/rpi-raspbian               wheezy   
hypriot/rpi-golang               latest 

HypriotOS: root@black-pearl in ~
$ ./kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
kube-controller-black-pearl   5/5       Running   5          5m
HypriotOS: root@black-pearl in ~
$ ./kubectl get nodes
NAME          LABELS                               STATUS
black-pearl   kubernetes.io/hostname=black-pearl   Ready

Get it

Everything is on GitHub at https://github.com/skippbox/k8s4pi including a horrible bash script that does the entire build :)

Comments:

  1. Hello Sebastien,
    Thanks for your post.
    From what I see, systemd is not installed by default on HypriotOS or Raspbian, but even after I installed it the systemctl command sends some error that I didn't manage to resolve.
    Any prerequisites missing here?

    Thanks
    Jean-Francois

    Replies
    1. Hi, I can't tell exactly right now, but I probably installed the Jessie beta from Hypriot, version 0.5 (http://blog.hypriot.com/downloads/). Did you install this or 0.4?

    2. I installed the wheezy one. Let me try with Jessie. Thanks

    3. With the 0.5 version of HypriotOS, systemd is installed and all works fine.
      Thanks.

  2. Anonymous (6:44 AM)

    Would I be able to use the functions of etcdctl on this?

  3. Jay W. (5:42 AM)

    Thanks so much for this Sebastien, this looks truly awesome. I read that etcd had problems on 32-bit; have you run into any errors with etcd after your hack?

    Replies
    1. Hi Jay, no, I did not see any problems, but I did not test extensively.

  4. Andrew Baumhauer (8:10 AM)

    I'm building etcd on RPi, and the latest branch is not passing the tests for the storage backend. I suspect that if you go back and run the unit tests, you will find that etcd is not really working. The issue seems to be the 64-bit integer alignment of certain structures not being on 8-byte boundaries, but I'm still trying to work through this. Another issue is that the storage layer (on top of BoltDB) tries to allocate 10GB of memory-mapped space, and an RPi doesn't have that much memory or swap provisioned.

    Replies
    1. You might want to check the Hypriot Cluster Lab; it is going to be more up to date.
