Thursday, April 09, 2015

1 command to Kubernetes with Docker compose

After 1 command to Mesos, here is 1 command to Kubernetes.

I had not looked at Kubernetes in over a month. It is a fast-paced project so it is hard to keep up. If you have not looked at Kubernetes yet, it is roughly a cluster manager for containers: it takes a set of Docker hosts under management and schedules groups of containers on them. Kubernetes was open sourced by Google around June last year to bring Google's knowledge of working with containers to the rest of us, a.k.a The people :) There are a lot of container schedulers (or orchestrators if you wish) out there: Citadel, Docker Swarm, Mesos with the Marathon framework, Cloud Foundry Lattice etc. The Docker ecosystem is booming and our heads are spinning.

What I find very interesting with Kubernetes is the concept of replication controllers. Not only can you schedule groups of colocated containers together in a cluster, but you can also define replica sets. Say you have a container you want to scale up or down: you can define a replication controller and use it to resize the number of containers running. It is great for scaling when the load dictates it, but it is also great when you want to replace a container with a new image. Kubernetes also exposes a concept of services, basically a way to expose a container application to all the hosts in your cluster as if it were running locally. Think the ambassador pattern of the early Docker days, but on steroids.
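
To give you an idea, here is a minimal sketch of what a replication controller definition could look like. I am using the v1beta3 schema that is being introduced around this release; older releases use v1beta1, so adapt the field names to the version you run:

apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80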

All that said, you want to try Kubernetes. I know you do. So here is 1 command to try it out. We are going to use docker-compose like we did with Mesos and, thanks to this how-to which seems to have landed 3 days ago, we are going to run Kubernetes on a single host with containers. That means that all the Kubernetes components (the "agent", the "master" and the various controllers) will run in containers.

Install compose on your Docker host, if you do not have it yet:

curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
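
You can quickly check that the binary works:

$ docker-compose --version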

Then create this YAML file, call it say k8s.yml:

etcd:
  image: kubernetes/etcd:2.0.5.1
  net: "host"
  command: /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
master:
  image: gcr.io/google_containers/hyperkube:v0.14.1
  net: "host"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  command: /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
proxy:
  image: gcr.io/google_containers/hyperkube:v0.14.1
  net: "host"
  privileged: true
  command: /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
  

And now, 1 command:

$ docker-compose -f k8s.yml up -d

Quickly thereafter, you will see a bunch of containers pop up:

$ docker ps
CONTAINER ID        IMAGE                                       
56c36dc7bf7e        nginx:latest               
a17cac87965b        kubernetes/pause:go  
659917e61d3e        gcr.io/google_containers/hyperkube:v0.14.1
caf22057dbad        gcr.io/google_containers/hyperkube:v0.14.1
288fcb4408c7        gcr.io/google_containers/hyperkube:v0.14.1
820cc546b352        kubernetes/pause:go  
0bfac38bdd10        kubernetes/etcd:2.0.5.1                               
81f58059ca8d        gcr.io/google_containers/hyperkube:v0.14.1                     
ca1590c1d5c4        gcr.io/google_containers/hyperkube:v0.14.1

In the YAML file above, you can see in the commands that we use a single binary, hyperkube, that allows you to start all the Kubernetes components: the API server, the replication controller, etc. One of the components it starts is the kubelet, which is normally used to monitor containers on one of the hosts in your cluster and make sure they stay up. Here, by passing it the /etc/kubernetes/manifests directory, it also starts the other Kubernetes components defined in that manifest. Clever ! Note also that the containers were started with host networking. These containers share the network stack of the host, so you will not see an interface for them on the docker bridge.
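
You can verify the host networking for yourself by inspecting one of the containers listed above, for example one of the hyperkube ones:

$ docker inspect --format '{{ .HostConfig.NetworkMode }}' 659917e61d3e
host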

With all of those up, grab the kubectl binary, that is your Kubernetes client which you will use to interact with the system. The first thing you can do is list the nodes:

$ ./kubectl get nodes
NAME        LABELS    STATUS
127.0.0.1   <none>    Ready

Now start your first container:

./kubectl run-container nginx --image=nginx --port=80

That's a simple example where you start a single container. Normally you will want to group the containers that need to be colocated, write a pod description in YAML or JSON and then pass that to kubectl. But it looks like they extended kubectl to take single container start-ups. That's handy for testing.
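
For the record, a minimal pod description for the same nginx container could look like the sketch below (again assuming the v1beta3 schema, adapt to your version) and would be started with ./kubectl create -f nginx-pod.yaml:

apiVersion: v1beta3
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80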

Now list your pods:

$ ./kubectl get pods
POD           IP           CONTAINER(S)         IMAGE(S)                                    
nginx-127                  controller-manager   gcr.io/google_containers/hyperkube:v0.14.1
                           apiserver            gcr.io/google_containers/hyperkube:v0.14.1 
                           scheduler            gcr.io/google_containers/hyperkube:v0.14.1                                                         
nginx-p2sq7   172.17.0.4   nginx                nginx                                      

You see that there are actually two pods running: the nginx one that you just started and one pod made of three containers. That second pod is the one that was started by your kubelet to get Kubernetes up. Kubernetes managed by Kubernetes...

It automatically created a replication controller (rc):

$ ./kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR              REPLICAS
nginx        nginx          nginx      run-container=nginx   1

You can have some fun with the resize capability right away and see a new container pop up.

$ ./kubectl resize --replicas=2 rc nginx
resized

Now, that is fine and dandy, but there is no port exposed on the host, so you cannot access your application from the outside. That's where you want to define a service. Technically, a service is used to expose an application to all nodes in a cluster, but of course you can also bind that service proxy to a publicly routed interface:

$ ./kubectl expose rc nginx --port=80 --public-ip=192.168.33.10

Now take your browser and open it at http://192.168.33.10 (if that's the IP of your host of course) and enjoy a replicated nginx managed by Kubernetes deployed in 1 command.
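
You can double check that the service was created; the exact output columns will vary with the version of kubectl you grabbed:

$ ./kubectl get services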

You will get more of that good stuff in my book, if I manage to finish it. Wish me luck.

Running the CloudStack Simulator in Docker

CloudStack comes with a simulator. It is very handy for testing purposes; we use it to run our smoke tests on TravisCI for each commit to the code base. However, if you want to run the simulator yourself, you need to compile from source using some special maven profiles. That requires you to check out the code and set up your working environment with the dependencies for a successful CloudStack build.
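
For reference, the from-source route looks roughly like the following. Treat it as a sketch, the exact profiles and targets may have changed since I last built it:

$ mvn -Pdeveloper -Dsimulator clean install
$ mvn -Pdeveloper -pl developer -Ddeploydb
$ mvn -Pdeveloper -pl developer -Ddeploydb-simulator
$ mvn -pl client jetty:run -Dsimulator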

With Docker you can skip all of that and simply download the cloudstack/simulator image from the Docker Hub. Start a container from that image and expose port 8080 where the dashboard is being served. Once the container is running, you can use docker exec to configure a simulated data center. This will allow you to start fake virtual machines, create security groups and so on. You can do all of this through the dashboard or using the CloudStack API.

So you want to give CloudStack a try ? Use Docker :)

$ docker pull cloudstack/simulator

The image is a bit big and we need to work on slimming it down, but once the image is pulled, starting the container will be almost instant. If you feel like sending a little PR, just look at the Dockerfile; there might be a few obvious ways to slim down the image.

$ docker run -d -p 8080:8080 --name cloudstack cloudstack/simulator

The application needs a few minutes to start, however; something that I have not had time to investigate (probably we need to give more memory to the container). Once you can access the dashboard at http://localhost:8080/client you can configure the simulated data center. You can choose between a basic zone, which gives you L3 network isolation, or an advanced zone, which gives you VLAN based isolation:

$ docker exec -ti cloudstack python /root/tools/marvin/marvin/deployDataCenter.py -i /root/setup/dev/basic.cfg
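
If you would rather configure an advanced zone, point the same script at the advanced configuration file (I am assuming the path mirrors the basic one, check inside the container for the exact file name):

$ docker exec -ti cloudstack python /root/tools/marvin/marvin/deployDataCenter.py -i /root/setup/dev/advanced.cfg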

Once the configuration completes, head over to the dashboard at http://localhost:8080/client and check your simulated infrastructure.

Enjoy the CloudStack simulator brought to you by Docker.

Wednesday, March 18, 2015

1 Command to Mesos with Docker Compose

If you have not tried Docker, you should. The sheer power it puts in your hands and the simplicity of the user experience will just wow you. In this post, I will show you how to start a one node Mesos setup with Docker compose.

Docker announced Compose on February 26th. Compose allows you to describe a multi-container setup and manage it with one binary, docker-compose. The containers and volumes combinations managed by Compose are defined in a YAML file, super easy to read and super easy to write. The UX is very similar to the Docker CLI.

When Compose was released, I tried it and was a bit underwhelmed, as it is basically a re-branding of Fig. This is not unexpected, as Docker Inc. acquired Orchard, the makers of Fig. But I was expecting more added functionality and an even tighter integration with the Docker client (something a dev branch actually prototyped), maybe even a common release instead of a separate binary. I am sure this will come.

As I am writing the Docker cookbook, I have deployed Wordpress 20 different ways, and it's getting a bit boring ! While looking for more information on Mesos and its support for Docker, I re-read a terrific blog post that showed how to start a Mesos setup (Zookeeper, master, slave, Marathon framework) in 7 commands. Can't beat that.

When I re-read this post, I immediately thought this was an exciting use case for docker-compose: one YAML file to start Mesos/Zookeeper/Marathon and experiment with it. Of course I am not talking about a production multi-node setup. I am just looking at it for an easy Mesos experiment.
I will spare you the details of installing Compose (just a curl away). The Docker docs are great.

So here is the YAML file describing our Mesos setup:

zookeeper:
  image: garland/zookeeper
  ports:
   - "2181:2181"
   - "2888:2888"
   - "3888:3888"
mesosmaster:
  image: garland/mesosphere-docker-mesos-master
  ports:
   - "5050:5050"
  links:
   - zookeeper:zk
  environment:
   - MESOS_ZK=zk://zk:2181/mesos
   - MESOS_LOG_DIR=/var/log/mesos
   - MESOS_QUORUM=1
   - MESOS_REGISTRY=in_memory
   - MESOS_WORK_DIR=/var/lib/mesos
marathon:
  image: garland/mesosphere-docker-marathon
  links:
   - zookeeper:zk
   - mesosmaster:master
  command: --master zk://zk:2181/mesos --zk zk://zk:2181/marathon
  ports:
   - "8080:8080"
mesosslave:
  image: garland/mesosphere-docker-mesos-master:latest
  ports:
   - "5051:5051"
  links:
   - zookeeper:zk
   - mesosmaster:master
  entrypoint: mesos-slave
  environment:
   - MESOS_HOSTNAME=192.168.33.10
   - MESOS_MASTER=zk://zk:2181/mesos
   - MESOS_LOG_DIR=/var/log/mesos
   - MESOS_LOGGING_LEVEL=INFO

Four containers, images pulled from the Docker Hub, some ports exposed on the host, some container linking and some environment variables used to configure the Mesos slave and master. One small hiccup in the slave definition: you will see that I set MESOS_HOSTNAME to the IP of the host. This allows me to browse the stdout and stderr of a Marathon task, otherwise we cannot reach them easily (small improvement to be done there).

Launch this with docker-compose:

$ ./docker-compose up -d
Recreating vagrant_zookeeper_1...
Recreating vagrant_mesosmaster_1...
Recreating vagrant_marathon_1...
Recreating vagrant_mesosslave_1...

And open your browser at http://IP_HOST:5050, then follow the rest of the blog to start a task in Marathon.
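
If you would rather skip the UI, you can also submit a simple app definition straight to the Marathon API; a minimal sketch, replacing IP_HOST with the IP of your host:

$ curl -X POST http://IP_HOST:8080/v2/apps \
       -H "Content-Type: application/json" \
       -d '{"id": "test", "cmd": "sleep 600", "cpus": 0.1, "mem": 16, "instances": 1}'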



Bottom line, I went from '7 commands to Mesos' to '1 command to Mesos' thanks to docker-compose and a fairly simple YAML file. Got to love it. When Compose can do this across Docker hosts in a Docker Swarm started by Machine, the real fun will begin !

Tuesday, March 03, 2015

Rancher on RancherOS

Someone at Rancher must have some cattle in the middle of Arizona or in the backcountry of California. Or one of their VCs might be in Montana, sitting in a big ranch while Docker is eating the IT world. In any case, this post is short and sweet, like veggies and not like cattle (TFW), and it is about Rancher and the newly announced RancherOS. Check out the RancherOS announcement.

Let's keep this short, shall we ? Docker is great, but it is a daemon running on a single host. Since you want to scale :) and operate multiple servers, you need something to manage your Docker containers across multiple hosts. Several solutions are emerging: of course Docker Swarm, but also Kubernetes, Lattice from Cloud Foundry and even Apache Mesos. Rancher is one of these cluster management solutions for Docker. It does some nice things like cross-host container linking through a custom built network overlay (think Flannel, Weave, Socketplane).

You can use Rancher with any set of Docker hosts. However, a new type of operating system has started to appear: container optimized OSes, or Just Enough Operating System for Docker. CoreOS, Project Atomic from RedHat and Ubuntu Snappy fit in that space. They aim to provide rolling atomic upgrades to the OS and run everything in it as a container. No more package manager, magic happens and you are always up to date. Package all your apps in containers, and use Rancher to run them in your cluster. End of story. Wait, enter RancherOS.

RancherOS

A couple lines of bash make all the talking:

$ git clone https://github.com/rancherio/os-vagrant.git
$ cd os-vagrant
$ vagrant up
$ vagrant ssh
[rancher@rancher ~]$ docker version
Client version: 1.5.0
…

RancherOS is a super minimalistic OS built exclusively for Docker. It goes further and also runs system services as containers themselves. I will let @ibuildthecloud talk about systemd and Docker as PID 1.

[rancher@rancher ~]$ sudo system-docker ps
CONTAINER ID        IMAGE               COMMAND                ...      NAMES
32607470eb78        console:latest      "/usr/sbin/console.s   ...      console             
d0420165c1c0        userdocker:latest   "/docker.sh"           ...      userdocker          
375a8de12183        syslog:latest       "/syslog.sh"           ...      syslog              
d284afd7f628        ntp:latest          "/ntp.sh"              ...      ntp   

The next logical question is of course... drum roll... Can I run Rancher on RancherOS? RinR, not R&R ? And the answer is a resounding yes. I expect Rancher to come out in the next weeks, maybe months, with a solid product based on the two.

Rancher

If you are interested in trying out RinR, then check out the Ansible playbook I just made. You can use it to deploy a cluster of RancherOS instances in AWS and use one of them as a master and the others as workers. The master runs in a container:

$ docker run -d -p 8080:8080 rancher/server 

And the workers can register with their agent:

$ sudo docker run --rm -it --privileged -v /var/run/docker.sock:/var/run/docker.sock rancher/agent http://<master_ip>:8080

Once all the workers have registered you can use the UI or the API to start containers.


As you can see I tested this at web scale with two nodes :)

Notes

In this very early, super bleeding-edge testing phase (as you can tell I am in a good spirit today), I did find a few things that were a bit strange. Considering RancherOS was announced just last week, I am sure things will get fixed. Cloud-init support is minimal: I was not able to add a second network interface, and support for both keypair and userdata at the same time seems off. The UI was a bit slow to start and building the overlay was also a bit slow. It is also possible that I did something wrong.

Overall though, Rancher is quite nice. It builds on years of experience the team gained developing CloudStack and operating clouds at scale, and applies it to the Docker world. It does seem that they want to integrate with and provide the native Docker API, which would mean that users will be able to use Docker Machine to add hosts to a Rancher cluster, or even Docker Swarm, and that launching a container would also be a docker command away. How that differentiates from Swarm itself is not yet clear, but I would bet we will see additional networking and integration services in Rancher. Blurring the lines with Kubernetes ? Time will tell.

Thursday, January 29, 2015

O'Reilly Docker cookbook

The last two months have been busy as I am writing the O'Reilly Docker cookbook at night and on weekends. CloudStack during the day, Docker at night :) You can read the very "drafty" preface on Safari and you will get a sense of why I started writing the book.

Docker is amazing, it brings a terrific user experience to packaging applications and deploying them easily. It is also a piece of software that is moving very fast, with over 5,500 pull requests closed so far. The community is huge and folks are very excited about it, just check those 18,000+ stars on GitHub.

Writing a book on Docker means reading all the documentation, reading countless blogs that are flying through twitter, and then, because it's a cookbook, getting your hands dirty to actually try everything, test everything, over and over again. A cookbook is made of recipes in a very set format: Problem, Solution, Discussion. It is meant to be picked up at any time, opened at any page, to read a recipe that is independent of all the others. The book is now in pre-release, which means that you can buy it and get the very drafty version of the book as I write it, mistakes, typos and bad grammar included. As I keep writing you get the updates, and once I am done you of course get the final proof-read, corrected and reviewed version.

As I started writing, I thought I would share some of the snippets of code I am writing to do the recipes. The code is available on GitHub at the how2dock account. How2dock should become a small company for Docker training and consulting as soon as I find spare time :).

What you will find there is not really code, but rather a repository of scripts and Vagrantfiles that I use in the book to showcase a particular feature or command of Docker. The repository is organized the same way as the book: you can pick a chapter, then a particular recipe, and go through the README.

For instance if you are curious about Docker swarm:

$ git clone https://github.com/how2dock/docbook.git
$ cd ch07/swarm
$ vagrant up

This will bring up four virtual machines via Vagrant and do the necessary bootstrapping to get the cluster set up with Swarm.
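
Once the machines are up, you would typically point your Docker client at the Swarm manager and check that the nodes joined; something along these lines, with the manager address being whatever the Vagrantfile assigns (the IP below is just an example):

$ docker -H tcp://192.168.33.10:2375 info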

If you want to run a Wordpress blog with a MySQL database, check out the fig recipe:

$ cd ch07/fig
$ vagrant up
$ vagrant ssh
$ cd /vagrant
$ fig up -d

And enjoy Wordpress :)
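
If you are curious, a Wordpress plus MySQL fig file boils down to something like this sketch (the recipe in the repository may differ slightly):

wordpress:
  image: wordpress
  links:
    - mysql:mysql
  ports:
    - "80:80"
mysql:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=wordpressdocker
    - MYSQL_DATABASE=wordpress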

I have put a lot more in there. You will find an example of using the Ansible Docker module, a libcloud script to start an Ubuntu Snappy instance on EC2, a Dockerfile to help you create TLS certificates (really a convenience container for testing TLS in Docker), a Docker Machine setup and a recipe on using Supervisor.

As I keep writing, I will keep putting all the snippets in this How2dock repo. Expect frequent changes, typos, errors... and corrections :)

And FWIW, it is much scarier to put a book out in pre-release unedited than to put some scripts up on GitHub.

Suggestions, comments, reviews all welcome ! Happy Docking !

Thursday, October 02, 2014

CloudStack simulator on Docker

Docker is a lot of fun and one of its strengths is the portability of applications. This gave me the idea to package the CloudStack management server as a Docker image.

CloudStack has a simulator that can fake a data center infrastructure. It can be used to test some of the basic functionalities. We use it to run our integration tests, like the smoke tests on TravisCI. The simulator allows us to configure an advanced or basic networking zone with fake hypervisors.

So I bootstrapped the CloudStack management server, configured the MySQL database with an advanced zone and created a Docker image with Packer. The resulting image is on Docker Hub, and I realized after the fact that four other great minds had already done something similar :)

On a machine running docker:

docker pull runseb/cloudstack
docker run -t -i -p 8080:8080 runseb/cloudstack:0.1.4 /bin/bash
# service mysql restart
# cd /opt/cloudstack
# mvn -pl client jetty:run -Dsimulator

Then open your browser on http://<IP_of_docker_host>:8080/client and enjoy !

Tuesday, September 30, 2014

On Docker and Kubernetes on CloudStack

Docker has pushed containers to a new level, making it extremely easy to package and deploy applications within containers. Containers are not new; Solaris containers and OpenVZ are among several container technologies going back to 2005. But Docker has caught on quickly, as mentioned by @adrianco. The startup speed is not surprising for containers, and the portability is reminiscent of the Java goal to "write once run anywhere". What is truly interesting with Docker is the availability of Docker registries (e.g Docker Hub) to share containers, and the potential to change application deployment workflows.

Rightly so, we should soon see IT move to a Docker based application deployment, where developers package their applications and make them available to Ops, very much like we have been using war files. Embracing a DevOps mindset/culture should be easier with Docker. Where it becomes truly interesting is when we start thinking about an infrastructure whose sole purpose is to run containers. We can envision a bare operating system with a single goal: to manage Docker based services. This should make sysadmin life easier.

The role of the Cloud with Docker

While the buzz around Docker has been truly amazing and a community has grown overnight, some may think that this signals the end of the cloud. I think it is far from the truth, as Docker may indeed become the killer app of the cloud.

An IaaS layer is what it is: an infrastructure orchestration layer, while Docker and its ecosystem will become the application orchestration layer.

The question then becomes: how do I run Docker in the cloud ? And there is a straightforward answer: just install Docker in your cloud templates. Whether on AWS or GCE or Azure or your private cloud, you can prepare Linux based templates that provide Docker support. If you are aiming for the bare operating system whose sole purpose is to run Docker, then the new CoreOS Linux distribution might be your best pick. CoreOS provides rolling upgrades of the kernel, systemd based services, a distributed key value store (i.e etcd) and a distributed service scheduling system (i.e fleet).
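
To give you a quick taste of those last two, here is the kind of thing you can do from any node of a CoreOS cluster (a minimal sketch, assuming the cluster is already bootstrapped):

$ etcdctl set /message "hello from coreos"
hello from coreos
$ fleetctl list-machines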

exoscale, an Apache CloudStack based public cloud, recently announced the availability of CoreOS templates.

Like exoscale, any cloud provider, be it public or private, can make CoreOS templates available, providing Docker within the cloud instantly.

Docker application orchestration, here comes Kubernetes

Running one container is easy, but running multiple coordinated containers across a cluster of machines is not yet a solved problem. If you think of an application as a set of containers, starting these on multiple hosts, replicating some of them, accessing distributed databases, providing proxy services and fault tolerance is the true challenge.

However, Google came to the rescue and announced Kubernetes, a system that solves these issues and makes managing scalable, fault-tolerant, container based apps doable :)

Kubernetes is of course available on the Google public cloud GCE, but also on Rackspace, Digital Ocean and Azure. It can also be deployed easily on CoreOS.

Kubernetes on CloudStack

Kubernetes is under heavy development; you can test it with Vagrant. Under the hood, aside from the Go code :), most of the deployment solutions use SaltStack recipes, but if you are a Chef, Puppet or Ansible user I am sure we will see recipes for those configuration management solutions soon.

But you surely got the same idea that I had :) Since Kubernetes can be deployed on CoreOS and CoreOS is available on exoscale, we are just a breath away from running Kubernetes on CloudStack.

It took a little more than a breath, but you can clone kubernetes-exoscale and you will get it running in 10 minutes, with a 3 node etcd cluster and a 5 node Kubernetes cluster running a Flannel overlay.

CloudStack supports EC2-like userdata and the CoreOS templates on exoscale support cloud-init, hence passing some cloud-config files at instance deployment was straightforward. I used libcloud to provision all the nodes, and created the proper security groups to let the Kubernetes nodes access the etcd cluster and talk to each other, especially opening a UDP port to build the networking overlay with Flannel. Fleet is used to launch all the Kubernetes services. Try it out.
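
For reference, the cloud-config passed as userdata to the etcd nodes looks roughly like the sketch below. The actual files live in the kubernetes-exoscale repository and carry more units; the discovery token is a placeholder you generate yourself at https://discovery.etcd.io/new:

#cloud-config
coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start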

Conclusions.

Docker is extremely easy to use and by taking advantage of a cloud you can get started quickly. CoreOS will put your Docker work on steroids, with the ability to start apps as systemd services over a distributed cluster. Kubernetes will up that by giving you replication of your containers and proxy services for free (time).

We might see pure docker based public clouds (e.g think Mesos cluster with a Kubernetes framework). These will look much more like PaaS, especially if they integrate a Docker registry and a way to automatically build docker images (e.g think continuous deployment pipeline).

But a "true" IaaS is actually very complimentary, providing multi-tenancy, higher security as well as multiple OS templates. So treating docker as a standard cloud workload is not a bad idea. With three dominant public clouds in AWS, GCE and Azure and a multitude of "regional" ones like exoscale it appears that building a virtualization based cloud is a solved problem (at least with Apache CloudStack :)).

So instead of talking about cloudifying your application, maybe you should start thinking about Dockerizing your applications and letting them loose on CloudStack.

Friday, July 11, 2014

GCE Interface to CloudStack

Gstack, a GCE compatible interface to CloudStack

Google Compute Engine (GCE) is the Google public cloud. In December 2013, Google announced the General Availability (GA) of GCE. With AWS and Microsoft Azure, it is one of the three leading public clouds in the market. Apache CloudStack now has a brand new GCE compatible interface (Gstack) that lets users use the GCE clients (i.e gcloud and gcutil) to access their CloudStack cloud. This has been made possible through the Google Summer of Code program.

Last summer Ian Duffy, a student from Dublin City University, participated in GSoC through the Apache Software Foundation (ASF) and worked on an LDAP plugin to CloudStack. He did such a great job that he finished early and was made an Apache CloudStack committer. Since he was done with his original GSoC project I encouraged him to take on a new one :), and he brought in a friend for the ride: Darren Brogan. Both of them worked for fun on the GCE interface to CloudStack and learned Python doing so.

They remained engaged with the CloudStack community and, as a third year project, worked on an Amazon EC2 interface to CloudStack using what they learned from the GCE interface. They got an A :). Since they loved it so much, Darren applied to the GSoC program and proposed to go back to Gstack, improve it, extend the unit tests and make it compatible with the GCE v1 API.

Technically, Gstack is a Python Flask application that provides a REST API compatible with the GCE API and forwards the requests to the corresponding CloudStack API. The source is available on GitHub and the binary is downloadable via PyPI. Let me show you how to use it.

Installation and Configuration of Gstack

You can grab the Gstack binary package from PyPI using pip in one single command:

pip install gstack

Or, if you plan to explore the source and work on it, you can clone the repository and install it by hand. Pull requests are of course welcome.

git clone https://github.com/NOPping/gstack.git
sudo python ./setup.py install

Both of these installation methods will install a gstack and a gstack-configure binary in your path. Before running Gstack you must configure it. To do so run:

gstack-configure

And enter your configuration information when prompted. You will need to specify the host and port that you want gstack to bind to, as well as the CloudStack endpoint that you want gstack to forward the requests to. In the example below we use the exoscale cloud:

$ gstack-configure
gstack bind address [0.0.0.0]: localhost
gstack bind port [5000]: 
Cloudstack host [localhost]: api.exoscale.ch
Cloudstack port [8080]: 443
Cloudstack protocol [http]: https
Cloudstack path [/client/api]: /compute

The information will be stored in a configuration file available at ~/.gstack/gstack.conf:

$ cat ~/.gstack/gstack.conf 
PATH = 'compute/v1/projects/'
GSTACK_BIND_ADDRESS = 'localhost'
GSTACK_PORT = '5000'
CLOUDSTACK_HOST = 'api.exoscale.ch'
CLOUDSTACK_PORT = '443'
CLOUDSTACK_PROTOCOL = 'https'
CLOUDSTACK_PATH = '/compute'

You are now ready to start Gstack in the foreground with:

gstack

That's all there is to running Gstack. However, to be able to use it as if you were talking to GCE, you need to use gcutil and configure it a bit.

Using gcutil with Gstack

The current version of Gstack only works with the stand-alone version of gcutil. Do not use the version of gcutil bundled in the Google Cloud SDK; instead install the 0.14.2 version of gcutil. Gstack comes with a self-signed certificate for the local endpoint, gstack/data/server.crt; copy the certificate into the gcutil certificates file gcutil/lib/httplib2/httplib2/cacerts.txt. A bit dirty, I know, but that's a work in progress.
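
Something like this should do the trick, appending our self-signed certificate to the existing CA bundle (paths relative to where you cloned gstack and unpacked gcutil):

$ cat gstack/data/server.crt >> gcutil/lib/httplib2/httplib2/cacerts.txt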

At this stage, your CloudStack API key and secret key need to be entered in the gcutil auth_helper.py file at gcutil/lib/google_compute_engine/gcutil/auth_helper.py.

Again, not ideal, but hopefully gcutil or the Cloud SDK will soon be able to configure those without touching the source. Darren and Ian opened a feature request with Google to pass the client_id and client_secret as options to gcutil; hopefully a future release of gcutil will allow us to do so.

Now, create a cached parameters file for gcutil, assuming you are running gstack on your local machine and using the defaults that were suggested during the configuration phase. Modify ~/.gcutil_params with the following:

--auth_local_webserver
--auth_host_port=9999
--dump_request_response
--authorization_uri_base=https://localhost:5000/oauth2
--ssh_user=root
--fetch_discovery
--auth_host_name=localhost
--api_host=https://localhost:5000/

Warning: Make sure to set the --auth_host_name variable to the same value as GSTACK_BIND_ADDRESS in your ~/.gstack/gstack.conf file. Otherwise you will see certificate errors.

With this setup complete, gcutil will issue requests to the local Flask application, get an OAuth token, issue requests to your CloudStack endpoint and return the responses in a GCE compatible format.

Example with exoscale.

You can now start issuing standard gcutil commands. For illustration purposes we use exoscale. Since there are several semantic differences, you will notice that for the project we use the account information from CloudStack; hence we pass our email address as the project value.

Let's start by listing the availability zones:

$ gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com listzones
+----------+--------+------------------+
| name     | status | next-maintenance |
+----------+--------+------------------+
| ch-gva-2 | UP     | None scheduled   |
+----------+--------+------------------+

Let's list the machine types (in CloudStack terminology, the compute service offerings) and the available images:

$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com listimages
$ gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com listmachinetypes
+-------------+----------+------+-----------+-------------+
| name        | zone     | cpus | memory-mb | deprecation |
+-------------+----------+------+-----------+-------------+
| Micro       | ch-gva-2 |    1 |       512 |             |
+-------------+----------+------+-----------+-------------+
| Tiny        | ch-gva-2 |    1 |      1024 |             |
+-------------+----------+------+-----------+-------------+
| Small       | ch-gva-2 |    2 |      2048 |             |
+-------------+----------+------+-----------+-------------+
| Medium      | ch-gva-2 |    2 |      4096 |             |
+-------------+----------+------+-----------+-------------+
| Large       | ch-gva-2 |    4 |      8182 |             |
+-------------+----------+------+-----------+-------------+
| Extra-large | ch-gva-2 |    4 |     16384 |             |
+-------------+----------+------+-----------+-------------+
| Huge        | ch-gva-2 |    8 |     32184 |             |
+-------------+----------+------+-----------+-------------+

You can also list firewalls, which gstack maps to CloudStack security groups. To create a security group, use the firewall commands:

$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com addfirewall ssh --allowed=tcp:22
$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com getfirewall ssh

To start an instance you can follow the interactive prompt given by gcutil. You will need to pass the --permit_root_ssh flag, another one of those semantic and access configuration differences that needs to be ironed out. The interactive prompt will let you choose the machine type and the image that you want; it will then start the instance.

$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com addinstance foobar
Selecting the only available zone: CH-GV2
1: Extra-large  Extra-large 16384mb 4cpu
2: Huge Huge 32184mb 8cpu
3: Large    Large 8192mb 4cpu
4: Medium   Medium 4096mb 2cpu
5: Micro    Micro 512mb 1cpu
6: Small    Small 2048mb 2cpu
7: Tiny Tiny 1024mb 1cpu
7
1: CentOS 5.5(64-bit) no GUI (KVM)
2: Linux CentOS 6.4 64-bit
3: Linux CentOS 6.4 64-bit
4: Linux CentOS 6.4 64-bit
5: Linux CentOS 6.4 64-bit
6: Linux CentOS 6.4 64-bit
<...snip>
INFO: Waiting for insert of instance . Sleeping for 3s.
INFO: Waiting for insert of instance . Sleeping for 3s.

Table of resources:

+--------+--------------+--------------+----------+---------+
| name   | network-ip   | external-ip  | zone     | status  |
+--------+--------------+--------------+----------+---------+
| foobar | 185.1.2.3    | 185.1.2.3    | ch-gva-2 | RUNNING |
+--------+--------------+--------------+----------+---------+

Table of operations:

+--------------+--------+--------------------------+----------------+
| name         | status | insert-time              | operation-type |
+--------------+--------+--------------------------+----------------+
| e4180d83-31d0| DONE   | 2014-06-09T10:31:35+0200 | insert         |
+--------------+--------+--------------------------+----------------+

You can of course list (with listinstances) and delete instances:

$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com deleteinstance foobar
Delete instance foobar? [y/n]
y 
WARNING: Consider passing '--zone=CH-GV2' to avoid the unnecessary zone lookup which requires extra API calls.
INFO: Waiting for delete of instance . Sleeping for 3s.
+--------------+--------+--------------------------+----------------+
| name         | status | insert-time              | operation-type |
+--------------+--------+--------------------------+----------------+
| d421168c-4acd| DONE   | 2014-06-09T10:34:53+0200 | delete         |
+--------------+--------+--------------------------+----------------+

Gstack is still a work in progress, but it is now compatible with the GCE GA v1.0 API. The few differences in API semantics need to be investigated further and additional API calls need to be supported. However, it provides a solid base to start working on hybrid solutions between the GCE public cloud and a CloudStack based private cloud.

GSoC has been terrific for Ian and Darren; they both learned how to work with an open source community and ultimately became part of it through their work. They learned tools like JIRA, git and Review Board, and became less shy about working publicly on mailing lists. Their work on Gstack and EC2stack is certainly of high value to CloudStack and should become the base for interesting products that will use hybrid clouds.