Setting up Jenkins Master-Slave with Kubernetes and Docker

Kubernetes is an open-source system for managing containerized applications across multiple hosts, providing basic mechanisms for the deployment, maintenance, and scaling of applications. I had a requirement to set up a Docker-based Jenkins continuous integration environment that could scale on the fly using Kubernetes container services. After digging through the open-source solutions available at the time, I decided to come up with a draft solution of my own. It works pretty well against all the initial assumptions and requirements.


The architecture workflow mainly consists of three major components:

  1. Jenkins Cluster Manager

  2. Jenkins Kubernetes Cluster

  3. Private Docker Registry

1. Jenkins Cluster Manager

The Cluster Manager is a processor instance running the Python Flask and Celery frameworks. It manages the Jenkins cluster through the Kubernetes API: it communicates with the Jenkins Master and acts on incoming instructions to ramp slave nodes up and down.

It manages the cluster by resizing the Jenkins slave replication controller and, if needed, ramping minion nodes up and down to accommodate increases in the Jenkins job load. It returns the newly created slave instances (minions) to the Jenkins Master, so the Master can SSH into the slaves and execute jobs.
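The Cluster Manager's core loop can be sketched as a small Python routine. The API path, controller name, and sizing thresholds below are illustrative assumptions, not from the source (early Kubernetes exposed replication controllers under `/api/v1beta1`):

```python
import json
import urllib.request

# Hypothetical API server address and version; adjust to your cluster.
API = "https://kubernetes-master/api/v1beta1"

def desired_slaves(queue_length, jobs_per_slave=2, min_slaves=1, max_slaves=10):
    """Compute how many slave pods the current job queue calls for."""
    needed = -(-queue_length // jobs_per_slave)  # ceiling division
    return max(min_slaves, min(max_slaves, needed))

def resize_slave_controller(replicas, name="jenkins-slave"):
    """PUT the updated replica count to the slave replication controller."""
    url = f"{API}/replicationControllers/{name}"
    body = json.dumps({"desiredState": {"replicas": replicas}}).encode()
    req = urllib.request.Request(
        url, data=body, method="PUT",
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)
```

The pure `desired_slaves` calculation keeps the scaling policy separate from the API call, so the thresholds can be tuned without touching the Kubernetes plumbing.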

2. Jenkins Kubernetes Cluster

Kubernetes is an open-source implementation of container cluster management. The Jenkins Kubernetes cluster consists of one master node and a group of multiple, expendable minion nodes. One minion runs the Jenkins Master instance, and the remaining minions run Jenkins Slave instances. The Jenkins slave nodes in the Kubernetes cluster are elastic and can easily be ramped up and down through the Kubernetes API with a replication controller.

The Jenkins Master by itself is the basic installation of Jenkins, and in this configuration the master handles all tasks for your build system. It controls the jobs available to process, and if the number of jobs in the queue grows beyond a specified limit, it kicks off sshd and starts a slave agent. It also decides on the fly whether to ramp slave agent nodes up or down. To make this work, it sets up two-way communication with the Jenkins Cluster Manager, an outside processor that manages the cluster's behavior by resizing replication controllers and ramping minion nodes up and down.
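The ramp-up/ramp-down decision described above can be expressed as a small pure function; the threshold names and values here are illustrative, not taken from the source:

```python
# Sketch of the decision the Master sends to the Cluster Manager.
# queue_limit is the "specified limit" on queued jobs mentioned above.
def scaling_action(queued_jobs, idle_slaves, queue_limit=5):
    """Return 'ramp_up', 'ramp_down', or 'hold' for the current state."""
    if queued_jobs > queue_limit:
        return "ramp_up"
    if queued_jobs == 0 and idle_slaves > 0:
        return "ramp_down"
    return "hold"
```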

Replication Controllers

There are two main replication controllers, which manage the Jenkins Master and Jenkins Slave pods respectively.

  1. Jenkins Master replication controller:

    The Master replication controller manages the execution of the Jenkins Master Docker image in the cluster. It configures one minion with the Jenkins Master instance. Throughout its lifecycle, it runs with a size of one most of the time.

  2. Jenkins Slave replication controller:

    The Slave replication controller manages the execution of the Jenkins Slave Docker image in the cluster. It configures one to many minions with slave instances. Throughout its lifecycle, it resizes the number of running slave instances as instructed by the Jenkins Cluster Manager (Stormy).

3. Private Docker Registry

The private Docker registry is a GCE instance that manages Docker images as a repository. It is exposed only to the internal nodes of the Jenkins cluster. All nodes can pull and push Docker images.
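Pushing an image to such a registry means tagging it with the registry's host and port first. A minimal sketch, where the registry hostname is a made-up placeholder for your internal GCE instance:

```python
import subprocess

# Hypothetical internal hostname; substitute your registry instance's address.
REGISTRY = "docker-registry.internal:5000"

def registry_tag(image, tag="latest"):
    """Fully qualified name for an image hosted in the private registry."""
    return f"{REGISTRY}/{image}:{tag}"

def push(image, tag="latest"):
    """Tag a local image for the private registry and push it."""
    target = registry_tag(image, tag)
    subprocess.run(["docker", "tag", f"{image}:{tag}", target], check=True)
    subprocess.run(["docker", "push", target], check=True)
```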

Set up a Kubernetes cluster locally

Instructions for setting up Kubernetes with GCE on your local machine.

Step-by-step guide

  • Download and install boot2docker
  • Clone the Kubernetes repo:
  • git clone
  • Run cluster/ from the kubernetes directory. This will fail since the cluster already exists, but it will create the necessary local files.
  • Copy keys from kubernetes-master to home folder:
  • SSH into kubernetes-master to confirm that you can connect:

    gcloud compute ssh --zone us-central1-b kubernetes-master

  • Go to /usr/share/nginx and execute chmod +r *, in case you are doing this for the first time.
  • Copy all certificates from the master to your host. Run these commands from the host:
      gcloud compute copy-files kubernetes-master:/usr/share/nginx/ca.crt ~ --zone <zone>
      gcloud compute copy-files kubernetes-master:/usr/share/nginx/kubecfg.crt ~ --zone <zone>
      gcloud compute copy-files kubernetes-master:/usr/share/nginx/kubecfg.key ~ --zone <zone>
  • Rename the keys:

    mv ca.crt
    mv kubecfg.crt .kubecfg.crt
    mv kubecfg.key .kubecfg.key
  • Run cluster/ list pods to confirm that everything is connected.
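Once the certificates are in place, you can also talk to the API server directly. A sketch assuming the early v1beta1 API and the certificate file names used above (the master hostname is an assumption):

```python
import json
import ssl
import urllib.request
from pathlib import Path

HOME = Path.home()

def pods_url(master_host="kubernetes-master"):
    """URL for the pods listing on the (assumed) v1beta1 API."""
    return f"https://{master_host}/api/v1beta1/pods"

def list_pods(master_host="kubernetes-master"):
    """List pods using the client certificate copied from the master."""
    ctx = ssl.create_default_context(cafile=str(HOME / "ca.crt"))
    ctx.load_cert_chain(str(HOME / ".kubecfg.crt"), str(HOME / ".kubecfg.key"))
    with urllib.request.urlopen(pods_url(master_host), context=ctx) as resp:
        return json.load(resp)
```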

Build & Deployment.

All the instructions about build and deployment have been added to the README in the git repository. Feel free to check it out and update it where necessary. I am sure many things are messed up.


This project certainly has a lot of room for refactoring and rearchitecture. A couple of things I feel would be good candidates:

  • Develop Stormy as a Jenkins plugin: Currently Stormy runs on a separate node and manages the Kubernetes master and slaves. I think we could redesign Stormy as a Jenkins plugin that can be installed on any Jenkins Master server. Stormy should control slaves only through the Kubernetes REST API, using the interim status of the Jenkins Master.