Pippo's blog

it is all about software development

Dockerized Jenkins

Cross-posted in nerds.petrofeed.com, Sep 15th 2015.

Docker meets Jenkins

A Continuous Integration (CI) server is something crucial for our day-to-day software engineering work. We can’t survive without it.

Jenkins was the foundation we chose a while back. It is extensible and offers the flexibility of a self-hosted server. It is not perfect software, and there are many other options out there for you to try out and decide what best fits your needs.

A recurring problem we faced with our Jenkins server was related to the fact that all its jobs shared the host configurations. This problem presented itself many times when an engineer changed a configuration or installed a new dependency on the server to support one specific job and ended up breaking others.

Another problem with the initial setup: it was quite difficult to recreate the server after something catastrophic happened, or to undo an error created in the scenario described above.

How do you get a better CI solution while keeping it simple, flexible and reproducible? I present to you: dockerized-jenkins.

Dockerized-Jenkins is an in-house solution we created to improve our CI infrastructure. The idea is to make the server reproducible and to manage job dependencies in isolation.

To approach this idea we decided to use Docker. The Jenkins server itself stops being a piece of software we install on an AWS instance running Linux and becomes a simple docker container. The container is now the piece of software that runs on an AWS instance. The official docker image provided by the Jenkins team (https://github.com/jenkinsci/docker) allows us to easily run the server without worrying about installation instructions. An extra cool thing is that we can even manage Jenkins plugins through it when building the docker image.
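As a minimal sketch, a custom image can extend the official one and bake plugins in at build time. The plugin-install helper has changed across versions of the official image, so treat the helper path and the plugins.txt convention below as assumptions to check against the image's README:

```dockerfile
# Hypothetical Dockerfile extending the official Jenkins image.
FROM jenkins

# Bake plugins into the image so the server is reproducible.
# The helper script path is an assumption; check the official
# image's README for the mechanism your version ships.
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
```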

One problem solved. The server can be managed programmatically and we can re-create as many copies as we want/need to. Starting over after a catastrophic event is just a matter of running the deployment scripts again.
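A redeploy could then be a short script along these lines (a sketch; the image and container names are hypothetical, not our actual deployment scripts):

```shell
#!/bin/sh
# Hypothetical redeploy: rebuild the Jenkins image and replace the
# running container, keeping job data in a named volume.
set -e
docker build -t our-jenkins .
docker rm -f jenkins 2>/dev/null || true
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  our-jenkins
```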

Starting the default Jenkins container as is makes all jobs run inside the container too, just like on a regular Jenkins server, only now everything runs within that docker container. We still want to isolate the jobs' dependencies from each other and avoid installing any of those dependencies in the Jenkins image. This is the other part of the problem that dockerized-jenkins solves.

We installed the docker client commands in the Jenkins container and configured it so it can connect to its host through the docker service socket. This allows any Jenkins job to trigger docker commands that will be executed on the host (although it doesn't need to be the same host; it could even connect to another docker host/cluster anywhere).
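One common way to wire this up (a sketch; the image and volume names are illustrative) is to bind-mount the host's docker socket when starting the Jenkins container:

```shell
# Mount the host's /var/run/docker.sock into the container so the
# docker client inside Jenkins talks to the host's docker daemon.
docker run -d --name jenkins \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v jenkins_home:/var/jenkins_home \
  our-jenkins
```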

The project provides two commands that can be used by any job (remember that job here refers to Jenkins jobs): run.on.docker and run.with.compose.

Those commands assume that a job has a Dockerfile in the root of the project workspace and that it is the base for a docker image from which everything else will run.
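For illustration, such a project Dockerfile might look like the following (the base image and build steps are assumptions about a hypothetical project, not ours):

```dockerfile
# Hypothetical project Dockerfile: the image all build commands run in.
FROM ruby:2.2
WORKDIR /app
# Install the project's dependencies inside the image, not on the host.
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
```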

run.on.docker's parameters are the image name and the command we want to run for the build process of our project. This is how we use it:

Usage:
     ./jenkins/bin/run.on.docker <image_name> <command>
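A job's build step then reduces to a single line, for example (the image name and command are illustrative):

```shell
# Run the project's test suite inside a container built from its Dockerfile.
./jenkins/bin/run.on.docker my-project "make test"
```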

run.with.compose is even simpler. Everything is managed through a docker-compose.yml file, which supports linking containers together and offers a better way of managing environment variables:

Usage:
    ./jenkins/bin/run.with.compose [--help] [docker-compose-file]
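A docker-compose.yml for a build that needs a database could be sketched like this (compose v1 syntax from that era; the service names, image tag and variables are illustrative):

```yaml
# Hypothetical compose file: the test container linked to a database.
test:
  build: .
  command: make test
  links:
    - db
  environment:
    DB_HOST: db
db:
  image: postgres:9.4
```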

The end result of all this is that job configurations are now easy: simple commands.

Of course we cannot get rid of all the complexity our builds require. What this approach does is give responsibility back to the project, meaning everything lives in a single place: the project repository. Once your project has a Dockerfile that fulfills all its dependencies and a docker-compose.yml that manages how it links to other containers, there are no more headaches.
