Pippo's blog

it is all about software development

Dockerized Jenkins

by filipesperandio

Cross-posted on nerds.petrofeed.com, Sep 15th 2015.

Docker meets Jenkins

A Continuous Integration (CI) server is something crucial for our day-to-day software engineering work. We can’t survive without it.

Jenkins was the chosen foundation a while back. It is extensible and provides flexibility as a hosted server. It is not perfect software, and there are many other options out there for you to try out and decide which best fits your needs.

A recurring problem we faced with our Jenkins server was that all its jobs shared the host's configuration. It presented itself many times when an engineer changed a configuration or installed a new dependency on the server to support one specific job and ended up breaking others.

Another problem with the initial setup: it was quite difficult to recreate the server when something catastrophic happened, or to fix an error created in the scenario described above.

How do you get a better CI solution while keeping it simple, flexible and reproducible? I present to you dockerized-jenkins.

Dockerized-Jenkins is an in-house solution we created to improve our CI infrastructure. The idea is to make the server reproducible and to manage job dependencies in isolation.

To approach this idea we decided to use Docker. The Jenkins server itself stops being a piece of software we install on an AWS instance running Linux and becomes a simple Docker container. The container is now the piece of software that runs on an AWS instance. The official Docker image provided by the Jenkins team (https://github.com/jenkinsci/docker) lets us easily run the server without worrying about installation instructions. An extra cool thing is that we can even manage Jenkins plugins through it when building the Docker image.
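As a sketch, a custom image based on the official one can bake the plugins in at build time. The plugin-install helper and its paths have changed across versions of the official image, so treat the script name below as an assumption and check the image's README for your tag:

```dockerfile
# Sketch only: the plugin helper script and paths vary by image version.
FROM jenkins
# plugins.txt lists one plugin id (and optionally a version) per line.
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
```

Building this image gives you a Jenkins server whose plugin set is version-controlled alongside the Dockerfile.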

One problem solved. The server can be managed programmatically and we can re-create as many copies as we want or need. Starting over after a catastrophic event is just a matter of running the deployment scripts again.

Starting the default Jenkins container service as is makes all jobs run inside the container too, same as a regular Jenkins server, only now everything runs within that Docker container. We still want to isolate jobs' dependencies from each other and avoid installing any of those dependencies in the Jenkins image. This is the other part of the problem that dockerized-jenkins solves.

We installed the Docker client commands in the Jenkins container and configured it so it can connect to its host server through the Docker service socket. This allows any Jenkins job to trigger Docker commands that will be executed on the host (although it doesn't need to be the same host; it could even connect to another Docker host/cluster anywhere).
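For illustration, deploying the Jenkins container itself with the socket shared could look like this docker-compose fragment (version 1 format, current at the time; the image name and host paths are placeholders):

```yaml
# Hypothetical compose entry for the Jenkins server container itself.
jenkins:
  image: my-dockerized-jenkins   # placeholder: image built on top of the official one
  ports:
    - "8080:8080"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock   # lets jobs drive the host's Docker daemon
    - ./jenkins_home:/var/jenkins_home            # persists server and job configuration
```

The socket bind-mount is the key line: the docker client inside the container then talks to the host's daemon, so containers started by jobs are siblings of the Jenkins container, not children.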

The project provides two commands that can be used by any job (remember that job here refers to Jenkins jobs): run.on.docker and run.with.compose.

Both commands assume that a job has a Dockerfile in the root of the project workspace and that it is the base for the Docker image everything else will run in.

The parameters to run.on.docker are basically the image name and the command we want for the build process of our project. This is how we use it:

     ./jenkins/bin/run.on.docker <image_name> <command>

run.with.compose is even simpler. Everything is managed through a docker-compose.yml file, which supports linking containers together and has a better way of managing environment variables:

    ./jenkins/bin/run.with.compose [--help] [docker-compose-file]
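As an illustration of what a job can provide, a docker-compose.yml like the following (service names, commands, and the database image are made up) builds the workspace's Dockerfile and links a database container to it:

```yaml
# Hypothetical job compose file: "ci" is built from the Dockerfile in the workspace root.
ci:
  build: .
  command: ./scripts/run-tests.sh   # placeholder for the project's build/test command
  links:
    - db
  environment:
    - DATABASE_HOST=db
db:
  image: postgres:9.4
```

With a file like this checked into the repository, the Jenkins job itself reduces to a single run.with.compose invocation.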

The end result of all this is that job configurations are now easy. Simple commands.

Of course, we cannot get rid of all the complexity our builds require. What this approach does is give the responsibility back to the project, meaning everything is in a single place: the project repository. Once your project has a Dockerfile that fulfills all its dependencies and a docker-compose.yml that manages how it links to other containers, there are no more headaches.


What Should I Say?

by filipesperandio

It was about a year ago that a colleague raised this question over lunch: “What should I say when I'm on a date and the guy in front of me won't stop bragging about everything?”. Maybe those weren't her exact words, but you get the idea. We started making jokes about what we could say in those awkward moments, and one of the group's first thoughts was “We should make an app for that!”, so we did…

We were trying out and learning some different technologies (Ionic and Firebase); the app idea was basically an excuse to apply some of that learning to something more concrete.

Two days later everybody was doing something else. Anybody who had been even a little involved was already focused on something else, definitely more important, myself included… but we had something functional.

A few weeks ago, I was searching my computer for things I had left behind… I hate having incomplete things lying around, so I polished what we had done in those couple of days and decided to publish it to the Play Store.

You can also find it on-line here: http://www.what-should-i-say.com

I hope you have fun looking at those phrases, and add your own jokes so everybody can laugh together!!!


One more thing: the code is very simple and doesn't pay much attention to good design practices, but you can have fun with it anyway on GitHub.


Easy docker clean

by filipesperandio

UPDATE (2015-09-07): small improvements incorporated into the function below.

I have been working a lot with Docker recently and I love it. The problem was keeping it from burning through my hard drive with lots of image and container files.

After a few rounds of tuning a script to clean up the mess, this is what works best for me. I thought it could help more people:

docker_clean () {
    local zombie_containers=$(docker ps -a -q | grep -v "$(docker ps -q | xargs | sed 's/ /\\\|/g')");
    local zombie_images=$(docker images --no-trunc | grep none | awk '{print $3}');
    docker rm -v ${zombie_containers} 2> /dev/null;
    docker rmi ${zombie_images} 2> /dev/null;
}
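The less obvious part is the first line: it turns the list of running-container IDs into a grep alternation pattern so that grep -v keeps only stopped ("zombie") containers. In isolation, with made-up IDs, the transformation works like this:

```shell
# Join a whitespace-separated ID list into a \|-separated grep pattern.
ids="abc123 def456 9f9f9f"
pattern=$(echo "$ids" | xargs | sed 's/ /\\\|/g')
echo "$pattern"                                   # abc123\|def456\|9f9f9f
# Lines matching any running ID are filtered out; only the others remain.
printf 'abc123\ndef456\nzzz999\n' | grep -v "$pattern"
```

One caveat of this approach: when no containers are running at all, the pattern is empty, and an empty pattern matches every line, so grep -v outputs nothing and the function skips container removal in that round.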



Being a generalist sucks

by filipesperandio

Yeah, it sucks! …and now that I have your attention, I can say it is also cool, and I am pretty sure I could not have invented myself too differently…

Over the past few years I have worked on many different fronts… maybe not so many, but still. I don't consider myself a specialist in any of those areas, and this is where it sucks. More than once I caught myself thinking about writing a new post here and getting demotivated the next second with the thought that I don't know the topic well enough to write about it.

Around a year and a half ago, one of the leads on my team asked me what I considered myself a rock star at… crickets… I wanted to say I was an AngularJS rock star, because I was very excited by that framework back then, but I couldn't: I hadn't worked with it enough to say so, although I was competent enough. I wanted to say I was an Android expert, but my Android developer experience had started only a few months earlier, and I already knew that even being a Java developer for many years didn't make me an Android expert.

Ha! Java, I know a lot of Java… well, I know Java 6 very well, and, at that point in time, Java 8 was being released and I didn't know sh*t about it :( I was very frustrated, but at the same time it made me think about some cool stuff I had already worked on, and I was glad I have had the opportunity to touch a few different things around technology.

I started my IT life as a tech support guy, made my way to infrastructure services, and even got my Linux certification at some point back then… I pivoted my interests into development and started as a tester. I was very happy to apply my previous learnings in that role, and after that I finally started as a developer. I got to work with a few different flavors of Java, Ruby, Ruby on Rails, JavaScript, Android, Groovy, Bash - oh BASH, I still love to write scripts in BASH - and for every different language a new ecosystem had to be learned and mastered… So I figured I didn't have the time to become a specialist, but I was happy that I had the opportunity to experiment with different things.

I still don't consider myself a specialist, but I am currently working on some things that I am enjoying, and so I will try to write more about them, because being a generalist sucks, but it is not so bad! :)


AngularJS: directives

by filipesperandio

A fair number of web developers who start using AngularJS have previous experience with jQuery, and a jQuery-like solution is what comes first to their minds when they see a problem that requires changes to DOM elements. This is still true for me as well.

What we need to keep in mind is that Angular provides a very interesting way of manipulating DOM elements - a reusable and sophisticated way. They are called directives.

Why do I need directives?

The first motivation is that attaching events to elements that don't exist makes your app not work the way you think it should =)

Hold on! What? Non-existing elements?

As we make use of Angular bindings, we let the framework create and destroy DOM elements - elements inside an ng-repeat, for example, are not visible to JS code at $(document).ready() time. Sound familiar?

A directive is a component triggered at the time your DOM element becomes part of the page, which is then the right time to attach event handlers or do further processing related to that element.

If you are a little scared by the directives documentation and overwhelmed by all the things they can handle, take a look at a simplified version first, and then learn about them in more depth: how to use them to expand templates, do animations, listen to events…