Setting up your applications may get really messy - especially on Linux: different applications have different configuration approaches, and they keep their binaries & data in different file-system locations (these differ because apps may originate from different Linux distributions). Once you've set things up, it's pretty hard to restore the previous state, especially if you've made some other changes in the meantime (for instance - installed something else). That's why provisioning tools like Puppet, Chef, Ansible or Salt have gained so much popularity recently - but even with them, creating the cookbooks / recipes may still be very challenging: Linux is famous for not being fool-proof - the system itself won't prevent you from making a mess.

There's a new sheriff in town

Fortunately, there's a new tool that may turn out to be a true game-changer for Linux DevOps: Docker - an open source platform for "packaging applications & their dependencies" in a lightweight way.

What the hell does it mean?

Docker lets you isolate & execute separate applications (that may run on the same or different machines) on the Linux platform. What's truly important:

  • it uses low-level kernel mechanisms to do that, without the need for resource-consuming virtualization (for more details - check here)
  • juggling with Docker images (saved snapshots) & containers (isolated, running application instances) is ultra-fast when compared with fiddling with virtual machine images (a quick illustration follows this list)
  • containers are very flexible - one may contain several applications or just one, & you can run as many containers as you want at the same time
  • the concept of layers stacked on top of base images makes it possible to construct your applications conveniently (every operation can easily be UNDOed/REDOed - by applying or removing layers, without any impact on the layers beneath)
  • it perfectly supports the 'same recipe for all environments' paradigm
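
Two commands make that image/container distinction tangible - a quick illustration (both are standard docker subcommands):

    sudo docker images    # lists the images (saved snapshots) stored locally
    sudo docker ps        # lists the containers that are currently running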

How does it work in practice?

Or at least - how did it work for me:

  1. I've specified Docker as a provisioner in my local Vagrant environment - this feature has been available since version 1.6. I believe I don't have to explain what Vagrant is and why it's an absolute MUST-HAVE for everyone who does x-platform development.
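
     For illustration - a minimal Vagrantfile sketch of that setup (the box name & the container image are hypothetical placeholders; the provisioner API itself is standard Vagrant):

    # Vagrantfile (sketch) - Docker as a Vagrant provisioner
    Vagrant.configure("2") do |config|
      config.vm.box = "precise64"          # hypothetical base box name
      config.vm.provision "docker" do |d|
        d.pull_images "ubuntu"             # pre-fetch a base OS image
        d.run "redis"                      # hypothetical: bring up a container on 'vagrant up'
      end
    end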

  2. Docker automatically downloaded a set of OS images for my Linux distribution (the one running on the hypervisor used by Vagrant).

  3. Now I can create my containers (isolated, running applications) in at least two ways:

    • either by making a 'prescription' (Dockerfile) with the list of commands (written in a very simple DSL) to execute on top of a bare Linux image <- this seems to be the preferred way (and it makes sense; see the Dockerfile sketch below)

    • or by creating a fresh container with a running terminal, so you can attach & start executing commands on your own (to reach the target state you're aiming for)

    sudo docker build -t <image_name> .
    # or
    sudo docker run -i -t <image_name> /bin/bash
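
     For the first path, the 'prescription' might look like the following minimal, hypothetical Dockerfile sketch - note how each instruction produces one layer on top of the previous one, which is exactly what makes the UNDO/REDO behaviour mentioned earlier possible:

    # Dockerfile (sketch) - every instruction below adds one layer
    FROM ubuntu                                        # base OS image (the bottom layer)
    RUN apt-get update && apt-get install -y nginx     # a layer with the installed package
    EXPOSE 80                                          # port to be mapped with -p at runtime
    CMD ["nginx", "-g", "daemon off;"]                 # keep a foreground process, so the container stays alive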

There are a few important points to remember:

  1. If all the commands in your Dockerfile (or started via the docker run subcommand) finish their execution (because they are not daemons), the container will stop & disappear from the running container list!

  2. While the container is running, you can quite conveniently (among others - see the combined sketch after this list):

    • check the output of commands within a container (docker logs)
    • attach to a running container (docker attach)
    • list the actual changes in the container's filesystem (remember - containers don't see each other's changes!) (docker diff)
    • expose (& map) ports from within the container (for instance - if you're setting up an application that publishes endpoints inside the container) (EXPOSE inside the Dockerfile, the -p switch for docker run)
  3. If you create your container by manually executing commands (run), you need to save the image on your own - first find the container in the running container list, then commit it:

    sudo docker ps
    sudo docker commit <container_name> <new_image_name>
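
     To tie points 1-3 together, here's a hypothetical end-to-end session (my_app & the container id are placeholders; the subcommands themselves are standard Docker CLI):

    # start a container in the background, mapping container port 80 to host port 8080;
    # the process inside must keep running in the foreground, or the container will stop
    sudo docker run -d -p 8080:80 my_app
    sudo docker ps                                # find its id on the running container list
    sudo docker logs <container_id>               # check the output of its commands
    sudo docker attach <container_id>             # attach to the running container
    sudo docker diff <container_id>               # list filesystem changes made inside it
    sudo docker commit <container_id> my_app:v2   # save the current state as a new image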

Using all the options above ...

... I can:

  • compose my local environment using any combination of application components - adding / removing a running component is as simple as bringing up / killing its container, and once I shut one down, it disappears completely from the filesystem
  • create independent, isolated application containers in a very convenient way (UNDO/REDO) without time-consuming virtual image fiddling & unnecessary OS bloat
  • experiment a lot more, without a significant risk of being forced to start over again (because this Chef recipe is outdated / broken ...)

... and while I do all of that, my OS remains rock-solid: I won't break anything, I can easily withdraw any changes I make, and the setups of different containers don't impact each other.

As a result - within a few weeks Docker has earned a well-deserved place in my core set of software development tools: I can't imagine doing my JVM-based development (which I do mainly on Linux) without it anymore.
