Docker is The Thing these days. Pretty much everyone has heard about it - how lean, lightweight & useful it is. Most likely you have seen the eye-catching logo, gone through the very approachable tutorial, watched some basic demos. But have you tried to look through the hype & verify whether it can actually be useful for you? No? Here's your opportunity - let's break some myths & look for the actual value.

Obvious stuff is ... obvious

I expect you to know the basic marketing pitch for Docker:

  • it is "the virtualization 2.0" - significantly reduces the overhead on each "VM" (which is not really a VM anymore), removes the HyperVisor (& its license costs!) itself, moves "virtualization" to the level of Operating System (as it's built into the Linux kernel) to avoid any redundant intermediate layers
  • container definition is minimalistic & extremely simple - significantly easier than the approach introduced by automated provisioning tools like Puppet or Chef
  • it provides the platform of exchange for the images (Docker Hub) where people can freely publish & exchange their "recipes"

Yadda, yadda, yadda. But why should developers care about bloody virtualization? Regardless of whether it's 1.0, 2.0 or even 3.0 ... Actually, there may be a point.

The bright side

Here's why I think you may get interested in Docker:

Game Saved

When I think about Docker, the first thing that comes to my mind is old-school arcade gaming. When I was struggling to beat a non-trivial part of a game, I would save the game state every now & then (usually before a challenging encounter that required luck or focus) & load it after each failed attempt - to reset my state as if nothing bad had happened.

This is how Docker works - each change to the container is a layer on top of the image (save game), with a shared image at the bottom as a foundation. You shape & tune layers in isolation, but whenever you're not happy with the result, you just dump the layer you don't like, with a guarantee that nothing else is affected (load game). That's a guarantee you didn't have when you used apt, Chef or good ol' make install.
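Here's what a "save & load game" session could look like in practice - a minimal sketch, with made-up image & container names:

    # "save game": start an experiment from a known-good base image
    docker run -it --name experiment ubuntu:14.04 bash
    # ... poke around inside, install stuff, break stuff ...

    # happy with the result? persist it as a new layer (a new image)
    docker commit experiment my-stack:attempt-1

    # not happy? "load game": throw the container away - the base image is untouched
    docker rm -f experiment
    docker run -it --name experiment ubuntu:14.04 bash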

Isolation is for real

Docker container separation is for real. Each of your applications (solution components) by design doesn't see the other ones at all. At first glance it may seem like a burden & an actual problem to overcome, but in fact it forces you to learn the details of the interactions between components & the environment around them:

  • which ports should be open?
  • what files should be accessible?
  • are these REALLY decoupled components that can run on separate machines (as they share nothing locally)?

etc. One of the very positive side effects is that such isolation strongly pushes you to separate code from data (storage, etc.) - code containers are disposable & can easily be redeployed in a repeatable way, but it doesn't work this way with data, of course ...
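In practice, every interaction a container has with the outside world has to be declared explicitly. A minimal sketch (the port numbers, paths & image name are made up for illustration):

    # the container sees nothing unless you declare it:
    docker run -d --name web \
      -p 8080:80 \
      -v /srv/app-data:/var/lib/app \
      my-web-image
    # the only open port: host 8080 -> container 80
    # the only shared files: /srv/app-data (mounted as /var/lib/app)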

Immutability

I'm a dedicated advocate of immutable environments & this is how Docker really works. To update or correct your environment, you apply the deltas (layers) or re-create the container using an updated prescription (Dockerfile). If you don't mess it up (it's always possible to mess stuff up if you try hard ...), it remains 100% transparent & traceable.
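A Dockerfile is exactly that prescription - each instruction becomes one traceable, immutable layer. A minimal sketch (the packages & paths are illustrative):

    # the whole environment as code - versioned next to your sources
    FROM ubuntu:14.04

    # each RUN/COPY instruction produces one cached, immutable layer
    RUN apt-get update && apt-get install -y nginx
    COPY ./site /usr/share/nginx/html

    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]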

Why is it so important? I think you've all heard the Docker slogan: "It works on your environment? Ship your environment!" - it is actually possible & ... it is cool:

  • you are able to run software in pretty much the same conditions on your dev machine & in production
  • you're able to move the delta between the environments in a way that preserves the correspondence from the point above - in both directions: to deploy a change & to help with production error troubleshooting (see the sketch below)
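
"Shipping your environment" boils down to moving images around, e.g. through a registry. A minimal sketch (image name & tag are made up):

    # on the dev machine: bake the environment into an image & push it
    docker build -t myorg/myapp:1.2.3 .
    docker push myorg/myapp:1.2.3

    # in production: pull & run the very same bits
    docker pull myorg/myapp:1.2.3
    docker run -d myorg/myapp:1.2.3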

Let's play LEGO

Last but not least - the thing I like most about Docker. Due to the strong emphasis on sharing as much as possible (of the base image, I mean) between containers (Union File System, etc.), having multiple containers running at the same time (on a single machine) is not an insane idea anymore (as the overhead of each additional container is rather negligible).

What is more, it's quite nicely manageable, thanks to tools like docker-compose, which help you with the basic definition & orchestration of several interacting containers. So what? FINALLY, I'm able to run whichever components of a bigger solution I'd like to run on my machine. Only the components I need at this very moment (e.g. to reproduce a bug). And I can change their composition in a few seconds, with minimum inertia.
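A minimal docker-compose.yml sketch (service names & images are made up for illustration):

    # docker-compose.yml - the whole LEGO box in one file
    web:
      build: .
      ports:
        - "8080:80"
      links:
        - db
    db:
      image: postgres:9.4

Then "docker-compose up web" brings up the web container together with its linked db, while "docker-compose up db" starts the database alone - you pick just the bricks you need right now.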

This isn't cool. This is OMFG-cool. No more knitted & darned 'frankensteiny' environment imitations - inept at pretending they resemble the actual environment & grinding the machine to a halt with their sheer weight.

Problems Docker doesn't solve ...

Unsurprisingly, Docker is not a panacea & it won't heal Ebola (yet). There are a few other things it won't do either:

The Curse of F5

Regardless of how easy it is to set up a running application using Docker, nothing will beat the "magical F5" (like "Debug" or "Run" in Visual Studio). The convenience gap between so-called "developer's debug running" & a Docker-deployed environment remains. The first option is still faster & more intuitive for devs, so no matter how hard you try, there will be some who resist using Docker. And as long as we live with this gap (which is becoming more of a psychological barrier than a physical one), we won't get rid of some problems with duplicated configuration, troublesome error replication, etc.

No tool is idiot-proof

Docker can't do amazing stuff on its own - it's always the person with the keyboard who drives the actions. And you can do plenty of stupid things with a low-level tool like Docker, pretty much completely negating any value it could bring.

Some examples:

  • non-Dockerfiled container prescriptions (hand-tuned containers that can't be reproduced)
  • putting everything in a single container
  • embedding state within container (DB)
  • bloating image size due to a lack of knowledge of how layering works (see the sketch after this list)
  • force-prying containers' isolation open
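The layer-bloat case deserves an illustration: files removed in a later layer still occupy space in the earlier layer they were created in. A Dockerfile fragment as a sketch (the packages are illustrative):

    # bloated: the apt cache lands in its own layer & stays there forever,
    # even though a later layer "deletes" it
    RUN apt-get update
    RUN apt-get install -y build-essential
    RUN rm -rf /var/lib/apt/lists/*

    # leaner: create & clean up within a single layer
    RUN apt-get update \
        && apt-get install -y build-essential \
        && rm -rf /var/lib/apt/lists/*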

Docker for Windows is a mirage (for now)

Microsoft seems really hooked on the idea of lightweight containers & the effects are clearly visible - Windows Nano Server is around the corner, Docker Toolbox is available for Windows. The point is that "Docker for Windows" as it's available now is not what you may have imagined ...

  1. container support is not built into any RTM Windows kernel for now (that's coming with Windows Server 2016)
  2. Docker Toolbox IS running Docker containers on Windows, but ... within a tiny, specialized Linux distribution (named boot2docker) running on VirtualBox!

So, yes, if you're using a Windows machine you can run programs using Docker, but only if those programs can run on Linux. And only if you accept the overhead of running this additional, traditional VM (ok, it's roughly ~300 MB, so it's not that bad).
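Under the hood, Toolbox relies on docker-machine - a sketch of what happens (the machine name "default" is the Toolbox convention):

    # provision the tiny boot2docker Linux VM inside VirtualBox
    docker-machine create --driver virtualbox default

    # point the Windows Docker client at the daemon inside that VM
    docker-machine env default

    # from now on, every "docker ..." command talks to Linux, not to Windows
    docker ps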

Final thoughts

The truth is that I'm getting addicted to Docker. Or, to be precise, to docker-compose (mentioned above). However, the truth is also that the majority of the development I do nowadays (Elixir, Scala) is cross-platform, so the limitations of Docker on Windows ain't a real deal-breaker for me (as they are for native Win devs).

It just looks too good (as another way to boost developers' agility & shorten development cycle time) to dismiss it just like that. Not today & maybe not tomorrow, but somehow I'm convinced I'll try Docker in the production environment of a commercial endeavor soon enough.
