IT projects are what I do for a living, and I have to admit I’ve done quite a few. What’s more, I’m proud to say that whenever I was somehow involved in the technical architecture, some kind of automated build & deployment was set up: I may be a bit dogmatic here, but for me a project with a fully manual build & deployment is more of a student lab project than professional work. Anyway, even when we decided to spend more serious effort on that, we never got really close to something called "Infrastructure as Code" (IaC).

Why … What part did we miss?

Most likely - the hardest one :) Everything below the level of the application itself and its custom configuration. To be precise: OS installation and configuration, non-artifact setup (for instance: web server, RDBMS, network), etc. There were quite a few reasons:

  1. We had a strong resistance against virtualization (which is a key factor in keeping all levels of environments 100% corresponding) - mainly for three reasons: the performance “tax”, license costs, and the need for full regression testing when moving from a physical to a virtual machine.
  2. We usually lacked deep DevOps skills when it came to automating OS configuration, as deployment automation was usually done by hardcore Devs only. The market lacks experienced DevOps engineers (at least in Poland), so these roles are usually filled by the most junior people (and treated as a boring pain in the … neck).
  3. We never really had the developers’ commitment - they were so accustomed to their traditional way of working that they were very reluctant to change. And sometimes we stuck to our “Ivory Tower” and didn’t see some tiny details that were key game-changers / showstoppers for them.

But … is that really a problem? What’s the benefit of doing IaC?

Based on my experience, it’s the only way to achieve the following benefits:

  • Environment creation agility - you can spawn a new environment whenever you like, involving as wide a set of apps as you like, and dump it as soon as you like.
  • 100% rollout reliability - if you’re updating the environments at least once a day and the “human factor” is limited to “fire and forget”, all your deployment mechanisms are forged and hardened in “the heat of battle”, so production deployment is reduced to a boring, trouble-free routine.
  • No “tribal” knowledge - no one has to remember every damn stupid configuration step created 10 years ago. You can re-create your environment from scratch within seconds, without desperately asking for help (only to find out that no one is able to aid you…).
  • Easier troubleshooting, as your local environment is as close to the production one as possible.

And the gain multiplies with the increasing complexity (heterogeneous OSes and platforms) of your environments.

What is the key to doing real IaC, then?

  1. Reasonable virtualization - I’m giving a go to Vagrant (a virtualization abstraction layer that works great with provisioning software), and so far I’m very happy with the outcome.
  2. Proper dependency management - not only at runtime but also at build time, so Continuous Integration runs efficiently and the build cycle is as concise as possible.
  3. Abstracted configuration - you need to strictly separate the variable part of the configuration from the non-variable part. The non-variable part is the one that doesn’t really change between environments (so it is a code artifact, not a true configuration artifact).
  4. Automated environment provisioning - using tools like Chef or Puppet; they are absolutely critical to guarantee 100% compatibility between environment setups and to eliminate room for human error.
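
The “abstracted configuration” idea from point #3 can be sketched in a few lines of Ruby. This is a toy illustration, not tied to any particular tool; all the keys and values (`db_host`, `pool_size`, etc.) are made up for the example:

```ruby
# Variable part: the ONLY things allowed to differ between environments.
# (Hostnames and sizes below are illustrative.)
ENVIRONMENTS = {
  "dev"  => { db_host: "localhost",      pool_size: 5  },
  "prod" => { db_host: "db.example.com", pool_size: 50 },
}

# Non-variable part: identical everywhere, so it ships with the code
# as a code artifact, not a per-environment configuration artifact.
COMMON = { db_port: 5432, db_driver: "postgresql" }

# Resolving a full configuration is then a mechanical merge,
# with no hand-edited files per environment.
def config_for(env)
  COMMON.merge(ENVIRONMENTS.fetch(env))
end

puts config_for("dev")
```

The point of the split is that a new environment only ever adds one small entry to the variable part; everything else is already code.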

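To show how points #1 and #4 fit together, here is a minimal Vagrantfile sketch pairing Vagrant with Chef Solo provisioning. The box name, IP address, and the `webserver` cookbook are illustrative assumptions, not a drop-in setup:

```ruby
# Vagrantfile - a minimal sketch of Vagrant + Chef Solo.
Vagrant.configure("2") do |config|
  # The same base OS image on every developer machine and CI agent.
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", ip: "192.168.33.10"

  # All OS-level setup lives in versioned cookbooks, not in anyone's memory.
  config.vm.provision "chef_solo" do |chef|
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "webserver"   # hypothetical cookbook
    # Only the variable part of the configuration is passed in here.
    chef.json = { "webserver" => { "port" => 8080 } }
  end
end
```

With something like this in the repository, `vagrant up` builds the whole environment from scratch, and `vagrant destroy` dumps it - which is exactly the “spawn and dump at will” agility described above.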
Points #1 and #4 are by far the hardest for us to achieve at the moment. Keep your fingers crossed - we’re quite likely to give it another go in the near future.
