No two dev machines are completely alike: the same software installed (& uninstalled) in the same order, the same security patches applied, the same OS settings, the same IDE configuration. And that's great of course - developers are expected (hopefully in your company as well) to:

  • search for new "toys" - tools / libraries / add-ons / anything that could aid their work & improve their productivity
  • learn new stuff that's somehow related to their projects, as it may turn out to be fully applicable in their work
  • quickly switch between projects that require completely different sets of tools & machine setup

You can easily imagine the consequences:

  1. Preparing a new developer machine requires a sequence of manual "magical actions" that has never been written down (aka tribal knowledge) & has to be re-discovered every single time
  2. Even slight differences in installed software may cause project artifacts to behave differently (because, for instance, some people have an IIS bug patched & some still don't)
  3. Sometimes different versions of the same software can even collide with each other (when installed together on the same machine)

What is more, such a snowflake (one-of-a-kind, unique) dev machine setup differs a lot from the official test environments, which by default should at least be free of pure dev tools (like an IDE) - and that alone can make a lot of difference as well.

This so-called "works on my machine" problem is addressed by modern DevOps with the following techniques:

  1. separate, isolate & virtualize your test environment
  2. use clear & widely known rules for managing your ISO images
  3. automate software provisioning (even down to infrastructure level!)
  4. separate configuration from artifacts & make sure that all tiers of environments use the same artifacts
  5. automate your deployment

By "test environment" I don't mean just the official, shared test environment - you can apply these mechanisms to your dev workstation as well:

  • you keep the dev tools you use all the time (IDEs, debuggers, profilers, etc.) installed on your host machine
  • you run your compiled artifacts in a locally virtualised development environment (the guest), with some automation to install all the additional, necessary crap - like the dedicated OS, RDBMS, MOM, etc.

The past few years have brought several tools & techniques to aid such an approach, but until now I hadn't seen a complete, working setup for end-to-end Windows development (dev on Windows, run on Windows). Fortunately, it seems that things have changed recently ...

Packer + Chef + Vagrant + VirtualBox

A perfect match made in dev heaven:

  1. Packer enables you to create an image of a ready-to-use virtual machine with all the necessary software pre-installed & pre-configured. Packer has been known for some time already, but now, with packer-windows, it's finally possible to use Windows templates (based on Windows ISO images) to create images with Windows pre-installed. It's a game changer.
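
     To give you an idea, here's a minimal sketch of such a template in Packer's JSON format - the ISO path, the checksum placeholder & the answer file below are illustrative only (packer-windows ships complete, ready-made templates):

        {
          "builders": [{
            "type": "virtualbox-iso",
            "guest_os_type": "Windows2012_64",
            "iso_url": "./iso/windows_server_2012_r2.iso",
            "iso_checksum_type": "md5",
            "iso_checksum": "<md5-of-your-iso>",
            "communicator": "winrm",
            "winrm_username": "vagrant",
            "winrm_password": "vagrant",
            "winrm_timeout": "4h",
            "floppy_files": ["./answer_files/2012_r2/Autounattend.xml"],
            "shutdown_command": "shutdown /s /t 10 /d p:4:1"
          }],
          "provisioners": [{
            "type": "powershell",
            "scripts": ["./scripts/install-dev-stack.ps1"]
          }],
          "post-processors": [{
            "type": "vagrant",
            "output": "windows_2012_r2_virtualbox.box"
          }]
        }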

  2. Chef is my favourite provisioning tool & it works great with Packer & Vagrant, but to be honest you don't have to use it - PowerShell scripts are more than enough to install whatever you want in headless mode (without any UI interaction). This way you're not limited to a bare, fresh image with just the OS installed - you can add any software you'd like: SQL Server, ActiveMQ, the Erlang VM - whatever you want (and can script the installation for).
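
     For example, a hypothetical install-dev-stack.ps1 (the installer paths & package names below are made up) could boil down to:

        # Headless installs - no UI interaction anywhere.
        # /qn = quiet, no UI; /norestart avoids surprise reboots mid-provisioning.
        Start-Process msiexec.exe `
            -ArgumentList '/i C:\installers\erlang_vm.msi /qn /norestart' -Wait

        # Or bootstrap Chocolatey once & script the rest of the stack through it:
        Set-ExecutionPolicy Bypass -Scope Process -Force
        iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
        choco install sql-server-express -y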

  3. VirtualBox is the only OSS hypervisor I know. Yes, you don't have to pay for it (this doesn't apply to guest OSes, but that's a different part of the story, right?). I've been using it for a few years already & I've never had a real reason to complain - solid stuff that's widely supported.

  4. Vagrant - I've praised Vagrant so many times that I'll skip that part for once :) A MUST-USE. Works perfectly on Windows - and has since forever.
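
     To close the loop, a minimal Vagrantfile for such a Windows guest could look like this (the box name is whatever you register the Packer-built box under & the provisioning script path is hypothetical):

        # Vagrant's Ruby DSL; "2" is the configuration format version
        Vagrant.configure("2") do |config|
          config.vm.box = "windows-2012-r2-dev"   # the Packer-built box
          config.vm.guest = :windows              # tell Vagrant it's a Windows guest
          config.vm.communicator = "winrm"        # Windows guests speak WinRM, not SSH
          config.vm.network "forwarded_port", guest: 80, host: 8080   # e.g. IIS
          # layer the project-specific stack on top of the base image:
          config.vm.provision "shell", path: "provision/setup-project.ps1"
        end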

Why does this combo make such a difference?

Because now you don't need pre-made Windows Vagrant boxes (and getting those was always a problem ...)! Packer can create a box for you, based on the ISO you provide: if you have an MSDN subscription, you have plenty of those available; otherwise you can still use a trial version.
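
In practice, the whole build-and-run loop is roughly two tools' CLIs (the template & box names follow the sketches above):

    packer build windows_2012_r2.json
    vagrant box add windows-2012-r2-dev windows_2012_r2_virtualbox.box
    vagrant init windows-2012-r2-dev
    vagrant up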

What is more - using this approach:

  • you use EXACTLY the same approach to set up your dev & test environments (you can use the same provisioning scripts & the same ISO images) - effectively eliminating the "works on my machine" syndrome
  • you can build up the repeatability & automation of deployment procedures from the very beginning of development
  • re-creating the vanilla environment is a blitz, & if you repeat it 10 times under the same conditions (same artifacts, same configuration), you'll get exactly the same result 10 times
  • your dev machine is burdened only with the software that is really, really required for the current project. For instance, if project A requires a DB2 instance but project B requires MySQL (see the sketch after this list):
    • when you work on A, you use the image with only DB2 available
    • when you work on B, you use the image with only MySQL available
  • the actual (virtualised) test environment is free of workstation-specific software that would never appear on a production server - like Microsoft Office
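
That per-project isolation boils down to each repository carrying its own Vagrantfile, pointing at a differently provisioned box (the box names below are hypothetical):

    # project-a/Vagrantfile - the DB2-flavoured environment
    Vagrant.configure("2") do |config|
      config.vm.box = "win2012r2-db2"
    end

    # project-b/Vagrantfile - the MySQL-flavoured environment
    Vagrant.configure("2") do |config|
      config.vm.box = "win2012r2-mysql"
    end

Switching projects is then just a matter of "vagrant halt" in one directory & "vagrant up" in the other.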

Gather up the facts

All this effort just to be able to test the applications you develop (which is work you already do)? Oh yes. And believe me, you'll run into some additional trouble on the way, & as a result you're merely getting out of the "clueless F5 world" - but this is the future, an inevitable one. Because of:

  • ever-increasing complexity of applications & their integration
  • the rise of in-the-cloud dev/test environments
  • the need for low coupling & clear boundaries between applications (I need to install your components because my components call them, but I have no intention of learning anything about deploying them)

If you haven't tried this approach yet, it's high time to make the first move.
