Shortening the delivery cycle and improving the overall delivery pipeline get more & more love nowadays. Delivering value more frequently, in a more predictable manner, while remaining truly transparent - who could resist that? That's why 'DevOps' & 'Continuous Delivery' are trending so high in 2014.

Quite recently I had a discussion with a more senior colleague of mine - we were discussing a particular scenario (one with some issues) in the financial market. We didn't have a first-person perspective on this particular case, but we had access to some materials that shed some light on the whole story - it didn't take me much time to identify the more or less obvious flaws in what those particular guys were doing:

  • lack of versioning approach for artifacts
  • parameterization & configuration managed manually (& prepared separately each time it was needed)
  • manual, xcopy-style deployments (whooooah!)
  • ad-hoc, rare work-stream syncs
  • application architecture heterogeneity
  • non-existent test automation
  • etc.
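
To make the first flaw concrete: the usual cure is to stamp every artifact with a unique, traceable version derived from the VCS revision and a CI build counter. Here's a minimal sketch - my own illustration, not something from that case; the naming scheme and the assumption of a git checkout are mine:

```python
# Sketch: deterministic artifact versioning - every build gets a version that
# ties the artifact back to the exact source revision that produced it.
import subprocess
from datetime import datetime, timezone

def artifact_version(build_number: int) -> str:
    """Return a unique, traceable version string for a build artifact."""
    # Short git SHA of HEAD; assumes the build runs inside a git checkout.
    sha = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
    return f"1.0.{build_number}+{stamp}.{sha}"  # e.g. 1.0.42+20140305.9fceb02

if __name__ == "__main__":
    print(f"myapp-{artifact_version(42)}.zip")  # hypothetical artifact name
```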

Frankly, this part was quite boring, so I don't want to dig through all of it ATM. I just pointed these things out to my colleague, with a brief suggestion of what was needed. The thing that truly shocked me was my colleague's reaction:

He said that:

"... fixing the technical / architectural issues from the list above is meaningless. What kind of improvement it could bring? 5%?"

5% of what? What the hell did he just quantify? ;D But it was just the beginning:

"The thing that truly matters are the business processes and the project management aspect. We don't have any analysis of that, but usually you can improve at least 30%, if you focus on these areas. If I were them, I would apply ol' good, formal METHODOLOGY ..."

That was the very moment I stopped listening, of course. The density of nonsense could pose a health risk.

How to get there?

Let's put the M-word aside. Instead, let's focus on what's REALLY CRITICAL to set up a high-quality, stable, predictable development process with very frequent delivery:

  1. Each iteration (as short as possible, but still delivering value) has to end up with a deployable, 100% regression-free version. Even in a mid-sized enterprise it's not possible to do full manual regression test runs for each release if you want to deploy a new version every 2 weeks. What that really means is:

    • the code should be in a stable, deployment-ready state pretty much all the time, after each change (regaining sufficient quality once it's lost takes serious effort)
    • testing has to go on in parallel with development, and the feedback loop has to be pretty much immediate: a developer needs to know within minutes that they have broken something (see the first sketch after this list)
    • test environments need to be fully integrated all the time, so that what gets tested are the real-life scenarios
    • if manual testing is needed, it should be perfectly clear what exactly (which scope of tests) has to be tested (to avoid needless, demotivating, repetitive work)
  2. To make sure the 'works on my machine' syndrome won't happen, deployment itself should be as transparent & painless as possible - it should be as tested as the code itself, and preferably even more automated (to avoid human mistakes):

  • you have to be able to re-create the past state of the environment
  • preferably, the introduced changes should be switchable (on/off), so you can withdraw them ASAP without any kind of rollback (see the feature toggle sketch after this list)
  • to make the deployment pipeline error-resilient & predictable, the level of correspondence between test & production environments should be as high as possible: any kind of difference is a risk
  3. Obviously, to even consider what's written above, you need rock-solid, stable environments your testers can trust and rely on. To get there, you have to find a way to:

    • verify the correctness of the environment (for instance by smoke-checking the candidate build - see the last sketch after this list) before letting testers in
    • swap builds on the environment without reducing availability
    • provide the means to quickly fix a potential fault, so that testers always have fresh stuff to test
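
For point 1, the feedback loop boils down to a gate that runs the fast test suite on every commit and fails the pipeline loudly. A minimal sketch - the `tests/unit` path and the use of pytest are my assumptions, not the original setup:

```python
# Sketch: fail-fast commit gate - the developer learns within minutes
# that a change broke something, instead of at a nightly or release run.
import subprocess
import sys

def run_fast_gate(commit_sha: str) -> None:
    """Run the fast test suite; a non-zero exit code fails the pipeline."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "tests/unit", "-x"],  # stop at first failure
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Surface the failure to the author right away.
        print(f"Commit {commit_sha} broke the build:\n{result.stdout[-2000:]}")
        sys.exit(result.returncode)
    print(f"Commit {commit_sha}: fast tests green, safe to integrate.")

if __name__ == "__main__":
    run_fast_gate(sys.argv[1] if len(sys.argv) > 1 else "HEAD")
```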
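For point 2's on/off switches, the simplest shape is a feature toggle read from config deployed alongside the code: withdrawing a change becomes a config flip, not a rollback. A minimal sketch, with hypothetical flag names and rates:

```python
# Sketch: a feature toggle - the risky change ships dark behind a flag.
import json
import os

def load_flags(path: str = "feature_flags.json") -> dict:
    """Read toggles from a config file deployed next to the code."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}  # no config file -> every new change stays dark

FLAGS = load_flags()

def calculate_fee(amount: float) -> float:
    # Enabling the new behaviour is a config change; withdrawing it
    # is the same config change reversed - no redeploy, no rollback.
    if FLAGS.get("new_fee_model", False):
        return round(amount * 0.012, 2)  # new model (hypothetical rate)
    return round(amount * 0.015, 2)      # old, known-good path

if __name__ == "__main__":
    print(calculate_fee(1000.0))
```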
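And for point 3, verifying the environment before letting testers in can be as small as probing the candidate build's health endpoint and checking the deployed version. A minimal sketch - the `/health` endpoint, its JSON payload, and the URL are assumptions for illustration:

```python
# Sketch: smoke-check a candidate build before opening the environment.
import json
import sys
import urllib.request

def verify_environment(base_url: str, expected_version: str) -> bool:
    """Return True if the environment is up and runs the expected build."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
            health = json.load(resp)
    except OSError as exc:  # urllib.error.URLError subclasses OSError
        print(f"Environment unreachable: {exc}")
        return False
    deployed = health.get("version")
    if deployed != expected_version:
        print(f"Wrong build deployed: {deployed!r} != {expected_version!r}")
        return False
    print("Environment verified - testers can go in.")
    return True

if __name__ == "__main__":
    ok = verify_environment("http://test-env.example.local", sys.argv[1])
    sys.exit(0 if ok else 1)
```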

Well ...

... I don't know about you, but IMHO everything above is gated by technical / architectural means - frequent & controlled delivery is obviously about:

  • automation
  • automation
  • and even more AUTOMATION

Clearly, your process may be dysfunctional - you may have issues with stakeholder participation, there may be a lack of ownership or transparency, the communication flow may suck big-time: I've seen all of those cases before, and they should be fixed, but the KEY ENABLERS of anything that even pretends to aim for Continuous Delivery are purely technical.

Without the proper tech approach you're limited to manual operations, and you end up with an expensive, time-consuming, 90s-style delivery process. No PRINCE2/PMP/RUP or other proprietary methodology will aid you much: neither perfectly shaped roles, the best XLS worksheets & template documents, inventoried and code-stamped requirements, nor even the most beautiful Gantt charts.

To summarize: I think it was Jez Humble who said / wrote this, but I can't find it on the web, so it may have been someone else - I don't remember the exact statement, but I'll try to keep the intent clear:

"Continuous Delivery readiness is achievable by purely technical means. Whether the company as a whole embraces it and utilizes properly is a separate story that comes over the top of tech-based foundation."
