TL;DR Building software while "on auto-pilot" doesn't work well - operational automation is more than advised, but understanding (in-depth) why you do what you do is absolutely essential. However, there are certain practices many stick to without really understanding their purpose & downstream effects. One of them is "staging" - an anti-pattern that should in general be avoided but, when applied properly, can temporarily reduce the risks & help in the transformation towards a more mature delivery model. Too many, however, thoughtlessly treat it as a permanent element of their delivery pipeline, w/o thinking about its consequences & accompanying "taxes".

If I reach back to my initial months as a software professional, I can recall one specific topic that was fundamental in my everyday work, yet I wasn't even aware of its existence until I was hired - no-one had ever mentioned it during my (Computer Science) studies. I've encountered it frequently since then, in fact I still do (even if it's considered some sort of anti-pattern), but I have a feeling that the overall preparation to deal with it & the practical knowledge about it (what it is for, when it works & when it doesn't) remain very low.

This topic is named staging: it's all about how software "matures up" to production quality & I'd like to cover its basics in today's post.

What is it

Staging is the activity of promoting the products of your work (either source code or its processed form - e.g. assemblies, binaries, deployment packages, Docker containers) to subsequently more mature stages of the development process, so that with each such step:

  • access/manipulation rigour is tighter & more strict
  • testing is more thorough
  • expected maturity (stability, performance, usability, ...) is higher & more closely resembles production (or its equivalent)

... until achieved quality is satisfactory (so stuff can be deployed on production).
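
To make the idea more tangible, here's a minimal, purely illustrative Python sketch of such a promotion ladder - all stage names, checks & the artifact name below are made up for this post, not a prescription:

```python
# Illustrative sketch only (hypothetical names): the core idea of staging is that
# an artifact climbs through increasingly strict stages until it is "good enough"
# to be deployed to production.

STAGES = [
    # (stage name, checks that must pass before promotion to the next stage)
    ("dev",     ["unit tests"]),
    ("qa",      ["unit tests", "integration tests"]),
    ("uat",     ["unit tests", "integration tests", "acceptance tests"]),
    ("preprod", ["full regression", "performance tests"]),
]

def promote(artifact: str, run_check) -> bool:
    """Walk the artifact up the ladder; stop at the first failing gate."""
    for stage, checks in STAGES:
        print(f"{artifact}: entering stage '{stage}'")
        if not all(run_check(artifact, check) for check in checks):
            print(f"{artifact}: stuck in '{stage}' - gate not passed")
            return False
    print(f"{artifact}: ready for production")
    return True

if __name__ == "__main__":
    # Pretend every check passes, just to show the flow.
    promote("my-service-1.4.2.tar.gz", lambda artifact, check: True)
```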


Basically, the concept of staging was an answer to the bad practice of developing & testing in production :) but that's not the whole story. Historically, earlier stages of development accepted a higher degree of freedom - a lot of manual manipulation (uploading binaries "around" the deployment mechanism, manual DB or config modifications, etc.): this was OK for half-baked, work-in-progress versions, but once real-deal testing takes place (e.g. by the paying client or scarce-availability SMEs), you have to make sure that the version you've made available ...

  1. is stable (won't break randomly with each change)
  2. doesn't change uncontrollably (e.g. during important test)
  3. contains only what's expected to be delivered within the particular workstream

How is (was) it done

The evolution of staging has always been tightly coupled to the evolution of version control. In ancient times, code deltas were bundled into so-called "changesets" or "patches" & queued up in FIFO order. In the simplest scenario, "staging" was just about moving a position marker in that queue - the marker was dissecting the queue into "staged" & "yet-unstaged" changesets.
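
If you'd like a mental model, here's a toy sketch (my own illustration with made-up changeset IDs - no real VCS works literally like this) of that queue-with-a-marker idea:

```python
# Toy model (hypothetical, for illustration only) of the old changeset-queue view:
# staging = moving a single position marker forward in a FIFO list of changesets.

class ChangesetQueue:
    def __init__(self, changesets):
        self.changesets = list(changesets)  # FIFO order: oldest first
        self.staged_upto = 0                # everything before this index is "staged"

    def stage_next(self, count: int = 1):
        """Promote the next `count` changesets by moving the marker forward."""
        self.staged_upto = min(self.staged_upto + count, len(self.changesets))

    @property
    def staged(self):
        return self.changesets[:self.staged_upto]

    @property
    def unstaged(self):
        return self.changesets[self.staged_upto:]

queue = ChangesetQueue(["CS-101", "CS-102", "CS-103", "CS-104"])
queue.stage_next(2)
print(queue.staged)    # ['CS-101', 'CS-102']
print(queue.unstaged)  # ['CS-103', 'CS-104']
```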


The next step of the evolution was the concept of a branch - unlike in modern Git, historical branches were usually physically separate & code was moved between branches by physically copying the changesets. This enabled some flexibility ("cherry-picking"!) & simplified everyday work (all a particular Dev cared about was which branch to work on).
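
Again, as a toy illustration only (hypothetical changeset IDs, not any real tool's commands) - cherry-picking in that world boiled down to copying selected changesets from one pile to another:

```python
# Pre-Git "branches" modelled as separate piles of changesets;
# cherry-picking = physically copying selected ones over.

dev_branch = ["CS-201", "CS-202", "CS-203", "CS-204"]
release_branch = ["CS-201"]  # only what has been promoted so far

def cherry_pick(source, target, wanted):
    """Copy selected changesets from one branch to another (order preserved)."""
    for changeset in source:
        if changeset in wanted and changeset not in target:
            target.append(changeset)

# Promote a bug fix (CS-203) without taking the half-done feature (CS-202).
cherry_pick(dev_branch, release_branch, wanted={"CS-203"})
print(release_branch)  # ['CS-201', 'CS-203']
```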


As the technology got more & more complex, solutions got more & more moving parts - at some point it was more than clear that stability ain't just about the code itself: setting up a new environment with all its dependencies, integrations & shaky persistent state was a challenge in itself. In fact, each deployment was actually considered a risk (!) that could destabilize (!!) an environment deployed previously.

That's why, in this pre-DevOps era, engineers discovered the concept of staging across environments ... A single branch wasn't deployed on just a single environment, but on a series (2+) of environments, according to the following rules:

  • the "lower" the environment, the lower its maturity (e.g. predictability) - i.a. because more people have FS/DB access (!) to fix stuff manually (!!)
  • the "lower" the environment, the higher chance things are a bit shaky & ... the higher the chance no-one will care ("well, there will be another deployment in an hour ...")
  • the "lower" the environment, the lower the quality expectations (acceptable bug quantities / severities)
  • the "higher" the environment, the higher is the threshold of fix approval (so no-one destabilizes a mature environment with trivial issue fix)
  • once product increment got to expected quality level, it is "promoted" ("staged") to the "next level" environment, where standards are higher, deployment frequency is lower, etc.

Obviously production is considered the end-game in this chain, so the final "staging" has it as its target.
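
To sum those rules up in a (purely hypothetical) sketch - the environment names & numbers below are arbitrary, the point is only the shape of the ladder:

```python
# A sketch of "staging across environments": the higher the environment, the
# stricter the access, the higher the quality bar & the lower the deployment
# frequency. None of these numbers are prescriptive.

from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    manual_access_allowed: bool   # can people "fix stuff" directly on FS/DB?
    max_open_bugs: int            # acceptable open bug count at this level
    deploys_per_week: int         # rough deployment cadence

LADDER = [
    Environment("dev",     manual_access_allowed=True,  max_open_bugs=50, deploys_per_week=30),
    Environment("test",    manual_access_allowed=True,  max_open_bugs=20, deploys_per_week=10),
    Environment("uat",     manual_access_allowed=False, max_open_bugs=5,  deploys_per_week=2),
    Environment("preprod", manual_access_allowed=False, max_open_bugs=0,  deploys_per_week=1),
    Environment("prod",    manual_access_allowed=False, max_open_bugs=0,  deploys_per_week=1),
]

def can_promote(current_open_bugs: int, current_index: int) -> bool:
    """An increment may move 'up' only if it meets the NEXT environment's bar."""
    target = LADDER[current_index + 1]
    return current_open_bugs <= target.max_open_bugs

print(can_promote(current_open_bugs=3, current_index=1))  # test -> uat: True
print(can_promote(current_open_bugs=3, current_index=2))  # uat -> preprod: False
```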

Then and now

Today "staging" is an anti-pattern & an anachronism, but ... it doesn't mean it's not used commonly in the industry. Yes, even today.

It's an anti-pattern because it goes against the principle of building quality in. Software is supposed to be at a consistent, production-like quality level nearly all the time, ready to be deployed each sprint or even continuously. Optimally, there should be no need for "stabilization periods", "code freezes", "integration sprints" or "full regression tests".


Going further:

  • trunk-based development is considered the end-game branching model
  • deployment is supposed to be fully automated & its automation logic has to be shared across environments & auto-tested like any other piece of code
  • a big part of testing can be performed on actual production, secured by mechanisms like canary releases, feature toggles or A/B testing (see the sketch right below)
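
For the curious - a minimal, hand-rolled feature toggle could look something like the sketch below (hypothetical names & numbers; in practice you'd rather reach for a dedicated toggle/flag service):

```python
# Minimal feature toggle sketch: risky code ships to production "dark" & is only
# enabled for a controlled share of users.

import hashlib

ROLLOUT_PERCENT = {"new_checkout": 10}  # enable for ~10% of users

def is_enabled(feature: str, user_id: str) -> bool:
    """Deterministically bucket a user into 0..99 & compare with the rollout %."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENT.get(feature, 0)

def checkout(user_id: str):
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"   # the freshly deployed code path
    return "old checkout flow"       # the battle-tested fallback

print(checkout("user-42"))
```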

That pretty much means that having more than 1 environment per branch is overkill, having more than 1 long-lived branch is an unnecessary liability, etc.


All that sounds great, but that's not where the majority is (yet). For various reasons, e.g.:

  • some are too constrained - by a very heterogeneous architecture, dated technology, bad engineering, etc.
  • some don't want/need it - maturity can come with a big price tag, and sometimes the "tax" of doing stuff the old way is (paradoxically) more affordable
  • some are still on their journey - the ripening of a development process can take a lot of time
  • and of course there are many who just suck big-time at this game & they'll never rise above their own mediocrity

... and the point is ...

... that in fact "staging" doesn't suck per se. It doesn't disqualify a company from being able to deliver good products, nor does it mean it's doomed & will never be able to move forward out of the 90s :)

"Staging" is a reasonable temporary solution that may help you gain some time & reduce delivery risks WHILE you in the same time work on maturing up your tooling & processes. "Staging" may be fine as long as you understand:

  • why you're doing this
  • why you're doing this THAT WAY (because there are many ways available ...)
  • what's your plan to move out of it

If you just dumbly deploy the same branch on several physical environments while wondering why they behave differently, it just proves you're clueless & stuck, without understanding what you're really doing ...
