TL;DR Not all code (or rather - not code in any shape) can be tested in an automated way. To make it suitable for automation, it has to be either written with certain qualities or refactored to provide them. There are only two ways to prevent these qualities from decay & entropy: highly conscious, mature & disciplined engineering, or a continuous process of automating & maintaining automated tests. Ad-hoc, "jump-in" automation efforts are in most cases wasted energy & additional technical debt, w/o any significant benefit.

One of the most consistent behaviours in software delivery is that whenever the first sign of crisis appears (progress slowdown, approaching deadline, an avalanche of CRs/SIRs), the first victims are always the automated tests. Of course there are organisations that perfectly understand the implications of such a sacrifice, but let's face the truth - the majority sees such a decision as the most pragmatic way to go: missing tests can always be added later, during "the calm times", right?

Well, not really.

Writing tests as a filler?

In some organisations there are just no "calm times" at all, but that's another kind of story. Let's assume that the biting winds have gone quiet, the dark clouds have parted & your project can finally bathe in full sunlight ... "The Boss" comes in & suddenly announces:

"I fought for it like a lion & here you have it - whole 4 sprints to finally automate the tests! What are you waiting for? Start automating!"

In fact that's usually more than enough to get people excited & eager to start closing the gap. But this particular scenario never ever works out, & not just because testing is hard & requires continuous practice (ad-hoc jumping-in may end up with bouncing off the wall ...). To truly understand the key reason, we have to clarify one crucial term first - testability.

Testability is a rather intangible measure of how test-prone ("test friendly") a particular piece of code (within the SUT - System Under Test!) is.

The informal testability scale is neither linear nor single-dimensional, but to keep it simple let's assume that it extends from various shades of "not testable at all" (or "testable only in theory") to "feasibly testable" - so it's not a nice-to-have feature but a potential showstopper. It's not possible to overdo testability (to make code "too testable" or go "beyond just-enough testability"), though it's of course always possible to over-engineer code (which won't increase testability).

Test != Test

In plainer words - if writing code is to be accompanied (or even preceded) by writing tests (in code, of course), certain qualities have to be built into the SUT code straight away. Otherwise they may easily be omitted, & re-introducing them later tends to be much more challenging (due to complexity, volume, inertia, dependencies, risk of regression, etc.).

Examples of such qualities (these may vary, depending on the language / platform / technology / application style & paradigms used - see the sketch after this list):

  • following Single Responsibility Principle
  • distinguished public interfaces (contracts), with no implementation details leaking out
  • moderate to high cohesion within elements of composition
  • proper encapsulation of logic (not scattering it across layers / technologies)
  • limited, clear execution context (hence FP is so nice in terms of testability)
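
To make the last point less abstract, here's a minimal, purely illustrative sketch (in Python, all names hypothetical): the first function buries its execution context (the current date) inside itself, while the second takes it as an explicit parameter & keeps the logic pure - which is exactly why FP-style code tends to be so test-friendly.

```python
from datetime import datetime, timezone

# Hard to test: the execution context (current time) is hidden inside the
# function, so a test has to monkey-patch datetime or run only in December.
def seasonal_total(total: float) -> float:
    if datetime.now().month == 12:
        return total * 0.5
    return total

# Testable: the context is an explicit parameter & the logic is pure,
# so it can be verified with plain asserts - no patching, no setup.
def seasonal_total_testable(total: float, now: datetime) -> float:
    return total * 0.5 if now.month == 12 else total

def test_december_discount():
    december = datetime(2023, 12, 24, tzinfo=timezone.utc)
    assert seasonal_total_testable(100.0, december) == 50.0

def test_no_discount_off_season():
    july = datetime(2023, 7, 1, tzinfo=timezone.utc)
    assert seasonal_total_testable(100.0, july) == 100.0
```

The behaviour of both variants is identical; only the second one exposes a limited, controllable execution context to the test.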

Code that was not designed & written with testability in mind will quite likely lack many (if not all ...) of these qualities. Tests written against such code potentially ...

  1. ... have very complex (& "heavy", & expensive to rebuild) context
  2. ... rely directly on implementation internals (which are likely to change at any time - see the sketch after this list)
  3. ... do not represent real-life scenarios (e.g. if state has several mutating accessors that are hard to keep in control)
  4. ... span the testing scope across numerous coupled components, increasing the overall brittleness (an atomic change in 1 function breaks dozens of various tests)
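
A hypothetical illustration of point 2 (toy class & test names are mine, not from any real project): a test that asserts against private internals survives only until the first refactoring, while a test against the public contract doesn't care how the result is computed.

```python
class ShoppingCart:
    """Toy SUT used only for illustration."""
    def __init__(self):
        self._items = {}          # internal representation, free to change

    def add(self, sku: str, price: float) -> None:
        self._items[sku] = price

    def total(self) -> float:     # public contract
        return sum(self._items.values())

# Brittle: coupled to the private dict - replacing `_items` with, say,
# a list of line objects breaks this test without any behaviour change.
def test_add_brittle():
    cart = ShoppingCart()
    cart.add("book", 25.0)
    assert cart._items == {"book": 25.0}

# Robust: exercises only the public interface, so internals can be
# refactored freely as long as the contract holds.
def test_add_robust():
    cart = ShoppingCart()
    cart.add("book", 25.0)
    assert cart.total() == 25.0
```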

Well, it IS possible to write some tests, even ones with a high degree of code coverage (which is the only test-related metric known to 95% of managers), but these will hardly be reliable at all, their (regular) maintenance cost will be huge & - what is most important - they just won't fulfil their role: ensuring software quality.

I'm not trying to say that once something has been created w/o automated tests, it's not possible to add them afterwards. It IS possible, but such an operation will most likely require a lot of refactoring, sometimes a really inside-out one. And this refactoring will be even more challenging, as you have no automated tests to quickly detect what breaks in the process ... It's not without a reason that TDD is considered more a design-aid practice than a purely test-related one.
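
One common shape such refactoring takes (again, a hedged sketch with made-up names, not a prescription) is extracting a hard-wired dependency into an injected one, so the logic can finally be exercised without real infrastructure:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    customer_email: str
    amount: float

# After the refactoring: the notifier is injected instead of being
# constructed inside send_invoice(), so no mail server is needed in tests.
class InvoiceService:
    def __init__(self, notifier):
        self._notifier = notifier

    def send_invoice(self, invoice: Invoice) -> None:
        body = f"Please pay {invoice.amount:.2f}"
        self._notifier.send(invoice.customer_email, body)

# A hand-rolled fake standing in for the real (e.g. SMTP-backed) notifier.
class FakeNotifier:
    def __init__(self):
        self.sent = []

    def send(self, recipient: str, body: str) -> None:
        self.sent.append((recipient, body))

def test_send_invoice():
    notifier = FakeNotifier()
    InvoiceService(notifier).send_invoice(Invoice("a@b.c", 42.0))
    assert notifier.sent == [("a@b.c", "Please pay 42.00")]
```

Doing this safely across a whole codebase without an existing safety net of tests is precisely the chicken-and-egg problem described above.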

Pic: © Mike Brodie, "A Period of Juvenile Prosperity"
