In this post you'll read about the difference between writing automated tests & having testing automated, why the testing framework or DSL you use doesn't really matter that much, what it means "to free the mandays", and how we approached securing capacity for kick-starting test automation with a big hit.
This is the 2nd post in the series. You can find the previous episode here.
Just do it (not)!
It's a banal truth: it's very easy to create automated tests, but it's very hard to automate testing.
Uhm, wait, what? What's the difference?
I can easily create a zillion automated tests in no time (e.g. using "smart" test generators) - runnable, calling my business feature code, lighting up the green - but altogether they would barely make any sense and/or add any value.
A proper automated test suite has to meet several important success criteria to be truly effective - some of them have nothing to do with technology, e.g.:
- represents the functional & non-functional expectations (both positive & negative conditions), NOT the content of your codebase
- has a feedback loop short enough to associate a failed case with the regression just introduced (in feature code)
- is organised in such a way that a functional change corresponds to a certain range of test cases and vice versa - a failure in a particular case can be correlated with the functionality that has just been broken
- fits the particular need(s) - there's a place for various categories of tests: smoke, operational readiness, stress, performance, load, presentation logic, business logic, contract, ...
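To make the "organisation" and "positive & negative conditions" points concrete, here's a minimal sketch in Python - all names are hypothetical, invented purely for illustration. The tests are named after expectations (not after code internals) and grouped by feature, so a red case points straight at what broke:

```python
# Hedged sketch with hypothetical names: tests mirror functional
# expectations (positive AND negative conditions), not the codebase.

def calculate_interest(principal: float, annual_rate: float) -> float:
    """Hypothetical business feature: yearly interest on a deposit."""
    if principal < 0:
        raise ValueError("principal must be non-negative")
    return round(principal * annual_rate, 2)

def test_positive_principal_earns_interest():
    # positive condition: the documented expectation
    assert calculate_interest(1000.0, 0.05) == 50.0

def test_zero_principal_earns_nothing():
    # boundary condition
    assert calculate_interest(0.0, 0.05) == 0.0

def test_negative_principal_is_rejected():
    # negative condition: invalid input must fail loudly
    try:
        calculate_interest(-1.0, 0.05)
    except ValueError:
        return
    raise AssertionError("negative principal was accepted")

if __name__ == "__main__":
    for t in (test_positive_principal_earns_interest,
              test_zero_principal_earns_nothing,
              test_negative_principal_is_rejected):
        t()
    print("all interest-calculation tests passed")
```

Note how a failure in any of these three cases can be traced to exactly one functional expectation of the interest-calculation feature - that's the property that generated test suites typically lack.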
There's far more design & consideration required than it may appear. "Just doing it" may seem like a good first step, but when not aided by a sufficient dose of forward thinking, it may soon turn out to be wasted effort.
Doing it right (?)
I didn't want to overthink the problem, but there were several pre-conditions I set for us first:
- it has to be clear WHAT we want to automate
- it has to be clear this automation MAKES SENSE (it's cost-effective: effort is transformed into sufficient gain)
- we need a clear answer to the question: how do automated test conditions correspond to manual ones? (replace them? complement them? overlap with them? have different usage cases?)
- starting can't take forever - we had to make sure we'd have visible progress from the very beginning, to convince everyone (mainly ourselves) we're doing it for real
TBH, none of these points was really trivial.
Wait, what about the following questions?
- Who's gonna do the automation? Everyone? QAs within teams? Dedicated QA Engineers out of existing teams?
- Which framework will we use? Which language? Which IDE?
- Are we gonna do it BDD style? Which convention(s) are we gonna use? Given-When-Then? Arrange-Act-Assert? Another one?
None of these questions was really crucial.
These decisions have to be made indeed, and once a decision is made you should respect it & commit to the particular direction, but ... whatever you pick doesn't really matter that much - well, unless you go for some odd, niche solution that either lacks maturity or goes totally against the mainstream.
Otherwise, you're good - just let the peeps make the choice according to the majority's preference. As long as you have "the general buy-in", you're on the safe side.
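To show why the convention choice isn't crucial: Given-When-Then and Arrange-Act-Assert describe essentially the same test shape. A hedged sketch (the `Cart` class is hypothetical, invented just for this illustration):

```python
# Hedged sketch: one test in Arrange-Act-Assert style, with the
# Given/When/Then mapping shown in comments (hypothetical Cart API).

class Cart:
    """Hypothetical shopping cart, used only for this illustration."""
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_cart_total_sums_item_prices():
    # Arrange / Given: a cart with two items
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    # Act / When: we ask for the total
    total = cart.total()
    # Assert / Then: the total is the sum of the prices
    assert total == 15.0
```

Whichever naming you adopt, the structure (set up, exercise, verify) stays identical - which is exactly why committing to *one* convention matters more than which one you commit to.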
Free the mandays!
Of the 4 pre-conditions mentioned above, the last one was definitely the biggest obstacle & the greatest risk to the final success of the whole endeavour. Fortunately, one of the first real-life lessons I learned in IT was:
To get anything done, you need to guarantee the capacity: particular (!) people's undivided (!) attention, clearly communicated priorities (!), unambiguous responsibilities (!) & (the last but not least) minimised distractions & sidetracking (!).
In one of my former work environments, we used to call this challenge "securing the mandays" (manday = MD = work "capacity" of 1 person for 1 full day).
Until then we were struggling to find the capacity to do anything. The development process we had used up all 4 of our QA Engineers at nearly 100% - for several reasons:
- in a 2-week-long development cycle with insufficient regression test automation, there are always shitloads to test ...
- with estimated 70-80% of overall "bandwidth" spent on new feature development, there's always something new to grok through for the very first time ...
- when you're determined to compete by uncompromising quality (more about that later), the border of where you should stop (in terms of improving quality) gets blurry
The issue was very serious - we decided to approach it from every possible angle at the same time ...
- double the QA "raw" capacity - by hiring additional QA Engineers - the priority here was not to lower the standards: all the recruits had to be able to start adding value from the very first day, each of them had to have a Tester's "instincts" (inquisitiveness, perseverance, attention to detail, ability to assess impact on business, ...) AND be able to automate repetitive QA activities - we involved 5 people in the recruitment process to make sure we were able to properly (& quickly) evaluate candidates' attitude, aptitude & skills.
- simplify the release process - by eliminating complicated multi-branch dependencies that in some cases required additional QA test passes; instead we went for a simple branching model with a single, short-lived release branch per Sprint
- introduce internal cut-off dates - to make QA Engineers' work cycle more steady, predictable & "rhythmic" - so they can more accurately plan the time required e.g. for uninterrupted test automation coding windows
- codify simple & common rules regarding "criticality" & prioritisation - until then, everything could have been critical, everything could be a release stopper, everything was as important to test as anything else - combine that with a deep sense of responsibility & you've got a perfect formula for paralysing QA over-zealotry. Was it really that bad? It definitely was - quality is a function of cost, and what's more, this function is far from linear - one has to recognise the threshold beyond which further investment (QA-wise) doesn't really pay off anymore.
- tune up the range of test devices/browsers/versions - the priority is (of course) to cover what our users really use AND make sure that parallel testing (e.g. on different browsers) overlaps only when it's necessary (e.g. server-side processing - like financial calculations - doesn't depend on the browser, but how the numbers are presented does)
- stop the "spiral of doom" - let me illustrate it with an example: version X.1 (not yet released) has 3 serious defects - they are promptly sent for fixing, but when that's done, we actually have fixes for 3 + 5 issues (the other 5 being non-critical); we eagerly merge them into version X.2, unfortunately introducing some unexpected regression in fixes that in fact should never have blocked the release ... As you can easily see, our desire to release as quickly as possible was forcing us to extend QA gating & hence release less frequently (due to endless cycles of re-tests)
Making a clear improvement in ALL of these areas at the same time was absolutely CRITICAL to reach capacity high enough to make real progress with test automation.
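The "criticality" rules mentioned above can end up surprisingly simple once codified. A hedged sketch of what such rules might look like - the severity names and the 20% threshold are hypothetical, invented for illustration; the actual matrix obviously depends on your product:

```python
# Hedged sketch: codified triage rules (hypothetical severities & threshold).
from enum import Enum

class Severity(Enum):
    CRITICAL = 1   # data loss, money miscalculated - always a release stopper
    MAJOR = 2      # feature broken, but a workaround exists
    MINOR = 3      # cosmetic issue, polish

def blocks_release(severity: Severity, affected_users_pct: float) -> bool:
    """A defect blocks the release only if it's truly critical,
    or major AND affecting a significant share of users."""
    if severity is Severity.CRITICAL:
        return True
    if severity is Severity.MAJOR and affected_users_pct >= 20.0:
        return True
    # everything else ships now & gets fixed in the next cycle
    return False
```

The point is not the exact thresholds but that the decision becomes mechanical: `blocks_release(Severity.MINOR, 90.0)` is `False`, full stop - no debate, no over-zealous gating, no "everything is critical".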
The next part of the series can be found here.