Our Continuous Testing odyssey - part II

In this post you'll read about the difference between writing automated tests & having testing automated, why the testing framework or DSL you use doesn't really matter, what it means "to free the mandays", and how we approached securing the capacity to kick-start test automation with a big hit.

This is the 2nd post in the series. You can find the previous episode here.

Just do it (not)!

It's a banal truth: it's very easy to create automated tests, but it's very hard to automate testing.

Uhm, wait, what? What's the difference?

I can easily create a zillion automated tests in no time (e.g. using "smart" test generators) - runnable, calling my business feature code, lighting everything up green - but altogether they would barely make any sense and/or add any value. Something like the sketch below.
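
To make it concrete, here's a minimal sketch of the kind of test such a generator could spit out - the `calculateVat` module and the vitest runner are just illustrative assumptions, not anything from our codebase:

```typescript
// Hypothetical auto-generated tests: they compile, they run, they go green -
// and they verify no actual business rule.
import { describe, it, expect } from "vitest";
import { calculateVat } from "./billing"; // hypothetical business module

describe("calculateVat (auto-generated)", () => {
  it("returns a number", () => {
    // passes even if the returned amount is completely wrong
    expect(typeof calculateVat(100, "PL")).toBe("number");
  });

  it("does not throw for zero input", () => {
    // "green", but asserts nothing about correctness
    expect(() => calculateVat(0, "PL")).not.toThrow();
  });
});
```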

A proper automated test suite has to meet several important success criteria to be truly effective - and some of them have nothing to do with technology.

There's far more design & consideration required than it may appear. "Just doing it" may seem like a good first step, but when not aided by a sufficient dose of forward thinking, it may soon turn out to be wasted effort.

Doing it right (?)

I didn't want to overthink the problem, but there were several pre-conditions I set for us first:

  1. it has to be clear WHAT we want to automate
  2. it has to be clear this automation MAKES SENSE (it's cost-effective: effort is transformed into sufficient gain)
  3. we need a clear answer to the question: how do automated test conditions correspond to the manual ones? (replace them? complement them? overlap them? serve different usage cases?) - see the registry sketch right after this list
  4. starting can't take forever - we had to make sure we'd see visible progress from the very beginning, to convince everyone (mainly ourselves) that we're doing it for real
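
To illustrate pre-condition no. 3 - here's a minimal sketch of what the manual/automated correspondence could look like once codified; the types, ids & file paths are hypothetical, not our actual tooling:

```typescript
// How does an automated check relate to the manual test condition it maps to?
type Coverage = "replaces" | "complements" | "overlaps" | "automation-only";

interface TestCondition {
  id: string;             // id in the manual test catalogue (hypothetical scheme)
  description: string;
  automatedSpec?: string; // path to the automated spec, if one exists
  coverage: Coverage;
}

const registry: TestCondition[] = [
  { id: "TC-1042", description: "VAT recalculated on address change",
    automatedSpec: "specs/billing/vat.spec.ts", coverage: "replaces" },
  { id: "TC-1043", description: "invoice PDF renders correctly",
    automatedSpec: "specs/billing/pdf.spec.ts", coverage: "overlaps" }, // layout still needs a human eye
];

// With such a registry, "what can we drop from the manual regression pass?"
// becomes a one-liner:
const droppable = registry.filter(c => c.coverage === "replaces");
console.log(droppable.map(c => c.id)); // -> [ "TC-1042" ]
```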

TBH, none of these points was really trivial.


Wait, what about the usual questions - which testing framework? which DSL? which toolset?

None of these questions was really crucial.

These decisions do have to be made, and once made, you should respect them & commit to the chosen direction, but ... whatever you pick doesn't really matter that much - well, unless you go for some odd, niche solution that either lacks maturity or runs totally against the mainstream.

Otherwise, you're good - just let the peeps make the choice according to the majority's preference. As long as you have "the general buy-in", you're on the safe side.

Free the mandays!

Of the 4 numbered pre-conditions mentioned above, the last one was definitely the biggest obstacle & the greatest risk to the final success of the whole endeavour. Fortunately, one of the first real-life lessons I learned in IT was:

To get anything done, you need to guarantee the capacity: particular (!) people's undivided (!) attention, clearly communicated priorities (!), unambiguous responsibilities (!) & (last but not least) minimised distractions & sidetracking (!).

In one of my former work environments, we used to call this challenge "securing the mandays" (manday = MD = the work capacity of 1 person for 1 full day).

Until then, we had been struggling to find capacity for anything. The development process in place utilised all the QA Engineers we had (4) at nearly 100%, for several reasons.

The issue was very serious - we decided to approach it from every possible angle at the same time ...

  1. double the QA "raw" capacity - by hiring additional QA Engineers - the priority here was not to lower the standards: all the recruits had to be able to start adding value from their very first day, each of them had to have a Tester's "instincts" (inquisitiveness, perseverance, attention to detail, ability to assess business impact, ...) AND be able to automate repetitive QA activities - we involved 5 people in the recruitment process to make sure we were able to properly (& quickly) evaluate candidates' attitude, aptitude & skills.
  2. simplify the release process - by eliminating complicated multi-branch dependencies that in some cases required additional QA test passes; instead we went for a simple branching model with a single, short-lived release branch per Sprint
  3. introduce internal cut-off dates - to make the QA Engineers' work cycle more steady, predictable & "rhythmic" - so they could plan more accurately the time required e.g. for uninterrupted test-automation coding windows
  4. codify simple & common rules regarding "criticality" & prioritisation - until then, everything could have been critical, everything could be a release stopper, everything was as important to test as anything else - combine that with a deep sense of responsibility & you've got a perfect formula for paralysing QA over-zealotry. Was it really that bad? It definitely was - quality is a function of cost, and what's more, this function is far from linear: one has to recognise the threshold beyond which further investment (QA-wise) doesn't really pay off anymore (see the criticality sketch after this list)
  5. tune up the range of test devices/browsers/versions - the priority is (of course) to cover what our users really use AND to make sure that parallel testing (e.g. on different browsers) overlaps only where it's necessary (e.g. server-side processing - like financial calculations - doesn't depend on the browser, but how the numbers are presented - does); see the test-matrix sketch after this list
  6. stop the "spiral of doom" - let me illustrate it with an example: version X.1 (not yet released) has 3 serious defects - they are promptly sent for fixing, but once that's done, we actually have fixes for 3 + 5 issues (the other 5 being non-critical); we eagerly merge them into version X.2, unfortunately introducing some unexpected regression in fixes that in fact should never have blocked the release ... As you can easily see, our desire to release as quickly as possible was forcing us to extend QA gating & hence release less frequently (due to endless cycles of re-tests)
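
Regarding point no. 4 - a minimal sketch of what "codified criticality" could look like; the severity scale & the gating rule below are illustrative assumptions, not our actual policy:

```typescript
// One shared, explicit rule instead of ad-hoc judgement calls.
type Severity = "blocker" | "critical" | "major" | "minor" | "trivial";

// Only the top of the scale stops a release; everything else waits for the
// next one - this is where the non-linear cost/quality curve gets cut off.
const RELEASE_STOPPERS: ReadonlySet<Severity> = new Set<Severity>(["blocker", "critical"]);

function blocksRelease(severity: Severity): boolean {
  return RELEASE_STOPPERS.has(severity);
}

// In the "3 + 5 defects" scenario from point no. 6: only the 3 serious ones
// gate the release, the other 5 don't trigger another re-test cycle.
console.log(blocksRelease("critical")); // true
console.log(blocksRelease("minor"));    // false
```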

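And for point no. 5 - a sketch of fanning out across browsers only where the browser actually matters; suite names, the browser list & the runner invocation are assumptions:

```typescript
// Cross-browser runs only for suites where rendering matters.
const ALL_BROWSERS = ["chromium", "firefox", "webkit"] as const;

const suites = [
  // server-side financial calculations don't depend on the browser:
  // one run is enough
  { name: "calculations", browsers: ["chromium"] },
  // ...but how the numbers are presented does, so the UI suite fans out
  { name: "presentation", browsers: [...ALL_BROWSERS] },
];

for (const suite of suites) {
  for (const browser of suite.browsers) {
    // stand-in for a real runner call (e.g. a thin wrapper over Playwright)
    console.log(`running ${suite.name} on ${browser}`);
  }
}
```
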
Making a clear improvement in ALL of these areas at the same time was absolutely CRITICAL to reach a capacity high enough to make real progress with test automation.

To be continued ...

Sebastian Gebski
