In late September 2019 I decided it was time for a career change, so I started searching the job market and talking to various companies in need of a Senior Engineering Leader. It's not that I enjoy the process of looking for a job, but I can't deny it can be a very educational experience. Mingling with several companies within a short period of time, learning how they "do stuff", what kind of issues they face & how they deal with them - it can be very revealing & can change your perspective on certain topics. Believe me, the higher the position, the more creative & enjoyable the recruitment process gets :)
This time, one thing in particular caught my attention - the general approach to Quality Assurance, aka testing.
Where are (all) the Testers?!
To my surprise, the vast majority of the companies I was in contact with had no dedicated QA (/Tester) position. And it was not about a lack of funds or anyone's incompetence - two of these companies were actual unicorns & the others are considered wildly successful as well.
Who was in charge of testing, then? All the members of the teams - namely: Software Developers. Just to make sure we're on the same page - these are enterprises with relatively big Engineering units: at least 40+ Developers on board (up to 100-120).
Is that wrong? Not at all! I'm rather used to "the unwritten rule" that says the bigger the Engineering team, the more dedicated focus has to be put specifically on "managing the quality". Usually, the bigger the product (application/system), the more coupled it is - so it gets gradually harder and harder to "allocate" testing to particular teams (while efficiently minimising the regression testing scope) because:
- system components can't be tested in isolation and ...
- ... one component's failure tends to have cascading effects on others
In the end, a joint bunch of Testers frantically re-tests the ever-expanding piece of software in a scope that significantly exceeds whatever has actually changed in the code-base (otherwise they can't guarantee nothing breaks, as they don't know the code-level dependencies & interactions).
Life w/o Testers (?)
That's why it's really uplifting that more & more companies avoid falling into that trap by putting a lot of (EARLY) effort into proper functional (de-)composition & clear architectural boundaries beneath. This enables them to scale out using the "you build it, you run it" model (thanks to clear component ownership & high individual resilience).
This makes a hell of a difference for ... YES, testing.
Having a component "runnable" independently by your Engineering Team means that all the activities that need to be performed on that component are local to the team: development, troubleshooting, bug fixing, deployment, monitoring, support and, of course, testing.
But "runnable" implicates several qualities that have to be met:
You can't run a software component if you can't keep quality built in (so that change is guaranteed not to break the software) at all times.
You can't run a software component if it's not testable w/o the whole IT landscape of the company (see the sketch after this list).
You can't run a software component if, for the sake of testers' work efficiency, you need to pile up bigger batches of changes (thereby artificially inflating the regression testing scope).
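To make the second of these qualities more tangible: a component is testable w/o the whole landscape when its external dependencies sit behind an interface and can be swapped for stubs. Below is a minimal sketch in Python (all names are hypothetical, not taken from any of the companies mentioned):

```python
# A minimal sketch of testing a component in isolation (hypothetical names).
# The external dependency (a remote tax service) is injected behind an
# interface, so the test can swap it for a stub - no network, no shared
# environment, no "whole IT landscape" required.
from dataclasses import dataclass
from typing import Protocol


class TaxService(Protocol):
    def rate_for(self, country: str) -> float: ...


@dataclass
class PricingComponent:
    taxes: TaxService  # the injected seam that makes isolated testing possible

    def gross_price(self, net: float, country: str) -> float:
        return round(net * (1 + self.taxes.rate_for(country)), 2)


class StubTaxService:
    """Stands in for the real service during tests."""

    def rate_for(self, country: str) -> float:
        return {"PL": 0.23, "DE": 0.19}.get(country, 0.0)


def test_gross_price_applies_local_tax_rate():
    pricing = PricingComponent(taxes=StubTaxService())
    assert pricing.gross_price(100.0, "PL") == 123.0
```

The design choice matters more than the mechanics here: it's the explicit dependency seam, decided on EARLY, that keeps the component runnable (and testable) by a single team.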
All these require: very efficient knowledge exchange, a very short cycle time, a very tight feedback loop, a very high level of automation & a very direct link between a test and the functionality it checks. That's why these organisations eliminate Tester as a separate position - or rather, replace it with testing as an obligatory skill of every Team Member. This comes with certain advantages:
- having QA as a separate position encourages a traditional "water-fallish" (sequential) approach & sticking to "local" responsibilities ("Dev done" VS "QA done")
- testing broadens the perspective of Developers (e.g., when it comes to usability, ergonomics & understanding how that stuff is really used)
- the burden of responsibility for quality is a great motivator to use one's own development skills for automation ("if it hurts, do it more often"!), plus it makes you care about actual "testability"
- testing is an essential skill for a Software Engineer - when not exercised ("I don't have to care, we have dedicated Testers ..."), it's prone to atrophy
No silver bullets though
Nevertheless, not having dedicated QA positions on board has potential negative consequences as well:
- QA skills have to be present (within the team) anyway - either acquired from the market or taught internally - and that doesn't have to be easy: some people have the necessary inquisitiveness, level of work organisation, attention to detail, ... and some do not
- dedicated QA positions guarantee that a piece of software is tested by someone other than its creator
- heavy-UI applications with high expectations regarding UX are inconvenient & expensive to cover with automated testing (see the sketch below); and developers loathe manual/exploratory testing ...
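To illustrate why that last point bites: UI-level tests tend to couple tightly to markup details. Here's a hedged sketch using Playwright's Python API (the URL & selectors are made up); rename one CSS class or restructure the DOM and the test fails, even though the functionality still works:

```python
# A sketch of a typical UI-level check (hypothetical URL & selectors).
# Every locator couples the test to the current markup - a cosmetic
# redesign can break it without any change in behaviour, which is what
# makes heavy-UI automation inconvenient & expensive to maintain.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://shop.example.com/checkout")  # made-up URL
    page.fill("#promo-code", "SAVE10")              # breaks if the id changes
    page.click("button.apply-promo")                # breaks if the class changes
    assert "90.00" in page.inner_text(".order-total")
    browser.close()
```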
Dev-Test-Sec-Ops-...
The more I think about it, the more interesting I find this No-Tester approach. It fits in well with the trend - or rather, it sits at the intersection of two trends:
- the 1st of them is breaking down the walls between specialties (remember the rise of DevOps?) & making people work interchangeably together - that's why separate "Testing Teams" are such an anti-pattern & you rarely see them these days
- and the 2nd is X-as-Code - contrary to common belief, the expressiveness & explicitness of sheer code, plus its traceability of change and speed of execution, make a hell of a difference for anything that has to be consistently repeated & kept in sync (a tiny illustration follows below)
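As a tiny illustration of that 2nd trend: once an item from a manual regression checklist becomes code, it runs on every commit, its change history lives in version control & it never silently drifts out of sync with the feature it guards. A minimal sketch (the function & tests are hypothetical), runnable with pytest:

```python
# "Tests as code": what used to be a manual checklist line ("verify a
# discount never produces a negative price") is now an executable,
# version-controlled, endlessly repeatable artefact.
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production function under test."""
    return max(round(price * (1 - percent / 100), 2), 0.0)


def test_discount_never_goes_below_zero():
    # A failure here points directly at the functionality it guards.
    assert apply_discount(10.0, 150.0) == 0.0


def test_regular_discount_is_applied():
    assert apply_discount(100.0, 20.0) == 80.0
```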
But if there's one important remark worth remembering from all these considerations, I bet it's not the one about the optimal number of Testers (zero or more), but the one about testing as a very important, vital element of the definition of "runnability". The labels of your teammates do not matter much, as long as you're able to develop, maintain & deploy your pieces of the puzzle independently.