Don't tell me you haven't seen these job ads:

  • "Looking for Test Automating Engineer. Required skills: Gherkin, Cucumber & Spock Framework."
  • "Offering a position for Test Automation Architect, 3+ years of front-end web application test automation required."
  • "Open spot for Lead BDD Ninja! You speak fluent Selenium-jitsu, Jasmine-do & Karma-fu."

Clearly, enterprises have learned about the benefits of test automation (not a surprise, keeping in mind the endless swarms of Agile evangelists roaming around freely ;>) & are struggling to apply the practice in their own, specific reality. In their own, twisted ways ... And it appears that:

  1. it's not that easy to start test automation when 95% of the stuff you deal with is legacy ...
  2. until you get to the point of noticing a meaningful benefit from introducing automated tests, the majority of people have already got tired & frustrated due to the lack of tangible improvement
  3. the test automation "tax" is hardly predictable & even less transparent (no business person can tell whether devs are really writing an automated test or maybe just fiddling around with something)
  4. promised business functionality won't just deliver itself in the meantime -> the most experienced & skilled devs are primarily expected to deliver direct business value, not write tests for something that has already been developed & delivered

Fortunately, these obstacles no longer discourage enterprises entirely. Unfortunately, many of them decide to deal with them by cutting corners ... Sadly, a lot of the potential test automation benefit evaporates in the process. One such regrettable (IMHO) short-cut is the introduction of a 100% test automation-dedicated role named ...

Test Automation Engineer (TAE)

Sometimes called "automated tester" (which sounds really bizarre). It's a new role that keeps sprawling across teams - a sort-of-programmer whose primary duty is not to create new functionality, but to write automated tests, all sorts of them:

  • in code & in separate, dedicated tools
  • lower (unit) level & higher (scenario, integration, E2E) level
  • using natural language specifications (BDD, ATDD) or straight in code
  • for old & new stuff alike

Thanks to the dedication & sacrifice of these humble people, "standard" developers don't have to bother themselves with such mundane activities as writing all these pesky tests; they can go full-speed-ahead with well-appreciated business functionality, while TAEs strive to keep up with their pace in the meantime.

In theory, it kind of makes sense:

  1. devs & TAEs can scale independently
  2. TAEs can specialize in dedicated tools that are not that amusing for standard devs (like all these super-expensive enterprisey test creation suites I'd throw straight into the garbage ...)
  3. isn't testing supposed to be performed by different people than the ones who did the development? it has always been one of the core QA principles ...

Sadly, this way of thinking is severely flawed & tends to cause very serious problems ...

  1. taking the automated testing duty off the "standard" developers (by making it an explicit responsibility of TAEs) creates the impression that quality is not their main concern anymore; justifying it by saying "automated testing is someone else's job, but you're still supposed to do manual testing" makes things even worse ... like 100x worse ...
  2. well done testing can be a great design validation aid (like in TDD), do you really want to get rid of such a nice tool?
  3. if tests (once written) break, whose responsibility is it to fix them? The TAEs'? The developer's who broke the test?
  • if it's the developer - wouldn't it be in their best interest to fix stuff by introducing the crudest & bluntest change to the test(s)? Tests & their maintainability are on the TAE anyway ...
  • if it's the TAE - how long can a person stand the fact that someone continuously breaks their work (w/o caring) without inflicting physical violence, hmm?
  4. well-written tests are one of the most superior forms of living documentation, but to make this work, one has to really know all the ins & outs
  5. tests should be an integral part of the functional code, fully synchronized & committed together with it - otherwise tests will flicker, "5% red" becomes a normal state & the "broken window effect" builds up

Test swingers

It doesn't mean that it should be only X who creates & maintains tests for X's code. Team-level cooperation & duty split is more than welcome, but there are far better ways to split the work w/o introducing completely separate roles for that:

  • one developer creates interfaces (specifications) & builds failing test cases against these (not yet implemented) contracts; the other one's duty is to write a working implementation - and thereby both validate the design & turn the tests green
  • the basic idea described above can also be applied in BDD (one person writes just the natural language specs - happens quite often) & in pair programming
  • issuing a ticket in the form of a failing automated test (when a dev from team X issues a ticket for the product of team Y) is also a common & welcome practice (well, maybe not as welcome as the "fix it yourself & send a PR ;D" one)
  • yes, there are people who are Natural Born Testers with a nearly magical ability to break things :) but their super-powers manifest mainly in manual, exploratory, insightful testing
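The first bullet above can be sketched in a few lines - a minimal Python illustration (with `unittest` standing in for whatever framework a team actually uses; `DiscountPolicy`, `PercentageDiscount` & the 20% rate are made-up names & numbers, not anything from a real project):

```python
import unittest
from abc import ABC, abstractmethod


# Dev A defines the contract (interface) first ...
class DiscountPolicy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float:
        """Return the price after the discount; must never go below zero."""


# ... & writes failing test cases against the not-yet-implemented contract.
class DiscountPolicyContractTest(unittest.TestCase):
    def make_policy(self) -> DiscountPolicy:
        # Red until Dev B provides PercentageDiscount.
        return PercentageDiscount(rate=0.2)

    def test_discount_is_applied(self):
        self.assertAlmostEqual(self.make_policy().apply(100.0), 80.0)

    def test_price_never_negative(self):
        self.assertGreaterEqual(self.make_policy().apply(0.0), 0.0)


# Dev B's duty: an implementation that validates the design & turns the tests green.
class PercentageDiscount(DiscountPolicy):
    def __init__(self, rate: float):
        self.rate = rate

    def apply(self, price: float) -> float:
        return max(price * (1.0 - self.rate), 0.0)
```

Run with `python -m unittest` - before Dev B's class lands, the suite is red, which is exactly the point: the tests encode the agreed contract, not an afterthought.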

Bread'n'butter

It's 2016 already.

The ability to automate testing ought not to be considered an exceptional, supplementary skill. Or even worse - an alternate path for a developer's career. It's a basic, standard skill EVERY developer should possess. EVERY one. Regardless of the types of applications created, the languages, frameworks or platforms used. It's as elementary an ability as being able to debug & troubleshoot code (sadly, I've met too many developers who've somehow lost this competency ...).

It would be great if, instead of building artificial divisions & cherishing evident gaps in knowledge & skills, people focused on eliminating them.

Pic: © p!xel 66 - Fotolia.com
