To be honest, I hadn't heard about the ApprovalTests library until just a few days ago, when I spotted a new course on Pluralsight - Approval Tests for .NET. But the concept seemed interesting enough to catch my restless attention for a moment, so here comes the blog post ...

So, what's AT about? AT aims to replace typical asserts in automated tests with something more clear, transparent & human-readable. A test that uses AT:

  1. Performs some short scenario
  2. Dumps the results (serializes in any way you prefer ...)
  3. Compares the results with what you've "recorded" during the proper run - yes, it's your (the developer's) decision to point out the result of a proper, faultless run

Voila! If the idea seems odd & counterintuitive to you, check the tutorials here - it will most likely make more sense once you do.
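To make it more tangible, here's a minimal sketch of such a test using ApprovalTests.Net with NUnit. The InvoiceFormatter class is made up, just to have a scenario to run; Approvals.Verify & the UseReporter attribute are the actual library API:

```csharp
using ApprovalTests;
using ApprovalTests.Reporters;
using NUnit.Framework;

[TestFixture]
[UseReporter(typeof(DiffReporter))] // pops up your diff tool when the outputs differ
public class InvoiceFormattingTests
{
    [Test]
    public void FormatsInvoiceForPrinting()
    {
        // 1. perform some short scenario (hypothetical app code)
        var invoiceText = new InvoiceFormatter().Format(orderId: 42);

        // 2. + 3. dump the result & compare it against the approved
        // file sitting next to the test
        Approvals.Verify(invoiceText);
    }
}
```

On the first run the comparison fails & the library leaves a *.received.* file next to the test; once you approve it (rename it to *.approved.*, or let the diff tool do that for you), all subsequent runs are compared against that file.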

What's so special about that?

Well, quite a few things:

  • by "serializing the results" you may mean whatever you want, for instance:
    • overriding ToString to extract just the most meaningful data (see the sketch after this list)
    • generating an HTML view (!)
    • making a screenshot (!!)
    • even recording a sound message (!!!)
  • you can't create tests up-front (well, at least not the assertion part, but you can prepare whole scenarios) - to actually make this approach work, you have to "manually" approve the dump: from then on, re-runs are validated against what you originally approved
  • you have to be really careful about what & how you serialize - if you:
    • serialize too little - tests will not validate the actual correctness
    • serialize too much - tests will fail on meaningless changes - imagine comparing raw WPF screen dumps or whole web app HTML sections
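Here's a quick sketch of that first option - assuming a hypothetical Order class, the ToString override keeps the volatile timestamp out of the dump, so the comparison only fails on changes that actually matter:

```csharp
using System;
using ApprovalTests;
using NUnit.Framework;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public DateTime CreatedAt { get; set; } // volatile - deliberately left out of the dump

    // serialize just the meaningful data; including the timestamp would
    // make every run produce a different dump & fail the comparison
    public override string ToString() => $"Order #{Id}, total: {Total:0.00}";
}

[TestFixture]
public class OrderApprovalTests
{
    [Test]
    public void ApprovesOrderSummary()
    {
        var order = new Order { Id = 7, Total = 99.90m, CreatedAt = DateTime.Now };
        Approvals.Verify(order.ToString());
    }
}
```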

What would you do that for?

Doing something just because it's different ain't a proper purpose, so why try AT at all?

Having your code automatically tested is great, but even if (as a dev) you're fully convinced that your tests are good enough (adequate coverage & quality) to detect regression, no artifacts should be released to production without being approved / accepted by business owners / sponsors / users. It's them you HAVE TO convince as well.

And they are usually quite bad at reading through code, regardless of whether it's functional code or test code ...

BDD / ATDD tries to fill that gap & AT tries to achieve the same, but it follows a slightly different path: instead of exposing a simple, readable & credible (for the sake of transparency) test structure, it visualizes the assertion as a comparison of two human-readable portions of information.

What am I missing here?

AT doesn't work alone - it's very transparent & readable in terms of presenting the expected & actual test output, but that's not enough to make tests credible in the eyes of business people. They have to be 100% assured that the tests:

  1. cover all the critical scenarios (in AT there's no indication of what the test actually does, just the end result!)
  2. cover all the meaningful decision paths within a scenario
  3. do exactly what they are supposed to do, based on the test case description

BDD/ATDD do well with #1 & #2, but they completely suck at #3. AT shines at #3, but doesn't support #1 & #2. Fortunately, there's no reason why you couldn't merge those two techniques: structure your tests using an ATDD library (like my favourite SpecFlow) & do the assertions using AT.
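I can only offer a sketch of how that could look - a SpecFlow binding where the scenario structure comes from Gherkin & the assertion is delegated to AT. The checkout steps & CheckoutService are made up; the SpecFlow attributes & Approvals.VerifyHtml are the real APIs:

```csharp
using ApprovalTests;
using ApprovalTests.Reporters;
using TechTalk.SpecFlow;

[Binding]
[UseReporter(typeof(DiffReporter))]
public class CheckoutSteps
{
    private string _confirmationHtml;

    [When(@"the customer completes the checkout")]
    public void WhenTheCustomerCompletesTheCheckout()
    {
        // hypothetical app code - produce something human-readable
        _confirmationHtml = new CheckoutService().CompleteOrder(cartId: 1);
    }

    [Then(@"the confirmation page looks as approved")]
    public void ThenTheConfirmationPageLooksAsApproved()
    {
        // the Gherkin scenario covers #1 & #2 (readable, business-approved
        // structure), AT covers #3 (readable, business-approved assertion)
        Approvals.VerifyHtml(_confirmationHtml);
    }
}
```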

Honestly, I haven't tried that yet, but the idea seems just too good to pass up.
