Just some random thoughts / stuff I've bumped into recently. Enjoy.

Interweave tests in support

I love this legacy code rule (it's supposed to help fight technical debt):

When you touch any existing piece of code, always leave it in at least slightly better shape than it was before.

It makes sense and sounds golden, but it's quite intangible (or ephemeral) by nature. What does "slightly better shape" mean? Is ReSharper's auto-format just fine? :P Fortunately, there's another rule - it's quite ancient, but someone reminded me of it quite recently (for which I am very grateful):

Before you fix any bug you're on, write a test. A test that:

  1. proves that the bug exists (by failing)
  2. proves that the bug is fixed once you fix it (by passing)
  3. prevents that (or a similar) bug from appearing again one day in the future (regression)

Capiche? It's that freakin' simple. Far more tangible & far less subjective.
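
To make it more concrete, here's a minimal sketch in Jasmine (the sumOrderTotal function & its empty-array bug are made up for illustration):

// hypothetical bug: reduce without an initial value throws on an empty order
describe("sumOrderTotal", function() {
  it("returns 0 for an empty order (regression)", function() {
    // fails while the bug exists, passes once it's fixed
    // & keeps the bug from ever sneaking back in
    expect(sumOrderTotal([])).toEqual(0);
  });
});

// the fix: provide the initial value that was missing
function sumOrderTotal(items) {
  return items.reduce(function(sum, item) {
    return sum + item.price;
  }, 0);
}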

Black magic is bad

When I discovered scenario (E2E) tests in SPA web development, I was delighted. They fit so well with BDD & let me compose tests at the level of the functional model quite easily. A toolset based on Selenium & WebDriver seemed like a silver bullet for all my problems. Unfortunately, in the end it caused more problems than it solved ... It seems like a topic for a full blog post (or even two), but I want to mention one thing in particular that may look very controversial at first glance:

As devs move to testing behavior by examining changes at the output DOM level, whatever's beneath becomes more & more black magic to them. Things like the digest loop (in Angular) or the view lifecycle (in Backbone) are barely understood - "they are ... non-deterministic & asynchronous", "things happen in random order", "you never know when stuff is done".

In theory that's fine - in your tests you should not verify the way the code works (implementation details), but the expected outcome. However, not knowing what's actually going on may lead straight to pathological behaviours, like fixing bugs by adding a dirty-hack-overlay-workaround as a cherry on top.

STOP. Just don't.

If you don't understand how the code runs, help yourself with tests: instead of high-level, abstract, DSL-like E2E tests, go for simple (but blazing fast) component tests. There are plenty of pros: you don't need sophisticated tooling, they are far less brittle & they help you validate your low-level assumptions about what's actually happening 'under the hood'. And last but not least - they increase your understanding of your frameworks'/libraries' internals, which is priceless.
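
Here's a minimal sketch of such a component test (assuming Angular 1.x with angular-mocks & Jasmine; the template binding is made up) - instead of treating the digest loop as non-deterministic voodoo, the test invokes it explicitly:

describe("a component test instead of black magic", function() {
  var $rootScope, $compile;

  // angular-mocks wires up the injector for us - no browser, no WebDriver
  beforeEach(inject(function(_$rootScope_, _$compile_) {
    $rootScope = _$rootScope_;
    $compile = _$compile_;
  }));

  it("renders the binding once the digest loop has run", function() {
    var scope = $rootScope.$new();
    scope.name = "Ann";
    var element = $compile("<p>{{ 'Hello, ' + name }}</p>")(scope);

    scope.$digest(); // *we* decide when the 'magical' loop runs

    expect(element.text()).toEqual("Hello, Ann");
  });
});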

Async tests using Jasmine 2.0

Since version 2.0 of Jasmine (which appeared about 2 yrs ago, so it isn't the freshest thing around, mmkay), all major in-browser JS test runners have async code testing support built in. USE IT. BEFRIEND IT.

It's extremely simple & it works both in tests & in set-up/tear-down functions:

describe("lets do something ...", function() {  
    beforeEach(function(done) { 
      // async initialization
      setTimeout(function() {
        ... // do something
        done(); // <-- explicitly let know that init is done
      }, 100);
    });

    it("... wickedly asynchronous", function(done) { 
      callWithCallback(... , {
        success: function (result) {
          expect(result).toEqual(sthng);
          done(); // <-- explicitly let know that test is done
        },
        error: function() {
          done.fail("something went terribly wrong ...");
        }
      });
    });
  });

Ignoring the fact that at least part of your code runs asynchronously (& honestly - can you deny that?) will cause your tests to flicker & will degrade any value they could provide (more & more over time).

Spying in your tests

More & more people use mocking in JavaScript: because it's easy, because it helps to reduce the test context & because there are tools that make it simpler - for instance, auto-mocking in Jest or the general-purpose mocking library Sinon.

However, there's one very interesting feature provided by libraries like Sinon that doesn't get as much love as it deserves: spying. People do stubbing / mocking / faking & all other kinds of playing with returned results, but they keep forgetting about spies - which are (together with adding ancillary event handlers in event-driven applications) the best way to learn what's going on inside an application that wasn't designed to be 100% transparent & testable.

I'm not saying that you should use spies to track implementation internals - that would be both stupid & prone to breaking with future changes. Go for the external, public API instead:
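
Here's a minimal sketch of what I mean (assuming Sinon with Jasmine; the cart & notifier objects are made up for illustration) - the spy wraps a public method, lets the call go through & records it for later verification:

describe("checkout", function() {
  it("notifies the user exactly once", function() {
    // wrap the real public method - behavior stays intact,
    // but from now on every call gets recorded
    var notify = sinon.spy(notifier, "notify");

    cart.checkout();

    expect(notify.calledOnce).toBe(true);
    expect(notify.calledWith("order-confirmed")).toBe(true);

    notify.restore(); // un-wrap, so other tests see the original method
  });
});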

