Let me tell you a short fairy tale I heard last week:

We started development on our tech-debt elimination project a few days ago. There's so much complicated work to be done & it involves so much testing effort that there's no other option but to release everything all together in T months (T > half a year).

It makes even more sense if you take into consideration that some of this work involves changing the actual technology beneath - we're substituting code in language/platform X with code in language/platform Y.

This was supposed to be the best, safest, most stable & least risky approach.
IMHO - rather the biggest misunderstanding or the most spectacular bullshit :)

... but we need our 'architecture' re-written & it's used everywhere ...

Sooo, you're telling me that you'll be enhancing / re-writing some kind of 'architecture' & you won't know whether it works until everything runs on it? What kind of 'architecture' is that?!

Keep it simple, code only what's necessary to run - prove it immediately with real-life usage. Dogfood if needed. Designing & creating a big BDUF-powered (Big Design Up Front) 'architecture' fails in 95% of cases.

... but our stuff is so tangled ...

Wait a second. What you've got is a big ball of mud &, given an opportunity for a refresh, you're planning to end up with the same big ball of mud ported 1:1 to another platform/language?!

What about putting in real effort, instead of mechanical translation, to actually remove some technical debt? Just use the freaking shovel method & scrape away bit by bit (see the sketch after this list), making sure that you keep:

  • domain structures' cohesion high
  • coupling low
  • boilerplate as simple as possible
  • no redundancy
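
A minimal sketch of what "bit by bit" can look like in practice (Python here, since the post keeps X & Y generic; InvoiceService & friends are hypothetical names I'm inventing for illustration, not anyone's real API): carve a seam, put both implementations behind it & route traffic slice by slice, so every scraped-off piece ships & proves itself immediately:

```python
# Strangler-fig seam sketch; all names below are illustrative assumptions.
from typing import Protocol


class InvoiceService(Protocol):
    """The seam: old & new code satisfy the same contract."""

    def total_due(self, customer_id: str) -> int: ...


class LegacyInvoicing:
    def total_due(self, customer_id: str) -> int:
        return 1000  # existing big-ball-of-mud logic stays untouched here


class NewInvoicing:
    def total_due(self, customer_id: str) -> int:
        return 1000  # the freshly extracted, decoupled implementation


def invoice_service_for(customer_id: str, migrated: set[str]) -> InvoiceService:
    """Route per customer: migrate one slice at a time, release continuously."""
    return NewInvoicing() if customer_id in migrated else LegacyInvoicing()
```

Each slice moved behind the seam is live, exercised by real traffic & independently releasable - no T-month big bang required.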

... but how can we test it if it ain't complete ...

LOL (the sad, nervous, concerned, wrinkled kind of laughter though ...)

Code that requires full, manual regression testing of the whole module / application / system is a burden, technical debt's horn of plenty - you can accept that & do nothing about it, or just start fixing things. There's never a better time for that than "now" - use every refactoring opportunity you can to tidy things up.

... but business people don't want to test it until it's 100% done ...

Well, it's hard to blame them - they will put in some effort only if you convince them there's something in it for them: that's one of the key reasons why you should always aim to add business value in every piece of code released. The other option is to automate comparative testing of the old & new pieces of code (I'm referring here to the move from X to Y), but in some cases that may be really tricky - a minimal sketch follows.
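
By comparative testing I mean something like this (a sketch, assuming both implementations can be called side by side; shadow_call & the logger name are made up for illustration): keep serving the legacy result, run the new code in its shadow & diff the answers:

```python
# Shadow / parity-testing sketch; names are illustrative assumptions.
import logging

logger = logging.getLogger("migration.parity")


def shadow_call(legacy_fn, new_fn, *args, **kwargs):
    """Serve the legacy result, run the new code in its shadow, diff both."""
    legacy_result = legacy_fn(*args, **kwargs)
    try:
        new_result = new_fn(*args, **kwargs)
        if new_result != legacy_result:
            logger.warning("parity mismatch for %r: legacy=%r new=%r",
                           args, legacy_result, new_result)
    except Exception:
        logger.exception("new implementation failed for %r", args)
    # Production behaviour stays defined by the legacy code until parity holds.
    return legacy_result
```

The tricky cases are the non-deterministic ones (timestamps, side effects, external calls) - those need normalization before diffing, or the mismatch log will drown you in false positives.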

... but it may be unstable & unreliable until we move everything ...

That's just another visible proof that:

  • you're not putting enough intellectual effort into the proper design of the target architecture
  • the level of coupling in the to-be application architecture is beyond the acceptable threshold (see the sketch below)
  • your transition approach is flawed
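
If a partially migrated system really is unstable, the seam is usually missing an isolation layer. A minimal anti-corruption-layer sketch (again with hypothetical names - the legacy CRM & the Customer model are invented for illustration): wrap the not-yet-migrated side behind an adapter, so the new code depends on a stable contract rather than on the mud behind it:

```python
# Anti-corruption-layer sketch; all names are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Customer:
    """Stable model the new code depends on, insulated from legacy shapes."""
    id: str
    email: str


class LegacyCrmAdapter:
    """Translates the legacy module's quirks into the stable contract."""

    def __init__(self, legacy_crm):
        self._crm = legacy_crm  # the tangled, not-yet-migrated code

    def customer(self, customer_id: str) -> Customer:
        raw = self._crm.fetch(customer_id)  # legacy call, legacy data shape
        return Customer(id=raw["CUST_ID"],
                        email=raw["MAIL_ADDR"].strip().lower())
```

With that in place, instability in the unmigrated half shows up at one well-defined boundary instead of leaking everywhere.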

... we're doing it step by step: first the whole codebase from point A to point B, then the whole codebase from point B to point C, ... and we're there, voila!

You need to have a really good excuse for that model. Why? Because:

  1. the intermediate steps usually deliver no value of their own - they're nothing but steps towards the target
  2. your whole codebase stays "not ready" until the very end: you get no feedback on your overall approach until then (does it scale? is it error-resilient enough? etc.), there's no guarantee you won't need additional steps, etc.
  3. every step modifies the whole codebase, everything - and if you're lacking automated regression testing (and, as you're dealing with legacy, you most likely are), that's a waste

See? Cheap excuses.

Longer release cycles may sometimes feel appropriate, but that's usually down to either mental laziness, very poor quality of the initial codebase, an overall lack of ownership of the codebase, or a limited vision of the approach ("we'll work something out once we've started").

Shortening the release cycle may require additional activities (& increase the amount of work), but it will pay off very soon - such an approach will force you to put effort into refactoring the architecture towards lower coupling & automated component testability (a contract-test sketch follows the list), and that will result in:

  1. Lowering the failure risk
  2. Being able to deliver new functionality more frequently
  3. Decreasing the range of regression / acceptance tests (& finding out their real boundaries)
  4. Reducing the cost of regression testing
  5. Removing the dependencies between the development lifecycles of particular modules
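
By component testability I mean that one contract suite can run against both the old & the new implementation of a seam. A pytest-style sketch (LegacyInvoicing & NewInvoicing are the hypothetical classes from the first snippet, stubbed here so the file stands alone):

```python
# Contract-test sketch; the implementations are stubbed illustrations.
import pytest


class LegacyInvoicing:
    def total_due(self, customer_id: str) -> int:
        return 1000  # stand-in for the legacy logic


class NewInvoicing:
    def total_due(self, customer_id: str) -> int:
        return 1000  # stand-in for the rewritten logic


@pytest.fixture(params=[LegacyInvoicing, NewInvoicing])
def service(request):
    # The same contract suite runs against every implementation of the seam.
    return request.param()


def test_total_due_is_non_negative(service):
    assert service.total_due("customer-42") >= 0
```

Once regressions are caught per component at the seam, the "full manual regression of everything" tax from earlier simply disappears.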

Looks compelling, doesn't it? This is what tech debt elimination is about - not just moving your code-shit-stuff-base from language X to language Y.
