A short conversation I had today:

Me: How are we doing with X? What's the current state - how far have we gone & what's left to do?

A: Good news - we're pretty much done & it's a clear success. All the functional & performance tests have passed, everything is successfully deployed to production, all the necessary info has been distributed, access rights have been granted ... Just a few finishing touches and we can proceed to Y - there's some interesting stuff to be done there as well.

Me: Neat. What can you tell me about adoption? Do we have actual usage numbers: how many entries have been put in, how many users log in daily, are these numbers increasing?

A: Huh? Dunno. Software is deployed & ready to rumble. We've done our stuff, right?

Ermm, no. Not at all.
On time & on budget doesn't necessarily equal SUCCESS.

Sh!t in, sh!t out

Building an IT product / service isn't just about coding and / or deploying a software package based on a carefully noted-down wishlist. Before you actually start the code work, you should already have a clear purpose: WHAT you'll be doing and WHY you'll be doing it. That part should be obvious - it feels like I've written about it a million times already.

But there's also the 2nd side of the equation - once your product (hopefully an MVP, not a leviathan that was developed for the past 3 years without any partial release) is ready to be used, you have to validate whether it really meets the initial expectations:

  • does it add the assumed value for the people it was supposed to aid?
  • can this value be reasonably quantified (tbh, this question should be answered beforehand) ...
  • ... and if so - does this value justify the cost(s)? Does it live up to the potential we identified?
  • don't settle for snapshots of data - trend analysis (the change over time) is even more important
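The snapshot-vs-trend point can be sketched in a few lines of code. This is a minimal, hypothetical example - the event log, user names, and field layout are assumptions for illustration, not anything from a real project:

```python
from collections import defaultdict
from datetime import date

# Hypothetical adoption log: one (day, user_id) pair per login.
events = [
    (date(2024, 1, 1), "alice"), (date(2024, 1, 1), "bob"),
    (date(2024, 1, 2), "alice"), (date(2024, 1, 2), "bob"),
    (date(2024, 1, 2), "carol"),
    (date(2024, 1, 3), "alice"), (date(2024, 1, 3), "bob"),
    (date(2024, 1, 3), "carol"), (date(2024, 1, 3), "dave"),
]

def daily_active_users(events):
    """Distinct users per day - the snapshot metric."""
    per_day = defaultdict(set)
    for day, user in events:
        per_day[day].add(user)
    return {day: len(users) for day, users in sorted(per_day.items())}

def trend(series):
    """Day-over-day deltas - this is what tells you whether adoption grows."""
    values = list(series.values())
    return [b - a for a, b in zip(values, values[1:])]

dau = daily_active_users(events)
print(dau)          # snapshots: {2024-01-01: 2, 2024-01-02: 3, 2024-01-03: 4}
print(trend(dau))   # trend: [1, 1] - consistently positive, adoption is growing
```

A flat or negative trend here would be the early warning that "deployed & ready to rumble" is not the same as "used".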


There are very few things worse for an IT dev squad than producing a product / service that won't even be used - straight from development it goes "on the shelf" to remain there forever, because:

  • it's unusable
  • it can't be adapted to reality that has changed during the development
  • it doesn't fit the organization processes / adhere to its standards
  • it requires additional resources that can't be obtained
  • whatever some executive came up with

I know what I'm talking about - it has happened to me a few times as well (honestly, the last time was not that long ago). The effect on the team is pretty obvious & I'm not going to elaborate on it. What I want to say is that it's a waste of money & effort - one that can't be justified with:

"Hey, business people wanted that & we're here just to do what they want of us. And we've delivered."

Speak up. Dispute & challenge the idea. Constructive criticism FTW. If the organization isn't mature enough for that, shame on them, poor bastards - at least make sure you've tried & close the door after you burn your bridges ;>

And if this stuff can't be measured ...

... then:

  1. either you're doing something terribly wrong
  2. or you're chasing a volatile, intangible target that may not be worth pursuing
  3. or maybe you're approaching the problem incorrectly (happens now & then, don't worry)

Honestly, since I started working professionally in IT in 2001, I haven't done a single project whose final outcomes couldn't be measured / evaluated against the assumed targets.