When it comes to deciding between "build" and "buy", it's usually a hard choice to make. Even if you pick a ready, off-the-shelf product, it usually needs a lot of polishing to make it work. You can call yourself lucky if such adjustments can be made through configuration or SDKs - sometimes the product isn't flexible enough to fit your particular needs, and it's your processes and data that have to adapt. Hopefully that won't affect your day-to-day operations …

Why bring this up right now? Because I want to share a story about where to draw the line on how far you're willing to adapt your environment to the product that is supposed to "support" it.

This particular organization was running a respectable number of parallel IT projects. Some of them lasted a few months, some had a lifespan of years - they were using phased waterfall (even if they were afraid to call it that :>). They had two problems that matter a lot in this story:

  1. Their tests were manual all the way
  2. They didn’t maintain any kind of documentation (not even requirements), because they found it too expensive and burdensome

As a result, they had big problems with regression testing: keeping a test case repository, tracing test cases back to requirements, monitoring test execution at a higher (cross-team) level. They were re-creating test cases every few months - each time less accurate and less clear. To their credit, they saw the issue and were willing to improve their situation (good for them, some organizations struggle even with that), but their problem was that they …

… decided to use their SIR tracker to fill the “gaps” I’ve mentioned above. That means they wanted to treat:

  • a requirement as a SIR
  • a test case as a SIR
  • a test case instance as a SIR
  • a SIR as a SIR :)

It doesn’t make much sense, right? So why did they pursue such a plan? Because of just one feature - direct references. They simply wanted to be able to trace the links between requirements, test cases and true SIRs, even if that meant that:

  1. each “entity” would be processed as a separate workflow (sic!)
  2. as those entities (like requirements and regression test cases) live for a while, these SIRs would be “long-running” as well
  3. each entity would have only a limited set of fixed fields, quite typical for SIRs…

My preaching had barely any effect - they were unable to see that what they needed was a versatile database with relationships: something able to keep tons of information browsable at different levels of detail. Because the application they need is:

  • [Inventory (data-oriented)] - applications that are all about the data and its state (snapshot). It’s not really that important whether you keep the history, or how - those applications are made to help you search, filter, drill into and manage your information. And that information is meant to be kept in a coherent, non-duplicated form.
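To make that concrete, here is a minimal sketch of what such an "inventory" model could look like - a handful of related tables rather than one generic ticket type. The table and column names (requirement, test_case, test_run, sir) are purely illustrative, not anything this organization actually had:

```python
import sqlite3

# Hypothetical, minimal "inventory" model: entities and their relationships,
# not workflow states. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE requirement (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE test_case (
    id             INTEGER PRIMARY KEY,
    requirement_id INTEGER NOT NULL REFERENCES requirement(id),
    title          TEXT NOT NULL
);
CREATE TABLE test_run (            -- a "test case instance": one execution
    id           INTEGER PRIMARY KEY,
    test_case_id INTEGER NOT NULL REFERENCES test_case(id),
    result       TEXT NOT NULL     -- e.g. 'passed' / 'failed'
);
CREATE TABLE sir (                 -- a true SIR, linked to the run that raised it
    id          INTEGER PRIMARY KEY,
    test_run_id INTEGER REFERENCES test_run(id),
    summary     TEXT NOT NULL
);
""")

# Traceability falls out of plain joins: which requirements have failing runs,
# and which SIRs were raised against them.
rows = conn.execute("""
    SELECT r.name, t.title, tr.result, s.summary
    FROM requirement r
    JOIN test_case t ON t.requirement_id = r.id
    JOIN test_run tr ON tr.test_case_id = t.id
    LEFT JOIN sir s  ON s.test_run_id = tr.id
    WHERE tr.result = 'failed'
""").fetchall()
```

The point is that traceability is a property of the data model itself - the "direct references" they wanted come for free, without forcing every entity through a ticket workflow.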

They were also not able to realize that a SIR tracker is:

  • [Workflow (process-oriented)] - applications that are all about aiding the process. Pure data isn’t their focal point. They are all about the flow - who is doing what at what stage. When you work in an app like that, you’re mainly interested in your chunk of work, and when it goes to someone else (so they can perform some action), it drops out of your sight.
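For contrast, here is a caricature of how a workflow-oriented tracker sees the world: a generic record with a status, an assignee and a fixed set of allowed transitions. Again, the states and fields below are made up for illustration, not taken from any real tracker:

```python
from dataclasses import dataclass

# Hypothetical workflow-oriented record: one generic "issue" type with a status
# and an assignee. The states are illustrative only.
TRANSITIONS = {
    "open":        {"in_progress"},
    "in_progress": {"resolved", "open"},
    "resolved":    {"closed", "open"},
    "closed":      set(),
}

@dataclass
class Sir:
    summary: str
    assignee: str
    status: str = "open"

    def move_to(self, new_status: str, new_assignee: str) -> None:
        # The tracker cares about who holds the ticket and what stage it is in,
        # not about how this record relates to any other data.
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status, self.assignee = new_status, new_assignee

# A "requirement" forced into this shape is just another ticket in a queue:
req = Sir(summary="REQ-42: export report to CSV", assignee="analyst")
req.move_to("in_progress", "tester")   # and the analyst loses sight of it
```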

What’s the real difference? Are theoretical definitions that important? If I add a few fields to the SIR tracker, I’ll have the information I need, won’t I?

Sure, you will, but will it be accessible enough? Will the user be able to reach what they need on one screen? In a form that is concise and readable enough? That’s what happens when you don’t design applications around use cases / user scenarios - you end up with unusable abominations (with a proper set of data beneath ;P).
