Some time ago I had an interesting discussion with one of my colleagues about what typically goes wrong when a software delivery project turns out to be unsuccessful. Many of the reasons we identified were not really revealing:
- new, unknown, bleeding-edge, unproven technology - no comment needed, but let's assume we play it safe (just for the sole purpose of this article)
- wrong product built - something that is not really needed, doesn't fulfill true needs, etc. - it's a very important point, but let's agree that we focus here just on the technical delivery process, not the actual value (a seriously limited perspective, I realize that)
- the assumed delivery date has been exceeded (delivery takes more time than expected for numerous reasons - starting with bad estimation, ending with bottlenecks & staffing issues)
- there are quality issues (again - of several different types: insufficient performance, low availability, business scenarios that can't be fulfilled, data loss, etc.)
Not really surprising stuff, is it? Anyway, in such cases "unsuccessful" usually means that the whole endeavor is over budget (completing work / fixing issues takes time & money) - it can be saved with more money (and/or time). Even if there's something utterly broken (wrong architectural assumptions, etc.) & some rewrite has to take place, I can barely imagine a situation so bad that the product can't be saved with some smart refactoring and has to be completely rewritten instead.
You could interpret that in the following way:
delivery will happen, sooner or later, it's just that some products are delivered efficiently (fast) & some are born in pain (slower) - but in the end, all of them will eventually get there (unless they are cancelled in the meantime, of course)
If you look at it from a pure software development perspective, it makes a lot of sense: e.g. if you're building a typical JavaScript-inflated web MVW application with lightweight services + RDBMS storage, you may make some mistakes, but you'd have to suck big time to not be able to develop such an application at all.
So it makes sense, ...
But it's wrong!
Zero (or even negative) velocity projects happen. Fruitless. Literally no progress at all. Not convergent to the set target. Like burning money in a furnace, throwing it out of the window of a speeding train, or flushing it down the toilet.
So when can that happen? Let me describe just a few of the most typical scenarios:
- Each delta of work causes required refactoring bigger than the delta itself (on average) - that usually happens when the architecture is very crappy, with tons of dependencies, & important decisions are being revisited & changed frequently. Team(s) keep changing stuff for the nth time, but each (even small) change causes an avalanche of other ones. Each consecutive change is even worse, because you hit more & more limitations & create more & more exceptions ("ifs" & other special cases) that are ever harder to maintain (& to remember when making the next design decision). Loop of death. Been there, seen that. (A toy numeric sketch of this scenario follows the list.)
- Done is never done (building up an "onion") - usually due to open & unclear boundaries of features / user stories; the backlog swells from day to day - more new tasks appear than get concluded, BUT they are still about the same conceptual pieces of functionality (features) - the list of tasks within a particular feature changes so frequently that the Product Owner / Sponsor / Analyst never knows when a particular, desired effect will be achieved. Well, it never will be.
- Excessive work "inventory" with delayed feedback - developers create a huge increment, but due to a faulty process set up by people deprived of imagination and the last remnants of common sense, they don't get any feedback until the last commit is made. What comes next is a short & intensive storm of disappointment and failed expectations, concluded with a humongous tide of change requests turning everything upside down again, comparable (scale-wise) to the initial increment. Not without impact on the motivation and enthusiasm of the developers, of course.
- Blindfolded development, where transparency doesn't happen - everyone is extremely busy doing something, maybe even something that makes the most sense in his/her opinion, but there's no common, shared vision, communication barely takes place, priorities are never aligned, teams work in separation - without any urge to integrate & synchronise. No one claims overall responsibility for this mess. Everyone runs as fast as they can, but in different directions ...
- Galloping on a dead horse - the whole organisation has lost the ability to deliver ANYTHING. Field-level leaders & other smart dudes have left, the culture is corrupt & very restrictive, higher-level leadership has lost its credibility and respect among the crew, 95% of people (all but the idiots) are investigating exit options. Contrary to the previous scenarios, real work doesn't even happen here - everything is fake.
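To make the first scenario (the "loop of death") more tangible, here's a minimal toy model - entirely my own illustrative assumption, not a real project metric or anything from actual tooling. The `net_progress` function and its parameters (`delta`, `rework_ratio`, `ratio_growth`) are hypothetical names; the only point is to show in numbers how velocity hits zero or goes negative once the forced refactoring per change grows bigger than the change itself.

```python
# Toy model (illustrative assumption, not a real project metric):
# every sprint attempts `delta` units of functionality, but a crappy,
# dependency-heavy architecture forces rework proportional to that delta.
# As special cases ("ifs") pile up, the rework ratio itself keeps growing.

def net_progress(sprints: int, delta: float,
                 rework_ratio: float, ratio_growth: float = 0.0) -> float:
    """Total useful progress after `sprints` iterations (hypothetical model)."""
    done = 0.0
    ratio = rework_ratio
    for _ in range(sprints):
        done += delta - ratio * delta   # what survives the refactoring avalanche
        ratio += ratio_growth           # each change leaves more special cases behind
    return done

if __name__ == "__main__":
    print(net_progress(10, delta=5, rework_ratio=0.2))                    # healthy team: 40.0
    print(net_progress(10, delta=5, rework_ratio=1.0))                    # zero velocity: 0.0
    print(net_progress(10, delta=5, rework_ratio=0.8, ratio_growth=0.1))  # starts fine, ends at -12.5
```

With a rework ratio at or above 1.0 the total simply never converges towards the target - exactly the "not convergent" case described above, no matter how many sprints you throw at it.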
Let's stop here. There are more scenarios, but I think you've got the idea already. Why did I bother to write all of that? Unfortunately, some less experienced (or too content-free) managers tend to think that software delivery is piss-easy, crude & straightforward stuff where you can compensate for all the mistakes with scope cuts / death marches / hidden contingencies. Which is bullshit, of course.
In fact - you can be spending tons of money and have everyone running like crazy, yet literally make no progress at all - without the slightest hope of final success. And due to the ephemeral nature of software, it's not that trivial to recognize the symptoms & discover the true state of the project. In such cases, courageous & (sometimes) drastic changes are A MUST - the sooner, the better. Hoping that the situation will fix itself, that the "project will eventually get into motion", etc. is a humongous mistake, as the situation is quite likely to get even worse.