Software is like nothing else we had in this world before it emerged. Ephemeral, not physical, yet with constantly growing indirect physical effects on our world. Software development started as a small, isolated industry (focused on actual computing), but now it powers almost every other industry (and the ways it powers them can differ significantly).

Software is everywhere. And if it still isn't, it will be very soon.

Flannel-collared crime

OK, that's all more than obvious, but there's a particular related sub-topic that's rarely covered in full depth: accountability & the implications of failure in software development. In general it's nothing new: different applications / services serve different purposes & due to bad mojo (a bug, faulty design, a security breach, etc.) things may get very ugly:

  1. there may be financial consequences (incl. image / reputation loss) - from small ones (like a non-processed retail transaction or two) to not-so-small ones (mistakes in multi-million settlements, bad pricing affecting a vast volume of transactions, unavailability of a business-critical service for a large enterprise, etc.)
  2. there may be legal consequences (a protected data leak, strict regulatory requirements not met - e.g. stock market domain specifics)
  3. and last but not least (here comes the whole point of this blog post) - there may be consequences related to human health or even human life & death

Uncle Bob elaborated on some of these during his keynote at Build Stuff 15 & it got me thinking. Everyone has heard about cases that belong to categories no. 1 & 2 - even about big, stable companies going out of business due to financial losses caused by bugs / data leaks. But ... it's not that hard to imagine bug(s) in software responsible for:

  • traffic lights coordination
  • aided landing systems in aircraft (ILS)
  • a nuclear power plant's maintenance (or any other plant's, TBH)
  • damn, even ABS or any other driving aid system

Yes, things may get bold. Let's face the facts: sooner or later a tragedy will happen due to a programmer's actions - intended (terrorism, sabotage, felony) or not (an ordinary bug). If, for instance, one person dies, there's a big chance that public opinion won't notice. But what happens if 5 people do? 10? 50? 200? Can you see the headlines already?

  • Untested "Pull-the-trigger" Request caused 10 casualties.
  • Software factories of Death in XYZ collect a bloody harvest.
  • Popular library jQuery suspected of being the main cause of the recent tragedy. Arrest warrants for GitHub repo maintainers have already been posted.

Regulated industry?

Uncle Bob clearly stated that our future is determined: software development will become a regulated industry. Once the first tragedy happens, governments will feel obliged to prevent it from happening again by forcing strict & precise norms / rules & regulations to assure a proper level of quality in software.

I dare to disagree (with both Uncle Bob & the governments alike ;>):

  • software is too intangible to be controlled that way - its boundaries are far less clear, it has a temporal character (take processes) & in many places it's hard to "place" it. What's more, it exists in so many shapes with blurred definitions (services, applications, processes, ...) that any form of forced regulation can easily be avoided.
  • obviously there may be an attempt to enforce the only allowed & permitted "boundaries" & definitions ("as of today, JSON is not allowed anymore", "to create a new JavaScript framework you have to apply to the Global JavaScript Registry", etc.) - but can you seriously imagine that? Or rather - can you imagine this being effectively respected / controlled? I can't.
  • there's always the temptation to regulate the process / people instead of the deliverables - but the idea of a "programming license" (as an analogy to a "driving license") has never been further from making any sense (keeping in mind not only the depth but also the breadth of our profession)
  • regulating just the most crucial systems / services makes some sense (well, they are usually already subject to some regulations ...), but less & less of it as services become more & more interconnected & dependent on each other -> I can easily imagine a critical system performing a harmful action due to getting erroneous information from a commodity system

Can the industry do something about it?

When I was a kid (a coding kid, ofc), I had high hopes for the programming tools of the future: languages, IDEs, build systems. I expected them to improve not just in terms of looks & widget-creating wizards - I wanted them to be smarter. I believed in the idea of componentization & RAD (rapid application development - not the crappy method, but the paradigm itself) - that one day we'd be constructing software from composable, reliable & well-bounded "bricks", in a visual, bug-less, natural manner.

But it seems my hopes were in vain - the only smarts acquired by dev tools are about linting, code completion & pre-made inspections. We've almost got rid of unmanaged memory (for a price!), but there are still many other traps lying around. What's more, software gets more & more distributed - a great challenge in itself.
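
To make those "traps lying around" a bit more concrete, here's a minimal, hypothetical Java sketch (my own illustration, not taken from any system mentioned in this post): the garbage collector takes care of the memory, the compiler & typical linters stay silent, yet a plain data race still corrupts the result at runtime.

```java
// Hypothetical illustration: managed memory doesn't protect us from
// concurrency traps. Two threads increment a shared counter without
// synchronization; increments get lost, and no tool complains.
public class LostUpdateDemo {
    // Shared mutable counter; counter++ is a non-atomic read-modify-write.
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LostUpdateDemo::incrementManyTimes);
        Thread t2 = new Thread(LostUpdateDemo::incrementManyTimes);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but lost updates usually leave us short.
        System.out.println("counter = " + counter);
    }

    private static void incrementManyTimes() {
        for (int i = 0; i < 100_000; i++) {
            counter++; // not synchronized: interleaving threads lose updates
        }
    }
}
```

Run it a few times - the printed value will usually fall short of the expected 200000, and nothing in the toolchain warned us beforehand.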

So - my answer is NO. We (as an industry) have no silver bullet for significantly improving the quality of the software we create. TDD didn't do it, neither did Continuous Testing, FP & immutable state won't save the Earth either - and we don't have anything groundbreaking around the corner, AFAIK (don't mention the proposed oath, please ...).

Obscurity by complexity

What then?

I'm not sure whether it's really soothing or not, but ... IMHO the very nature of this problem domain will blur it away. Let me illustrate with an example:

  1. When a typical car accident happens, "troubleshooting" is quite obvious - all the pieces of the puzzle are physically there, fully accessible & available for direct inspection. What you have to do is re-create the course of events & identify the causal relations.
  2. On the other hand, imagine how it would look if "software" were the suspect - knowing how complex & cumbersome troubleshooting (isolating & reproducing the problem) can be ... The number of factors to take into consideration (except in the most obvious cases) is huge enough to undermine & bog down any proper analysis.

What does it mean? It means that quite likely people already die because of software errors. But these erroneous software interactions are either so discreet, indirect & hard to distinguish from normal behavior, or just so hard to associate with software (YET!), that no-one makes a big issue of it.

One day it will change.

Pic: © Gizmodo - Fotolia.com
