TL;DR Software systems, like all other kinds of systems, should be evaluated not only in terms of fitness for purpose & technical excellence. No system exists in a void: all are affected by time, local & global conditions, market fluctuations (not only in the market related to the system's core domain!). One can't foresee everything that can happen (& shouldn't even try), but it's crucial NOT to build systems tightly coupled to (stiffly dependent on) the current "snapshot of reality" (the context of the moment) or to predictions that have no factual justification (but are based on common biases & beliefs). That's the basic level. The pro level is to build them in a way that can TAKE ADVANTAGE of unexpected disruptions, discontinuities & leaps forward. That's what antifragility is all about ...
Warning! This post will be a bit "special", as the topic itself is special. Personally I find it VERY interesting & important - it's one of the things that has kept me quite occupied recently - but I also realize that for many it may sound very abstract & ephemeral. The theory of (anti-)fragility is not something we're taught at schools / universities & it rather doesn't pop up during teams' dailies. I think we share similar observations (at least some of them), but we use different names for them & usually don't give them much thought - that may be a mistake, so I'd like to correct it, i.a. by writing this post (actually: a series of posts, as this one is just an intro).
If you want to learn more about the theory of (anti-)fragility, there's no better way to do it than to read Nassim Taleb's "Antifragile" - my review can be found on Goodreads. If you already know "Black Swan" (Taleb's most famous book), "Antifragile" takes it from there & pushes the considerations even further, actually in a direction I find more practical & applicable in our everyday lives.
We build good software & bad software. In fact, reality is not that binary - actually, our software's quality can't even be described as a continuous spectrum of single-dimensional values between "good" & "bad". Its assessment is multidimensional & depends on varying criteria with different "weights": robustness, performance, usability, etc. This is the bread'n'butter, something (almost) everyone knows perfectly well.
But there's another important criterion that popped up a few years ago. At first glance it seems to overlap with some existing, well-known ones, but if you dig a bit deeper, there are some non-trivial differences. I'm speaking about Antifragility, a term introduced by Nassim Nicholas Taleb - a widely known & respected scholar of low-probability events, author of the famous "Black Swan".
What's fragility / antifragility
What does "antifragile" (and "fragile") actually mean (in the context of any system, not necessarily a computer one)? Let's ask the author of the concept himself:
"Simply, antifragility is defined as a convex response to a stressor or source of harm (for some range of variation), leading to a positive sensitivity to increase in volatility (or variability, stress, dispersion of outcomes, or uncertainty, what is grouped under the designation "disorder cluster").
"Likewise fragility is defined as a concave sensitivity to stressors, leading to a negative sensitivity to increase in volatility. The relation between fragility, convexity, and sensitivity to disorder is mathematical, obtained by theorem, not derived from empirical data mining or some historical narrative. It is a priori."
Oh well, I didn't say it's an easy concept. Let me try to rephrase it in my own words then:
A fragile system is oversensitive to even small deviations from (what is considered) "the norm" (the snapshot of reality that serves as its contextual foundation). An unexpected change that was not foreseen (in the up-front "design") causes a disproportionately large amount of trouble. In other words: a "fragile" system is neither change-proof nor future-proof - it's designed to operate in a very precisely specified (hence probably narrow) context & it breaks (or at least encounters significant trouble) once that context changes.
Fragile systems are subject to intensified (& usually non-linear) entropy (unless you can guarantee invariable conditions - can you?). What is very interesting, it's not only generally negative change (like worsening market conditions) that causes trouble -> "fragile" systems are "allergic" to any change at all.
You may think that, by simple opposition, antifragile systems are just resistant to change - not affected by evolving conditions, immune to the impact of time / market / expectations, etc. But no, that would be the definition of resilient systems.
Antifragile systems are something much more - they embrace change, they welcome disruption, they don't pretend they can predict everything (or anything ...), but they are flexible & open enough to treat every mutation as an actual OPPORTUNITY to gain something. Figuratively speaking, an antifragile system doesn't break when the tide comes & hits it - it harnesses & rides it like a boss.
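Taleb's convexity definition quoted above can be made tangible with a tiny simulation (a hedged sketch of my own - the function names & parameters are illustrative, not from the theory itself): feed mean-preserving random shocks into three toy response functions & compare the average outcome with & without volatility. By Jensen's inequality, the convex ("antifragile") response gains from volatility, the concave ("fragile") one loses, & the linear ("resilient") one is indifferent.

```python
import random

def average_response(payoff, baseline=1.0, volatility=0.5, trials=100_000):
    """Mean outcome when the input fluctuates around a fixed baseline."""
    random.seed(42)  # deterministic for the sake of the illustration
    total = 0.0
    for _ in range(trials):
        shock = random.uniform(-volatility, volatility)  # mean-preserving noise
        total += payoff(baseline + shock)
    return total / trials

convex = lambda x: x ** 2      # "antifragile": gains more on upside than it loses on downside
concave = lambda x: -(x ** 2)  # "fragile": swings hurt more than they help
linear = lambda x: x           # "resilient"/robust: indifferent to volatility

calm = average_response(convex, volatility=0.0)
stressed = average_response(convex, volatility=0.5)
assert stressed > calm  # a convex response improves as volatility rises
```

The point of the sketch: "antifragile" is not "unbreakable" - it's a shape of response where disorder, on average, adds value rather than subtracting it.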
Again, remember that by "system" I don't mean a "computer system", but the term from systems theory.
OK, I guess it's high time to get back to software development. What are some examples of fragile, resilient & anti-fragile systems (/ processes / practices / methods) one could easily recognise?
Next post in the series (on examples of fragile systems in software engineering) can be found here.
Pic: © Pavel Mastepanov - Fotolia.com