(...) Economies of scale are the cost advantages that enterprises obtain due to size, output, or scale of operation, with cost per unit (of output) generally decreasing with increasing scale as fixed costs are spread out over more units of output.

src: Wikipedia

This sounds so reasonable & generic, and it has also been empirically proven for traditional industries (the ones dealing with "physical" goods). No wonder this universal law gets solemnly applied to building software products:

  • when things progress too slowly, add more developers
  • to build a new product, form a large-scale project
  • to increase testing efficiency, test bigger batches of changes in less frequent releases
  • make the whole enterprise use the same technology & follow the same architecture blueprint, to increase component re-use

These are all great examples of valid, meaningful optimisations that ... oh fuck, wait ...

Except they are not

Yupp, indeed - building software is NOT susceptible to economies of scale. Quite the opposite (& it's one of the key reasons why so many managers w/o technical experience lead software projects straight into disasters). Everyone with at least a few years of practical experience in the industry has felt it the hard way (perhaps even several times) - we just don't phrase it that way (so directly, flatly & unconditionally). I didn't come up with the reference to economies of scale on my own either - I think I first heard it from either Dan North or Kevlin Henney.

I'll spare you the full rationale behind that statement - there are already tons of posts (including my own) about it: inevitable (#giggle) tech debt, the exponentially growing mesh of dependencies, the communication "tax", overconfidence in componentisation (which is a LIE), technical VS business complexity, etc.
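Just to give one of those a concrete shape (back-of-the-envelope arithmetic, not a model): the communication "tax" alone grows much faster than headcount, because the number of potential pairwise channels in a team of n people is n*(n-1)/2 - the point Fred Brooks made decades ago in The Mythical Man-Month. A minimal Python sketch:

    # Illustrative arithmetic only: potential pairwise communication
    # channels in a team of n people - doubling the team roughly
    # quadruples the coordination surface.
    def communication_channels(team_size: int) -> int:
        """Distinct pairs of people that may need to stay in sync."""
        return team_size * (team_size - 1) // 2

    for n in (2, 4, 8, 16, 32):
        print(f"{n:>2} developers -> {communication_channels(n):>3} channels")
    # Output: 1, 6, 28, 120, 496 channels respectively.

So while output capacity grows (at best) linearly with headcount, the coordination overhead grows quadratically - the "cost per unit" goes up, not down.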

No shortcuts

For the same reason (to avoid playing Mr. Obvious ...) I'll skip the details of the proper way to decrease the "cost per unit (of output)" - like: effectiveness > efficiency, limited coupling (but in equilibrium with cohesion!), cleaning your shit up first & building overall technical excellence (throughout the whole delivery process).

Instead, I think it's worth dispelling some common myths related to (dis)economies of scale in software delivery:

  1. Cheap "outsourced software factories", "high performance delivery centres" & other such outfits do not exist. Or rather - they exist & maybe they are cheaper than in-house, local development, but the cost is lower not because they utilise economies of scale ... No, they just produce low-quality crap, they pay miserably (so they have huge turnover, which doesn't help with quality either ...), they have the wrong focus (maintainability is not their concern, so they ignore it), etc.

  2. Lower costs due to so-called "componentisation" (or "asset re-use") are, in 99% of cases, a big LIE (unless you've built truly, highly customisable components - which is a big investment & HAS to be planned up-front). In contrast to more traditional industries, software components have significantly more "dimensions of interaction" - not only is the design of a re-usable component complex & effort-intensive, it also generates a lot of additional coupling, which increases the maintenance costs of the whole solution (see the sketch right after this list).

  3. Large, centralised projects (/programs) have absolutely NO advantages over iterative, short-looped product development, except for the ones that are artefacts of your organisation's internal mess (e.g. power struggles between executives, a dated budgeting approach, etc.).
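To make myth #2 a bit more tangible, here's a deliberately exaggerated Python sketch (all names, like SharedReportExporter, are hypothetical - not taken from any real codebase) of how a "reusable asset" accumulates a configuration surface that every consumer ends up coupled to:

    # Hypothetical, exaggerated illustration - NOT a real component.
    # Every consumer's special case becomes another parameter/branch,
    # and every consumer is now coupled to all of them: any change here
    # has to be regression-tested against everyone who calls it.
    class SharedReportExporter:
        def export(self, rows, fmt="csv", legacy_dates=False,
                   tenant_quirks=None, inline_images=False):
            # fmt="xlsx" was added for team A, inline_images for team B,
            # legacy_dates for team C, tenant_quirks for team D ...
            if fmt == "csv":
                return "\n".join(",".join(map(str, r)) for r in rows)
            raise NotImplementedError(f"fmt={fmt!r} lives in another branch")

    # Versus a boring, single-purpose exporter that one team owns and can
    # change (or delete) without coordinating across the whole enterprise:
    class CsvExporter:
        def export(self, rows):
            return "\n".join(",".join(map(str, r)) for r in rows)

The generic version isn't "cheaper" - its flexibility is paid for in coupling & coordinated maintenance, which is exactly the cost that economies-of-scale thinking ignores.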

Small is beautiful. And sensible. And less risky.

Pic: Daniel Cheong / Flickr (Burj Khalifa)
