This is part II. If you want a glimpse of part I (which mainly deals with the theory of coupling), follow this link: http://gebski.tumblr.com/post/76150235763/the-silent-assassin-that-knows-no-mercy-how

Enough with the theory - let’s check what can happen when coupling gets out of control:

Scenario #1: Build-time coupling

You’ve got a well set-up software factory: at least 20 people constantly doing in-house development work and (just to keep things simple) continuously enhancing software that’s later offered as an off-the-shelf product (or several). You’ve got a mature product offering, you’ve got patterns, a lot of reusable code, and everything wired into a nice Continuous Integration setup. What happens if your artifacts are too tightly coupled at build time?

  1. Proper CI has to reflect compilation dependencies, so your project compilation rules get split into tree-like tiers.
  2. Each tier (n) can contain only projects that depend exclusively on projects from tiers <= (n-1).
  3. As a result, the lowest-numbered tiers are usually your architecture components & communication contracts.
  4. If you change ANY project in tier x, everything in tiers > x has to be recompiled: this means that if you want proper Continuous Deployment (with continuous compilation, running automated tests, etc.), each commit to a tier that belongs to the so-called “architecture” causes an avalanche of compilation and testing (see the sketch below).
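
To make the avalanche concrete, here’s a minimal sketch (toy project names, a deliberately tiny graph - all of it hypothetical) that computes what has to be rebuilt after a single change:

    # A toy build graph: each project lists the projects it depends on.
    # Tier 0 holds the "architecture"; higher tiers depend downwards.
    DEPENDS_ON = {
        "contracts": [],                     # tier 0
        "core":      ["contracts"],          # tier 1
        "billing":   ["core", "contracts"],  # tier 2
        "crm":       ["core"],               # tier 2
        "reporting": ["billing", "crm"],     # tier 3
    }

    # Invert the graph: who depends on me?
    dependents = {project: set() for project in DEPENDS_ON}
    for project, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents[dep].add(project)

    def rebuild_set(changed):
        """Everything that must be recompiled (and retested) after a change."""
        to_rebuild, queue = set(), [changed]
        while queue:
            for dependent in dependents[queue.pop()]:
                if dependent not in to_rebuild:
                    to_rebuild.add(dependent)
                    queue.append(dependent)
        return to_rebuild

    print(sorted(rebuild_set("contracts")))
    # ['billing', 'core', 'crm', 'reporting'] - one tier-0 commit
    # recompiles (and retests) the whole factory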

Is it bad? Well, not necessarily - unless recompiling and re-running the tests takes more time than passes between commits to those tiers (e.g. if the avalanche takes 45 minutes but commits land every 20 minutes, the queue only ever grows). It’s really depressing to see your CI 100% stressed because of totally dumb compile-time dependencies…

What happens then is:

  • you lose instant feedback ("you broke the compilation", "you broke the tests")
  • even if your change is meaningless for descendants (artifacts that depend on the artifact you changed), they have to be recompiled anyway - no CI tool available today can tell such false positives apart
  • you keep generating new versions of dependent artifacts -> the dynamics of change may become a testability problem: artifacts change all the time (actually more frequently [!!!] than people commit to the project itself - some illustrative numbers below)
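
Some (completely made-up) commit rates to illustrate that last bullet:

    # Hypothetical numbers: how often does a leaf artifact get a new
    # version, compared to how often its own code actually changes?
    own_commits_per_day = 2
    upstream_commits_per_day = {"contracts": 5, "core": 4}

    # Every upstream commit forces a rebuild, hence a new artifact version.
    new_versions_per_day = own_commits_per_day + sum(upstream_commits_per_day.values())
    print(new_versions_per_day)  # 11 new versions a day from only 2 "real" changes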

Scenario #2: Run-time coupling

The enterprise company you work for has been growing for decades. So has its IT landscape - each in-house developed application really made sense and proved so useful that it got used in more and more scenarios. Soon it turned out that combining the services on offer is even more powerful - the applications started to call each other using synchronous services. It didn’t take much time and…

  1. Calling external services became an integral (and crucial) part of every service operation.
  2. Everything calls everything - if not directly, then indirectly (the sketch after this list plays one such chain out).
  3. It’s not possible to abstract contracts for particular consumers - 85% of each operation is the same, but the remaining 15% is always a bit different.
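
Even a toy model shows where this leads. A minimal sketch (invented service names; plain function calls stand in for synchronous remote calls):

    # Each "service" synchronously calls the next; a failure anywhere in
    # the chain surfaces at the entry point, stripped of its real cause.
    class ServiceDown(Exception):
        pass

    def pricing_service():
        raise ServiceDown("pricing is down for an upgrade")

    def order_service():
        return {"order": "ok", "price": pricing_service()}

    def web_frontend():
        try:
            return order_service()
        except ServiceDown:
            # All the entry point can honestly report:
            return {"error": "order placement failed"}

    print(web_frontend())  # {'error': 'order placement failed'}
    # One deep dependency being down kills the user-facing feature,
    # and the surfaced error says nothing about pricing being the cause.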

Doesn’t sound like anything particularly good, but what’s the true effect?

  • Everything or nothing - once something is down, the majority of the functionality doesn’t work and you will NEVER be able to tell which part exactly; this applies to planned maintenance (like upgrades) as well as outages (the arithmetic sketched after this list shows why)
  • Nothing can be abstracted out and tested separately - it’s all about the surrounding context: you have to test and release everything at the same time, and the scope of regression testing is “everything”
  • Troubleshooting is like trying to find a way out of an insane maze of mazes (“yes, that doesn’t work, but what is the TRUE reason for this service failing?!”)
  • You can’t upgrade or improve anything that involves changing a contract, as it would require changes in 75% of the surrounding applications
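
The “everything or nothing” bullet has simple arithmetic behind it - a sketch with hypothetical availability numbers:

    # With synchronous call chains, availabilities multiply.
    single_service_availability = 0.99     # each service alone: 99% uptime
    services_in_a_typical_call_chain = 12  # hypothetical chain length

    end_to_end = single_service_availability ** services_in_a_typical_call_chain
    print(f"{end_to_end:.1%}")  # 88.6% - more than one request in ten fails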

Hopefully you got the idea. In the next episode: how to avoid the coupling - a practical guide.
