TL;DR Good engineering requires high awareness even in cases that appear basic & straightforward. So-called good practice, if overdone or applied incorrectly, can be very detrimental & at the same time appear completely innocent & harmless, making us look for the problem's root cause elsewhere. Using DI/IoC containers is a great example here - when misused, instead of reducing coupling & increasing testability, they can bloat the code, make it far more brittle & harder to grok.

Some time ago I published a post about cargo cult in software delivery. It was mainly about copying patterns / tools & methods used (with success) by our industry giants, without really understanding whether you're facing the same problems in a similar context.

Today I'd like to cover another form of cargo cult - the indiscriminate (over)use of certain code-level practices, usually applied in a flawed way, w/o fully understanding their intended purpose & the advantage to be gained. Why do people do that? Because everyone else uses them, because they are generally considered good practice, because the use case has "skewed" & degenerated over time, etc.

Examples? Omnipresent ORMs, code coverage worship, premature or overzealous monolith decomposition (microservices everywhere!) & especially DI container extravaganza. Let's stick to this last one, as it's an illustrative case ...

DI(Y)

The vast majority of programmers have heard about the key advantages of using DI/IoC thanks to the SOLID principles. In short:

  • reduced coupling
  • dependency late binding
  • increased testability due to swapping dependencies with mocks/stubs

But not everyone really understands HOW & WHEN to apply this technique to truly achieve the expected goal(s) - the consequences tend to be quite grievous, even if not strikingly visible from the very beginning:

  1. Referring to the DI container effectively replaces all object instantiations ("DI is the new 'new'") - hence making code far more complex & verbose.
  2. Dependencies are not visible in call hierarchies (& can be deeply nested), which makes code more concise, but also harder to track (e.g. object life-cycle), interpret & trouble-shoot.
  3. In-code interface-to-implementation bindings require referencing the assembly that contains the implementation (compile-time coupling); out-of-code (convention-based, config-based) bindings are much more error-prone (it's just run-time coupling, but actually quite a tight one).
  4. Missing dependencies (while resolving) can wreak havoc at run-time, especially when race conditions (between registering & resolving) are in play - points 1 & 4 are illustrated in the sketch right after this list.
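
Here's a minimal sketch of points 1 & 4 (TinyContainer, Mailer & SmtpMailer are hypothetical names - this is not any real container's API): the resolving call compiles happily, but a binding nobody registered blows up only at run-time.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical, deliberately minimal container - just enough to show both effects.
class TinyContainer {
    private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

    <T> void bind(Class<T> iface, Supplier<? extends T> factory) {
        bindings.put(iface, factory);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> iface) {
        Supplier<?> factory = bindings.get(iface);
        if (factory == null) {
            // Point 4: a missing binding only surfaces when someone resolves it.
            throw new IllegalStateException("No binding for " + iface.getName());
        }
        return (T) factory.get();
    }
}

interface Mailer { void send(String to, String body); }

class SmtpMailer implements Mailer {
    public void send(String to, String body) { /* talk to SMTP server */ }
}

class Demo {
    public static void main(String[] args) {
        // Point 1: container calls replace plain 'new' everywhere.
        Mailer direct = new SmtpMailer();  // dependency explicit at the call site

        TinyContainer container = new TinyContainer();
        Mailer resolved = container.resolve(Mailer.class); // compiles fine,
                                                           // but throws: nothing was ever bound
    }
}
```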

All these disadvantages are even more visible (& painful) when devs overdo the modelling by introducing too many "helper abstractions" - you can easily recognize them by suffixes like "-Manager", "-Controller", "-Engine". These outbreaks of unintended (& unnecessary) coupling tend to spread like a virus through your codebase & cause a massive overdose of DI containers.
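
A hypothetical illustration (the User / UserManager names are invented): a stateless "-Manager" that merely delegates to the domain object, yet becomes one more interface to register, resolve & mock.

```java
// The domain object already owns this behaviour ...
class User {
    private boolean active;
    void activate() { this.active = true; } // the logic belongs right here
}

// ... yet a "helper abstraction" gets spawned around it anyway.
interface UserManager { void activate(User user); }

class DefaultUserManager implements UserManager {
    public void activate(User user) {
        user.activate(); // pure delegation - the Manager adds nothing
    }
}
```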

How to keep your DI/IoC sane then? What are the good principles to follow?

Secret sauce of good DI

OK, let's try to get there - step by step. Or rather - question by question.

Q1: How exactly does DI help with loosening the coupling?
In two ways:

  • because it forces consumer code to depend on the interface, hence hiding implementation details from consumers
  • because it allows consumers to stop caring about instantiation details - a consumer just asks for a dependency & gets it w/o knowing whether it's static or per-request, what parameters/logic are needed to construct it, etc. (see the sketch after this list)
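
A minimal sketch of both points (PaymentGateway & CheckoutService are hypothetical names, no framework involved) - the consumer sees only the interface & never touches the construction logic:

```java
// The consumer depends only on the interface; whether the implementation
// is a singleton, per-request or pooled is decided entirely outside of it.
interface PaymentGateway {
    void charge(String accountId, long amountCents);
}

class CheckoutService {
    private final PaymentGateway gateway;

    // Constructor injection: the dependency is explicit & trivially swappable
    // (a stub in tests, the real gateway in production).
    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    void checkout(String accountId, long totalCents) {
        gateway.charge(accountId, totalCents);
    }
}
```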

Q2: Is a DI container required if we want to depend only on interfaces?
Of course it isn't! You'll be perfectly fine with good ol' GoF creational patterns like Abstract Factory, Builder or even Prototype: this is (& has always been) their intended purpose - to encapsulate & contain the construction of complex (& in some cases polymorphic) object hierarchies.
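
A minimal sketch of the container-free alternative (Notifier, SmsNotifier & NotifierFactory are hypothetical names) - a plain Abstract Factory hides the construction details, while consumers still depend only on the interface:

```java
interface Notifier { void notify(String message); }

class SmsNotifier implements Notifier {
    private final String apiKey;
    SmsNotifier(String apiKey) { this.apiKey = apiKey; }
    public void notify(String message) { /* call SMS provider */ }
}

class EmailNotifier implements Notifier {
    public void notify(String message) { /* send an e-mail */ }
}

// The factory encapsulates construction details (parameters, polymorphic
// choice); consumers never see the concrete classes - & no container is involved.
class NotifierFactory {
    private final String apiKey; // construction detail hidden in the factory

    NotifierFactory(String apiKey) { this.apiKey = apiKey; }

    Notifier create(boolean urgent) {
        return urgent ? new SmsNotifier(apiKey) : new EmailNotifier();
    }
}
```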

Q3: Thanks to DI I don't have to pass my dependencies through whole hierarchies of calls - how can I achieve that w/o DI?
But what is the usual reason for that need? Either you're spawning too many abstract "helpers" (mentioned above), or you're (usually unnecessarily) mixing behaviour with state in such helpers (I don't mean domain objects, which usually form the lowest layer of the call hierarchy). I don't want to sound like an FP zealot, but what usually helped in my cases was to:

  • contain a bounded context's business logic within the domain objects of that bounded context (so repositories & factories deal with such objects' life-cycle, not a DI container)
  • in higher-order processing layers (e.g. service gateways, APIs, BFF service layers) keep only domain object transformations (merging, filtering, sorting, etc.) & calls to the business logic within these domain objects, so such layers can remain devoid of any state themselves (& you don't have to care about their instantiation or life-cycle at all) - see the sketch after this list
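
A minimal sketch of both bullets (Order & OrderApi are hypothetical names): the behaviour stays with the domain object's state, while the higher-order layer is a stateless bundle of transformations - nothing to inject, no life-cycle to manage:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Domain object: state & the business logic operating on it live together.
class Order {
    private final long totalCents;
    private boolean confirmed;

    Order(long totalCents) { this.totalCents = totalCents; }

    long totalCents() { return totalCents; }

    void confirm() { this.confirmed = true; } // behaviour next to its state
}

// Higher-order layer: only transformations & calls into domain logic.
// It holds no state of its own, so there's nothing to construct or resolve.
final class OrderApi {
    static List<Order> largeOrdersFirst(List<Order> orders, long thresholdCents) {
        return orders.stream()
                .filter(o -> o.totalCents() >= thresholdCents)
                .sorted(Comparator.comparingLong(Order::totalCents).reversed())
                .collect(Collectors.toList());
    }
}
```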

Q4: So, where (& when) are DI containers really needed? Or should they be dropped for good?
DI containers are very useful ...

  • ... for common, widely shared, general purpose dependencies (e.g. cross-cutting concerns - like logging adapters or DB connection pools)
  • ... if dependency consumers don't want to know anything about the tricky details of their instantiation / life-cycle management
  • ... when binding registration can be centralized or managed / verified (preferably at build-time / deploy-time / test-time / startup-time) in a reliable way - as in the sketch after this list
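
A minimal sketch of that last bullet, reusing the hypothetical TinyContainer from the earlier sketch: registrations are centralized in a single composition root & verified eagerly at startup, so a missing binding fails fast instead of in the middle of a request.

```java
class CompositionRoot {
    static TinyContainer build() {
        TinyContainer container = new TinyContainer();
        container.bind(Mailer.class, SmtpMailer::new);
        // ... every other shared, cross-cutting dependency, registered in one place
        return container;
    }

    static void verify(TinyContainer container, Class<?>... required) {
        for (Class<?> iface : required) {
            container.resolve(iface); // throws at startup if anything is missing
        }
    }

    public static void main(String[] args) {
        TinyContainer container = build();
        verify(container, Mailer.class); // fail fast, before serving any traffic
    }
}
```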

Pic: © cassis - Fotolia.com
