This is the 2nd post in the series. The 1st one can be found here.
A short recap of where we are: to tame the complexity of the domain & model, we need to address four challenges - F1/ how to model reality correctly; F2/ how to "slice the elephant" (divide it into relatively independent parts); F3/ how to make "the elephant" navigable at the level of abstraction we focus on; F4/ how to document & maintain the model long-term. In the first post, I covered F1 & F2. Now, I'd like to focus on F3 & F4. Buckle up!
P.S. In case you're wondering what inspired the cover image - check this out.
F3. Architect's helicopter
Horizontal split (a.k.a. "slicing the elephant") may still be insufficient to tame the complexity. First, there may be too many capabilities (promises or modules) to navigate easily. Besides, even a capability with a well-defined contract may be internally complex enough to warrant another "level" of split (within an existing "module"). In the end, to manage vast solutions built by several teams over a long time, engineers need to traverse ("vertically") levels of abstraction:
- Zoom in: start at the top-level overview (of the whole platform/system) & subsequently drill down into the chosen part until you reach the (lower) level of abstraction you are looking for.
- Zoom out: begin on the level of atomic business concepts/logic & gradually enlarge the visible range (while simultaneously aggregating concepts that make up some sort of a whole) to take a look at a wider scope of the model.
Think about it as flying a tiny helicopter around a Godzilla-sized elephant. If you get close, you miss the "big picture" but can analyze the pores in the elephant's skin. If you fly further away, you see the whole body (& its main anatomical parts), but not the details visible only up close.
Various perspectives are necessary to deal with different risks/challenges/tasks. The higher-level ones are typically more useful in strategic work - which may be conceptually quite abstract (esp. for developers). That's why it's crucial to preserve navigation continuity - there has to be a (possibly indirect, but unbroken) connection between the high level and the low level to ensure that the strategic perspective still corresponds to the actual model details (& the implementation of the solution - more on that below).
If you lose this continuity, you'll end up with a nice, simple-looking system diagram that looks good on slides presented to the board, investors, or business stakeholders, but has very little in common with what's really under the hood. It's supposed to be the same elephant - you're just adjusting the view!
Examples (of when such traversals are useful):
- You've been asked to incorporate yet another way of discounting the product - a progressive discount based on the baseline price. You don't know off the top of your head how the discounting works, so you need to find the corresponding part of the model. You start at the top level - in one of the main areas of your e-commerce: product management. Here, you can locate the product catalog sub-area. Within the product catalog, there's a smaller sub-area dedicated specifically to product pricing. And finally, you find out that product pricing contains discount rules.
- And the other way around - you've detected inconsistency/erroneous behavior in a single operation ("the commission for this transaction here seems incorrect!"), so you'd like to zoom out to assess its implications on other associated (e.g., dependent) capabilities and exposed contracts ("Per-transaction commission was definitely affected, but I see we also have volume-based commissions - are they too?", "Settlements depend on commissions; let's take a look at the contract between those to see if it has to change.", "Wait! If we change the settlements here, won't it affect the pay-outs that are the main consumer of settlement calculations?").
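The traversals in the examples above can be sketched as navigation over a capability tree with links in both directions. This is a minimal illustration, not a real framework - all names (`Capability`, the e-commerce capability names) are hypothetical:

```python
# A toy sketch of a navigable capability tree: parent links enable "zoom out",
# child links enable "zoom in". All names are illustrative assumptions.

class Capability:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # upward link -> zoom out
        self.children = {}     # downward links -> zoom in

    def add(self, name):
        child = Capability(name, parent=self)
        self.children[name] = child
        return child

    def zoom_in(self, *path):
        """Drill down from this node along a path of sub-capabilities."""
        node = self
        for name in path:
            node = node.children[name]
        return node

    def zoom_out(self):
        """Widen the view by one level of abstraction."""
        return self.parent

# Build the toy model: product management -> catalog -> pricing -> discounts
platform = Capability("e-commerce")
pricing = (platform.add("product management")
                   .add("product catalog")
                   .add("product pricing"))
discounts = pricing.add("discount rules")

# Zoom in: locate the discount rules starting from the top-level overview
found = platform.zoom_in("product management", "product catalog",
                         "product pricing", "discount rules")
assert found is discounts

# Zoom out: from discount rules back up to the product catalog
assert discounts.zoom_out().zoom_out().name == "product catalog"
```

The point is the continuity: every node is reachable from the top-level view, and every detail can be traced back up - it's the same elephant at every zoom level.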
Some techniques help with preserving that continuity (some better than others) - e.g., the C4 model has a similar philosophy at its core. But none of them is a part (or a direct descendant) of DDD. DDD does provide some building blocks on different levels of abstraction, but:
- It doesn't define clear, traversable relationships between them.
- It has some conceptual gaps (especially at the mid-level of abstraction).
- It does not scale (there's no flexibility regarding nesting abstractions, e.g., where the "complexity density" is high).
P.S. On a side note, don't you agree that the lack of this continuity was one of the main reasons why UML ultimately faded away (as a model-documenting standard/notation)?
F4. Evergreen models
Let's revisit some assumptions Domain-Driven Design has about models. First, design is all about modeling, and models should be represented in code (as code implements the model - this transition is supposed to be lossless). Models also constantly evolve to represent the ever-changing business conditions (requirements, needs, problems to be solved, etc.). If all that is correct, it's hard to identify a goal more important than preserving & maintaining a model long-term. Why so?
We need a long-living, always up-to-date ("evergreen") model to:
- Avoid constantly reverse-engineering the model from code each time we have to make a change - an expensive & error-prone operation. Besides, the model represents the business intent, whilst code mixes the model with several other concerns (a.k.a. NFRs - non-functional requirements) related to availability, resilience, and many purely technical aspects of the craft. All that may obfuscate the initial intent & make it harder to comprehend.
- Make sure the change is based on the same model that was used for the prior implementation. Remember: there's an infinite number of good-enough models. The new model proposed by your colleague (who is not familiar with your model or doesn't fully get it) may still be perfectly valid, yet the transition (of working code) from yours to theirs may be a considerable (& unnecessary) effort.
- Understand the proposed changes better - if they are phrased as a delta on top of the existing model (I've illustrated this problem some time ago here already). The model is supposed to guarantee a shared understanding of the solution between business folks & techies (that's one of the reasons why it's phrased in UL) - not just for the initial implementation but across the whole lifecycle of the solution.
One of the hardest things in cooperating with other people is adopting their mental models. Once we've set ourselves on some course, it requires a lot of intellectual flexibility, humility, openness & goodwill to switch - I know many super-smart people who struggle with that (which makes them harder to collaborate with and may prevent them from becoming awesome architects).
That alone is a good reason to do whatever's needed to maintain models long-term (instead of ditching temporary models every 5 minutes ...) & invest in making them as comprehensible as possible (with examples, diagrams, dictionaries, bi-directional links, etc.).
DDD didn't do much to support the idea of long-living, sustainable models. The closest associated concept was "living documentation" (described in a very good book by Cyrille Martraire), but it was never picked up for real by a wide audience. To put it straight - we were terrible at documenting things before DDD, & DDD didn't help us improve in that area in the slightest.
DDD in 2024: ditch or ...?
In my opinion, the healthiest approach to Domain-Driven Design is to treat it as a source of inspiration. It has undeniably succeeded in:
- Naming some problems and putting them in the spotlight (unfortunately w/o giving a good prescription on how to solve them)
- Providing a bit of shared new lingo for software designers (some sort of a UL for modelers)
- Triggering beneficial discussions on the nature of complexity (& coupling, & cohesion, & various natures of the parts of the domain, & ...)
..., but it has also diluted the crisp modeling-related problems in an ocean of distracting implementation-specific topics spawned within the last 20 years (like event sourcing, CQRS, aggregates, repositories, etc.). Don't get me wrong, these tactical details may still be very useful, but they should stay hidden behind strong, UL-phrased abstractions, as they are not part of the model - do domain experts talk to you about repositories, commands, or aggregate roots?
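To make "hidden behind UL-phrased abstractions" concrete, here's a minimal sketch (all names - `DiscountPolicy`, `ProgressiveDiscount`, the threshold/rate parameters - are hypothetical, picking up the progressive-discount example from F3). The domain-facing contract speaks the ubiquitous language; how the rules are stored or loaded (repositories, aggregates) never leaks into it:

```python
# A UL-phrased abstraction: domain experts talk about "the discount for a
# baseline price", not about repositories or aggregate roots.

from decimal import Decimal
from typing import Protocol

class DiscountPolicy(Protocol):
    """Contract phrased in the ubiquitous language."""
    def discount_for(self, baseline_price: Decimal) -> Decimal: ...

class ProgressiveDiscount:
    """One concrete rule: discount grows with the baseline price.
    Tactical details (persistence, event sourcing, CQRS) would live
    behind this class, invisible to the contract above."""
    def __init__(self, threshold: Decimal, rate: Decimal):
        self.threshold = threshold
        self.rate = rate

    def discount_for(self, baseline_price: Decimal) -> Decimal:
        if baseline_price <= self.threshold:
            return Decimal("0")
        return (baseline_price - self.threshold) * self.rate

# Hypothetical usage: 10% on everything above 100
policy: DiscountPolicy = ProgressiveDiscount(Decimal("100"), Decimal("0.10"))
print(policy.discount_for(Decimal("150")))  # prints "5.00"
```

The implementation behind `ProgressiveDiscount` can use whatever tactical patterns it likes - the model-level conversation stays in UL.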
Can these deficiencies (F1-F4) be tackled by DDD? Could DDD regain its focus on what's truly important? Is there a way for DDD's redemption (/restoration) in 2024 (& beyond)? Well, my answer would be "yes & no".
The next (3rd) post in the series can be found here.