I'd like to emphasize once again that I speak on this blog for myself only. Opinions / statements expressed here should not be associated with my employers - past or present.
The dish of the day is ...
Some buzzwords achieve global reach, while others gain traction only within a particular context. A good example of the latter is so-called "Multi-speed IT" (sometimes referred to as "Two-speed IT" or "Bimodal IT"), which is exceptionally popular in large enterprises.
What is Multi-speed IT then? It's the concept of embracing heterogeneity in one's IT landscape & distinguishing different speed lanes across the whole IT delivery (not just the architecture):
- legacy, poorly maintained, monolithic back-ends follow a waterfall development model with manual operations & QA (the main justification being reliability) ...
- ... while at the same time modern front-ends (overhauled or built from scratch) utilize Agile methods & modern development practices (CI, automated deployments, etc.) (the main justification being rapid feature development).
Hope for the (almost) hopeless
If you're not aware of that, one more remark: Multi-speed IT is frequently presented as a Mecca for corporations that have a lot of trouble with their systems - they suffer from a ridiculously long delivery life-cycle, struggle with regression testing during each release, and can't even afford a reasonable number of sensible-quality test environments. These companies are so hopeless that they believe it's perfectly OK to continue with a rotting back-end, because all of this can supposedly be compensated for with a good-looking, shiny front-end (web SPA, native mobile, etc.).
Briefly speaking: it's rubbish.
Or rather: it's rubbish in the majority of scenarios - the ones that assume different speeds for separate but interdependent, intertwined layers (a vertical setup). Needless to say, it's a completely different story if you want different speeds for separate silos (a horizontal setup) - in such a scenario multi-speed is absolutely OK.
Something stinks here ...
But let's get back to the more typical case -> different speeds for the layers of the architecture. Narratives that promote this idea typically refer to "loose coupling, exposed APIs, service-based architecture" that help with layer separation & enable maintaining different speeds for front-ends & back-ends. But seriously ... if you have a modern front-end application, typically stateless & fully dependent on the service layers below (because it lacks any business data persistence of its own): it may be decoupled from the IMPLEMENTATION of the services, but it's not decoupled from the API DEFINITION. So:
- you can make fast, independent changes in the front-end w/o an API change, but these will be purely visual changes (presentation logic) that can't really affect business logic at all
- if you want to change the API definition (to provide new/enhanced functionality), you need to change the back-end anyway (to implement the modified API definition) - so your change will have to be end-to-end
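To make that coupling concrete, here's a minimal sketch (in Python, with hypothetical names like `AccountDTO` & `render_balance` - not taken from any real system) of why presentation-only changes stay cheap while any business change has to cross the contract:

```python
from dataclasses import dataclass

# The API DEFINITION the front-end is bound to (hypothetical contract).
@dataclass
class AccountDTO:
    account_id: str
    balance_cents: int  # served by the legacy back-end

# Front-end: purely presentational logic. Formatting, labels & layout
# can change freely & quickly, without touching the contract ...
def render_balance(dto: AccountDTO) -> str:
    return f"Account {dto.account_id}: ${dto.balance_cents / 100:.2f}"

# ... but a new business feature (say, displaying a credit limit) needs
# a new field in AccountDTO, which only exists once the legacy back-end
# implements it - so the change is end-to-end, at the back-end's speed.
```

The front-end's speed advantage is real, but it's confined to whatever can be expressed with the fields the contract already carries.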
Needless to say, adding more layers "in-between" (e.g. an additional service layer to contain extra "business logic" - in fact, basic data transformation ...) changes nothing: you just get more "contracts" between layers, but what really matters is where the actual domain logic lives: if it's in your deepest, legacy, high-inertia back-end, you're in trouble - the lowest layer that needs modification (usually the most troublesome one) determines the overall speed.
Instead of enabling testable, feature-oriented, end-to-end development within multi-disciplinary teams, such a multi-speed approach fortifies:
- strict boundaries (so-called "component teams")
- fragmented (hence unmanageable) products
- local (aka meaningless) success criteria
Needless to say - in such a setup the number of dependencies (& the communication overhead, & the coordination effort, & ...) doesn't decrease! All you can do is tweak UI/UX as long as you're not touching actual business logic - IF & only if your API is well defined.
Why not solve the real problem instead?
OK, so let's put the bullshit aside & think about REAL alternatives. What can truly be done if you have a system-of-an-anchor but want to speed up the delivery cycle? Why not address the actual root cause of the problem?!
- strangler - the most straightforward option is to use the strangler pattern: abstract the API away (use an open standard), but instead of developing business logic changes in the slow legacy, start re-writing the implementation of the API: piece by piece, chunk by chunk. To avoid repeating the errors of the past:
- use an open platform with reasonable DevOps tooling
- keep the granularity low
- don't leak implementation abstractions into the API (make it consumer-driven)
- apply good development practices (test & deployment automation, code reviews of all commits, not sharing persistent data storage between aggregate roots, etc.) from day #1
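The mechanics of the strangler can be sketched as a routing facade: one stable API, where each route is served either by the new implementation or, until its re-write ships, by the legacy system. A minimal sketch in Python - all names (`ROUTES`, `handle`, the stubbed handlers) are hypothetical, not a real framework:

```python
# Stand-ins for calls into the legacy monolith.
def legacy_get_customer(customer_id):
    return {"id": customer_id, "source": "legacy"}

def legacy_get_orders(customer_id):
    return {"id": customer_id, "orders": [], "source": "legacy"}

# Re-written, modern implementation of the same contract.
def new_get_customer(customer_id):
    return {"id": customer_id, "source": "new"}

# The migration state: flip a route here once its re-write ships.
ROUTES = {
    "get_customer": new_get_customer,  # already strangled
    "get_orders": None,                # still served by legacy
}

LEGACY_FALLBACK = {
    "get_customer": legacy_get_customer,
    "get_orders": legacy_get_orders,
}

def handle(route: str, customer_id: str):
    # Consumers always hit the same API; only the routing table knows
    # which side of the migration actually serves the request.
    handler = ROUTES.get(route) or LEGACY_FALLBACK[route]
    return handler(customer_id)
```

The point of the design: consumers never see the migration happening, so the legacy can be strangled route by route, at whatever pace the team can sustain.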
- new silo - this is a variant of the "strangler" for when the domain implementation in the legacy is utterly broken (so there's no way to create a reasonable, consumer-driven API); in this case you just create a new API & implement it in small chunks - you do so each time you add or modify anything, effectively reducing the changes in the legacy to critical bug-fixes
- crunch-it - this ain't an option for crybabies: the idea is to implement modern practices (e.g. CI, CD, automated provisioning) from scratch on the legacy platform: it may involve a lot of effort, it may be hard due to limited tooling, it requires legacy expertise, but as these are relatively simple concepts, in 99% of cases it is feasible (IF you have people who are really up for the challenge)
- modernization - "de-legacize" the legacy: move it out of the architecture / platform / system / language that is the main issue - the more automated the migration, the better (translate / convert / migrate). This is not always a good idea (if the architecture is the limitation, moving the same architecture to a different platform won't solve anything), and it may sometimes introduce new issues, but in some cases it's a viable option
Pic: © Goodpics - Fotolia.com