One has to remain realistic - sometimes even positively hyped, mature, proven ideas with strong foundations can't break into the mainstream. That has happened (and is still happening) to the Actor Model (AM) & I believe that quite likely it will stay this way.
There's much to be said for this particular model:
- it's the foundation of battle-tested, actively developed & professionally supported platforms (OTP, Akka) with strong, positive communities
- it works well with some other modern paradigms (reactive, functional, loosely coupled)
- it genuinely solves the problems it was designed to solve (building resilient & scalable distributed systems)
But years keep passing & AM, regardless of the attention it has recently received, remains more of a curiosity than a valid option for real-life scenarios. Why is that? What's the point of having such a great tool available if nobody uses it?
According to my observations:
- Simplicity - in theory it's a very simple model, but in practice it's easy to learn & hard to master. Additionally, it has far more "moving parts" that can break than a simple RPC-like approach (which in many cases barely differs from calling an object's method). Its deceptive simplicity invites painful (yet common) mistakes like naive 1:1 object-to-actor mapping (which NEVER ends well - see the first sketch after this list).
- Granularity - the consequences of bad actor composition are far more significant than those of a bad object/class split; keeping in mind that actors tend to be truly fine-grained (to avoid blocking), it's not really that hard to break things utterly just by bad design (design that might somehow get away with it in the OO world).
- Fragility - regardless of what the experts say, the "let it crash" approach almost causes a heart attack for many individuals ;) manual supervision CAN be really tricky - it requires a lot of attention & careful design, because you're preparing for the "shit hits the fan" scenario, not for the happy path (see the supervision sketch below) - many developers just don't feel prepared for that challenge at all.
- Debugging - writing elegant, readable & concise code is one thing, but troubleshooting/debugging it later may be a completely different kind of story; and in the case of AM it usually is (a pain in the #$% ...), especially when compared with the simple nested call hierarchy of traditional, synchronous request-response calls.
- "Tangible" messages - AM's messages are far more tangible (but still short lived & volatile!) than non-tangible RPC requests -> they may end up in overflowed actor's queue (on various "levels"), in Dead Letter or just processed later than expected (e.g. after other messages that have been generated later) - all these "quirks" can be coped with, of course, but it requires deeper understanding, paying attention to details & a lot of forward thinking. What is more - a transient state is very hard to grasp.
- Users love synchronicity - well, yeah - users primarily love responsiveness, but they no longer settle for getting it via asynchronicity (a non-freezing UI); these days they DEMAND instant service responses, because they don't want to wait, even with smooth UI/UX; request-response fulfills that need, of course, as long as it's lightning-fast.
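To make the 1:1 mapping trap more concrete, here's a minimal sketch using Akka's classic (untyped) API - the domain names (OrderLineActor, OrderActor) are purely hypothetical, picked for illustration:

```scala
import akka.actor.Actor

// ANTI-PATTERN: one actor per tiny domain object. Every field access
// becomes an asynchronous message exchange with its own mailbox,
// ordering quirks & failure modes.
class OrderLineActor(price: BigDecimal, qty: Int) extends Actor {
  def receive: Receive = {
    case "subtotal" => sender() ! price * qty
  }
}

// Better: one coarse-grained actor owning the whole aggregate. Plain
// objects stay plain; the actor boundary marks a real concurrency &
// failure boundary, not an object boundary.
final case class OrderLine(price: BigDecimal, qty: Int)

class OrderActor(lines: List[OrderLine]) extends Actor {
  def receive: Receive = {
    case "total" => sender() ! lines.map(l => l.price * l.qty).sum
  }
}
```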
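And here's roughly what "preparing for failure" looks like in code - a minimal supervision sketch, again with Akka's classic API; Worker/Supervisor are made-up names & the strategy parameters are arbitrary, not recommendations:

```scala
import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Restart, Stop}
import scala.concurrent.duration._

// A child that is allowed to crash while processing a message.
class Worker extends Actor {
  def receive: Receive = {
    case n: Int if n < 0 => throw new IllegalArgumentException(s"negative input: $n")
    case n: Int          => sender() ! n * 2
  }
}

// The parent decides what a crash means: restart on failures it
// understands, stop on anything else.
class Supervisor extends Actor {
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
      case _: IllegalArgumentException => Restart // wipe state, try again
      case _: Exception                => Stop
    }

  private val worker = context.actorOf(Props[Worker](), "worker")

  def receive: Receive = {
    case msg => worker.forward(msg)
  }
}
```

Even this toy version forces you to answer questions the happy path never asks: which exceptions are recoverable, how many retries within what time window, & what losing the worker's in-memory state on restart actually means.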
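Finally, the "tangible messages" point: an undeliverable message doesn't throw at the call site - it's quietly published to the system's event stream as a dead letter, which you have to watch explicitly. A minimal (hypothetical) monitor:

```scala
import akka.actor.{Actor, ActorSystem, DeadLetter, Props}

// Logs every message the system failed to deliver (e.g. sent to an
// already-stopped actor) - otherwise such messages vanish silently.
class DeadLetterMonitor extends Actor {
  def receive: Receive = {
    case DeadLetter(message, from, to) =>
      println(s"dead letter: $message (from $from to $to)")
  }
}

object DeadLetterDemo extends App {
  val system  = ActorSystem("demo")
  val monitor = system.actorOf(Props[DeadLetterMonitor](), "dl-monitor")
  system.eventStream.subscribe(monitor, classOf[DeadLetter])
}
```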
I know that some may find it ridiculous & annoying ("hey, aren't devs supposed to be smart people?!"), but AM just seems too hard for the gain it'd provide for the average developer of an average system. Obviously this doesn't apply to devs who build massive-scale distributed systems meant to withstand the highest levels of virality - but how many of those are among us? They get a lot of admiration & attention, but they are (& will remain) a niche.
Back to the lame AM adoption - it's not just my observation; what do you think are the reasons behind the following events?
- Microsoft simplifying AM ridiculously within Azure Service Fabric (no manual supervision, no state-machine-style behavior switching, no explicit messages)
- Lightbend first hiding Akka behind Play & then coming up with an alternative "microservice" framework named Lagom
- the somewhat simplified model used in Google's Go (goroutines + channels) getting far more market adoption than the alternative implementations mentioned above
Is it bad? Should we cry our eyes out because such a great concept has been rejected by the Community / Software Vendors / everyone else? I don't think so. There's no perfect approach to writing software - it's always about some kind of trade-off:
- between DX (developer experience) & maintainability
- between security & cycle time
- between performance & abstraction level ("distance to hardware")
- etc.
In this particular case, simplicity tends to weigh far more than scalability & elastic resource utilization - the cost of additional hardware becomes less important than the cost (/time) of developers' work. Complexity is still a significant obstacle to sustained delivery - many have already learned that complexity itself is in many cases inevitable, but only as long as it lives within the Domain, not the technology.
Pic: © giorgiomtb - Fotolia.com