The inconvenient truth about new stuff

I play a lot with new stuff - frameworks, libraries, tools, whole platforms or languages. Development toys, operations toys, even process tooling toys. And there are plenty of goodies to play with, as new ones appear pretty much on a daily basis. I do it because I like it, but there's a more serious reasoning behind it as well - one could say it's at the core of my job:

  1. I don't work on one product, in one company, on one chosen technology stack - as a consultant I keep encountering very different situations, in different IT ecosystems, based on a variety of different platforms. I have to be able to deal with such realities - I can't be an expert in everything, but I have to have some level of working knowledge outside my comfort zone as well.

  2. Learning how stuff works & comprehending the differences between various ways of approaching the same problems increases my knowledge per se. It's crucial for a "field architect" (as opposed to a "slidedeck architect") to remain up-to-date & understand how things work "under the hood". The only way to do that is through practice - nothing else works.

BUT, on the other hand, being in sync with the bleeding edge doesn't mean that I update my "(in)formally recommended stack" every week or so. It doesn't mean that I keep upgrading / re-writing my skunkworks projects each time a new framework / library appears either.

We're getting to the point now, so focus. New (or upgraded) stuff may mean tangible improvement in terms of:

  • efficiency (velocity, developer agility, reduced friction, shorter feedback loops, etc.)
  • performance (robustness, response times, resource consumption, etc.)
  • quality (testability, resistance to errors, etc.)

BUT this improvement appears IF AND ONLY IF the maturity of your technical excellence is above some (relatively high) threshold - in other words: you're already shining in terms of technical delivery. Seriously, this is really important. If you (your team, company, etc.) are struggling with fundamental topics like:

  1. test automation
  2. mature (repeatable, reliable) environment deployment
  3. continuous integration & working feedback loop
  4. development transparency
  5. dependency management

then it's not really technology that's your problem. And upgrading (or swapping) the technology WON'T solve any of your problems (if it does, it's just by chance). These fundamental goals (mentioned above) are achievable on pretty much every relatively modern tech stack - just pick the one you (your team, company, etc.) know best, put your shoulder to the wheel & finally get the shit done, instead of blaming the tech.

Oh well, I'll be even more straightforward: the ability to deliver is not related (or only barely related) to technology itself (for business-oriented software, that is - it's not that simple for so-called infrastructure code). Teams capable of delivering in general will deliver pretty much regardless of the technology (one they want to commit to, not a forced one, of course).

So if you're struggling ...

  • ... with Puppet, moving to Chef / Ansible / Salt ...
  • ... with Angular, moving to Ember / Aurelia / React ...
  • ... with AWS, moving to Azure / Heroku / Rackspace ...
  • ... with Visual Studio, moving to Code / Atom / Rider ...
  • ... with Java, moving to Scala / Clojure / Kotlin ...
  • ... with Grunt, moving to Gulp / Webpack / Brunch ...

most likely won't change anything for the better - rather the opposite. Stop being naive - with the current generation of modern software & hardware, it's usually "the protein factor" (the humans) that is the weakest link in the chain. Final success is a consequence of proper skills, communication & disciplined delivery, not the technology chosen (unless your pick is really silly ;>).

Pic: © enterlinedesign - Fotolia.com