We just had a roll-out weekend at our client (a large corporate bank). In this particular context, that means five months of work by a 40-person developer squad has been deployed to production environments. Just to give you some plain numbers:

  • 1 deployment night
  • 8 developer teams’ work
  • 52 different applications (service-oriented, heavily communicating with each other)
  • 81 source-controlled branches
  • 588 unique deployment-ready artifacts (each independently versioned): assemblies, config packs, etc.
It went very smoothly. Two people, working in parallel, fully deployed all those apps in approximately 4 hours (including all the checks, synchronizations, and smoke tests). We had one single deployment instruction. Four pages long. Easy peasy.
How was that possible? What was the catch?
Before the deployment night, we had been deploying all those applications at least twice a day: first to integration test environments, later to both integration and regression test environments. Day by day, using exactly the same deployment mechanism that was later used on production. 100% automated. No manual steps + no config file preparation + no additional scripts executed against databases = no place to make a mistake. Our configuration is 100% generated from template files, with token sets prepared for production; our services and service clients use addresses received from real-time configuration servers (so there's no possibility of a binding mismatch); our artifacts were taken from a registry-based artifact repository. We managed to force all those apps into one rigid deployment model, regardless of their architectural differences. And now we reap the benefits.
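To make the token-set idea concrete, here is a minimal sketch of template-driven config generation. Everything in it is illustrative: the `${TOKEN}` placeholder syntax, the token names, and the environment names are assumptions, not the actual tooling used on this project.

```python
import re

# Hypothetical config template: every environment-specific value is a ${TOKEN}.
TEMPLATE = """\
<configuration>
  <add key="ServiceUrl" value="${SERVICE_URL}" />
  <add key="DbConnection" value="${DB_CONN}" />
</configuration>
"""

# One token set per environment; the production set is prepared in advance,
# so deployment night uses the exact mechanism exercised daily on test envs.
TOKEN_SETS = {
    "integration": {"SERVICE_URL": "http://int.example/svc",
                    "DB_CONN": "Server=int-db"},
    "production":  {"SERVICE_URL": "http://prod.example/svc",
                    "DB_CONN": "Server=prod-db"},
}

def render(template: str, tokens: dict) -> str:
    """Replace every ${NAME} placeholder; fail loudly if a token is missing,
    so a half-filled config can never reach an environment."""
    def substitute(match):
        name = match.group(1)
        if name not in tokens:
            raise KeyError(f"Missing token: {name}")
        return tokens[name]
    return re.sub(r"\$\{(\w+)\}", substitute, template)

prod_config = render(TEMPLATE, TOKEN_SETS["production"])
```

The point of failing on an unknown token is the same "no place to make a mistake" principle: a gap in the production token set breaks the build long before deployment night, instead of silently shipping a placeholder.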
Automate. Perform. Perform Again. And Again. And Again.
Rinse. And Repeat.
It will work when it’s THE day.