Real-time configuration server: the idea that doesn't get the love it deserves


Dealing with the technical configuration of your apps may be tricky. In a simple case you just slap a text file or two in place and there's no harm done, but once your solution gets more & more complex, all those pesky files get really cumbersome.

Obviously, manual configuration adjustments should be prohibited - I've already written a blog post or two about that in the past. BUT even if you split your configuration into a static part and an environment/machine-dependent part, and use an automated tool to merge them during deployment (so-called tokenization), you still have to make a re-deployment effort whenever the configuration needs adjusting on several machines. Again - in distributed scenarios it's a real pain in the ass.
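To make the tokenization idea concrete, here's a minimal sketch of what the merge step does at deployment time (the keys and host names below are made up for illustration):

```python
# Sketch of "tokenization": a static config template is merged with
# environment-specific overrides when the app is deployed.

def merge_config(static: dict, env_specific: dict) -> dict:
    """Environment-specific values win over the static defaults."""
    merged = dict(static)        # don't mutate the shared template
    merged.update(env_specific)
    return merged

static = {"pool_size": 10, "timeout_ms": 5000, "db_host": "localhost"}
prod_overrides = {"db_host": "db01.prod.internal", "pool_size": 50}

print(merge_config(static, prod_overrides))
# {'pool_size': 50, 'timeout_ms': 5000, 'db_host': 'db01.prod.internal'}
```

The catch, of course, is that this merge happens at deployment time - change an override and you're redeploying again.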

Fortunately, there's a different way to deal with that problem: just replace the up-front configuration in files with a real-time configuration server that is known to all your services / applications.

The Zoo


... and all of these happen in real time.

Cool, but why?!

  1. Information is kept in one place (yes, we'll talk about the hotspot / single point of failure issue later).
  2. Information may be changed without re-deploying its consumers.
  3. Thanks to versioning, you may make a rolling (phased) roll-out to multiple services.
  4. If you do it the smart way, you can have a single source of configuration information for applications that operate on different platforms (tech-agnostic).

Aren't we hurting ourselves?

Sure, it's not a perfect solution - you've just created another dependency, and a very STRICT one: without your configuration service, your applications most likely won't be able to set themselves up properly.

What is more, this really sounds very much like a single point of failure - without the configuration service running, you're pretty much screwed. And the more clients use it, the more efficient it has to be (to keep the response time as short as possible). Some caching may be advisable, and caching usually brings its own bunch of problems...
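To illustrate that caching trade-off, here's a minimal client-side TTL cache sketch (all names here are illustrative, not a real library API): the config server gets hit at most once per TTL window per key, but clients may see values that are up to `ttl` seconds stale:

```python
import time

class CachedConfig:
    """Client-side cache with a TTL: cuts the load on the config server,
    at the cost of serving stale values for up to `ttl` seconds."""

    def __init__(self, fetch, ttl=30.0):
        self._fetch = fetch          # callable that hits the config server
        self._ttl = ttl
        self._cache = {}             # key -> (value, fetched_at)

    def get(self, key):
        hit = self._cache.get(key)
        if hit and time.monotonic() - hit[1] < self._ttl:
            return hit[0]            # fresh enough: no server round-trip
        value = self._fetch(key)     # cache miss (or stale): ask the server
        self._cache[key] = (value, time.monotonic())
        return value

# Stand-in for a real config-server call, counting round-trips:
server_calls = []
config = CachedConfig(lambda key: server_calls.append(key) or "50", ttl=60.0)
config.get("pool_size")
config.get("pool_size")              # second read is served from the cache
print(len(server_calls))             # → 1
```

That "1" is exactly the problem: if the value changed on the server in the meantime, this client won't notice until the TTL expires.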

All of these points are TRUE, but ... (there's always a but) if you go for a proper, clustered (and scalable) solution, these risks drop. The best news is yet to come: you really don't have to create such a solution on your own - you can use an existing (& proven) package ...

The King of the Hill ...

... is named Apache ZooKeeper. Officially, ZooKeeper is just a hierarchical, distributed coordination service, but it's pretty much perfect for dealing with real-time configuration.
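The key primitive here is the watch: a client registers interest in a node and gets pushed every change, instead of polling. A real client library (e.g. kazoo for ZooKeeper) does this over the network; the sketch below illustrates the same pattern in-process, with made-up names:

```python
class ConfigNode:
    """In-process sketch of a ZooKeeper-style config node: clients
    register watches and get notified whenever the value changes."""

    def __init__(self, value=None):
        self._value = value
        self._watchers = []

    def watch(self, callback):
        self._watchers.append(callback)
        callback(self._value)        # fire once with the current state

    def set(self, value):
        self._value = value
        for cb in self._watchers:
            cb(value)                # push the change to every watcher

seen = []
node = ConfigNode(b"timeout=5000")
node.watch(seen.append)              # fires immediately with the current value
node.set(b"timeout=2500")            # ... and again on every change
print(seen)                          # → [b'timeout=5000', b'timeout=2500']
```

Replace the in-process callback with a network notification and you've got the essence of what ZooKeeper's watches give you.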

... but there are more

The cool thing is that you're no longer limited to ZooKeeper only. HashiCorp (the company behind Vagrant) has just released their new tool - Consul.

Consul is explicitly dedicated to service discovery and configuration management. It shares ZooKeeper's simple architecture approach (uniform nodes that communicate via a gossip protocol), and it aids failure detection with an interesting approach to health-checking published endpoints. Plus - it's still very, very simple and easy to set up. Probably the only discouraging thing about Consul is that it's still very fresh, so some infancy-period problems may still appear.
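For a taste of how simple the interface is: Consul exposes its key/value store over plain HTTP (`GET /v1/kv/<key>`), returning JSON in which the `Value` field is base64-encoded. The sketch below decodes a canned response shaped like a real one (the key name is made up; in practice you'd fetch the JSON from your local agent with any HTTP client):

```python
import base64
import json

# A canned response in the shape Consul's KV endpoint returns:
sample_response = json.dumps([
    {"Key": "service/db/host",
     "Value": base64.b64encode(b"db01.prod.internal").decode(),
     "Flags": 0}
])

def decode_kv(response_text: str) -> dict:
    """Turn a Consul KV response into a plain {key: value} dict."""
    return {
        entry["Key"]: base64.b64decode(entry["Value"]).decode()
        for entry in json.loads(response_text)
    }

print(decode_kv(sample_response))
# {'service/db/host': 'db01.prod.internal'}
```

No client library required - any language that speaks HTTP and base64 can read its configuration, which is exactly the tech-agnostic property mentioned earlier.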

But if you're interested in drilling down into some details about Consul, feel free to check the following link:

Configuration's role in your march towards CD

Personally, I wouldn't belittle configuration's role in the delivery pipeline. To enable true continuous delivery, you need to provide both development & ops agility:

All of the points above are addressed directly by real-time configuration servers. They are an important factor in reducing real-time coupling effects in non-ESB scenarios. Do you need any additional incentive?

Sebastian Gebski

Published 4 years ago



No Kill Switch © 2018.