I had quite an interesting discussion about frequent delivery of value via changes in software. The topic itself is not new, and everyone who knows the basics of the Agile / Lean approach (or just the Theory of Constraints) knows that stashed inventory should be reduced to a minimum - one should deliver as frequently as possible (while making sure that the delivered chunks actually bring some value), even if it means smaller portions.

The Context

In the particular scenario I'll write about, the team is delivering more than one feature at the same time, BUT people work on the feature set in pairs - each pair is working on one single, particular feature. Here are some other relevant details:

  • features are loosely coupled - there are some dependencies, but they are few and quite simple
  • features bring value independently - their value adds up as features accumulate, but even a single feature will bring some value when introduced
  • each feature lives in a different application - pairs don't have to make overlapping modifications in the same codebase
  • features don't require many people to be developed - two people for each is just about fine
  • IMPORTANT - features exist in a distributed environment, and due to some environmental constraints the majority of them can't be introduced in one big bang. That means that if application A is modified and currently runs in 10 instances, swapping all 10 instances to the new version may take months (they will be updated one by one, every week or so).
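The one-by-one instance swap described above can be sketched as a simple rolling update loop. This is a minimal illustration, not the team's actual tooling - the instance records and the `health_check` callback are invented for the example:

```python
def rolling_update(instances, new_version, health_check):
    """Swap instances to new_version one at a time; stop on the first failure."""
    updated = []
    for instance in instances:
        old_version = instance["version"]
        instance["version"] = new_version
        if health_check(instance):
            updated.append(instance["name"])
        else:
            instance["version"] = old_version  # roll back the failing instance
            break
    return updated

# During the (possibly months-long) transition, old and new versions coexist:
fleet = [{"name": f"app-a-{i}", "version": "1.0"} for i in range(10)]
rolling_update(fleet[:3], "2.0", health_check=lambda inst: True)
print(sorted({inst["version"] for inst in fleet}))  # → ['1.0', '2.0']
```

The point of the sketch is the last line: for the whole duration of the rollout, the system genuinely runs in a mixed-version state, which is exactly what forces the interface considerations discussed below.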

Consequences

Obviously, this set-up has some consequences you need to be aware of - here is the short version of the team's findings:

  1. As the features are still somewhat dependent, people have to synchronize their changes constantly (pair to pair).
  2. Due to the coupling, features have to be continuously integrated and tested against each other; that also means their deployment IS bound: if feature no. 1 is delayed, feature no. 2 is delayed as well, because it depends on no. 1.
  3. As the features live in different products (applications), the number of applications deployed will be larger - and that will increase the risk of failure (more changes = bigger risk of failure).

Team's recommendation

That's why the team recommended changing the approach in the following way:

  1. We should introduce 1 feature at a time. Other features should wait until:
    • we are sure that all previously introduced features are up and running
    • we've fully deployed the new version of the introduced feature (mind the gradual introduction remark given above)
  2. As the features are introduced gradually, all the interfaces have to be prepared for more than one combination of communicating components, for instance:
    • Feature A ver 2.0 (new) <-> Feature B ver 1.0 (old)
    • Feature A ver 2.0 (new) <-> Feature B ver 2.0 (new)
  3. Such an approach will:
    • reduce the risk of errors occurring
    • still let the team deliver value in a gradual (Agile) manner
    • help us make sure that we're getting rid of old versions of components
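Supporting both combinations above usually comes down to tolerant readers on the consuming side. A minimal sketch of what point 2 means in practice - the payload shapes and field names here are invented, not taken from the actual systems:

```python
def parse_b_response(payload):
    """Feature A ver 2.0 accepting both shapes of Feature B's response:
    the old (1.0) single-item form and the new (2.0) batched form."""
    if "items" in payload:          # 2.0: batched form
        return payload["items"]
    return [payload["item"]]        # 1.0: single-item form, wrapped for uniformity

old_style = {"item": "order-42"}                  # from a B 1.0 instance
new_style = {"items": ["order-42", "order-43"]}   # from a B 2.0 instance
print(parse_b_response(old_style))  # → ['order-42']
print(parse_b_response(new_style))  # → ['order-42', 'order-43']
```

Each such tolerant branch is one of the "interface variants" whose maintenance cost the team is weighing.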

Where's the catch?

Unfortunately, this kind of thinking seems a bit flawed (even if the priorities make sense at first glance).

  1. Avoiding interdependencies is awesome, but sometimes you just can't - the stuff is related and you can't help it. That means that YES, you need to synchronize work, but:
    • for gods' sake - these people are within one team: this is not about external dependencies, it's just about talking to each other (pair to pair)
    • time-box, time-box, time-box: it's exactly releasing in small chunks that will save our asses if one feature is about to slip a bit. How much can it slip? One time-box. Who would care in that case?
  2. If the team consists of 4 such pairs, but only 1 of them can introduce their changes at a given moment, what happens to the deliverables of the other ones? They go into the inventory - the exact place we don't want them to rot in.
  3. Creating more and more interface variants (because of the version combinations) to develop and maintain is not only an additional cost (and waste ...) but also seriously increases the probability of an error occurring. Especially in this case - due to scenario specifics, some tests won't be automated.

Conclusion

Here are my thoughts:

  • being "Agile" doesn't mean that we can release only software that has no dependencies - otherwise we block the dependent software
  • the number of errors / bugs / problems in released software is not a simple function of the number of applications modified; it rather depends on:
    • the quality of the integration (and how continuous it is)
    • the quality of the end-to-end testing (is the test environment set up properly, do people understand the scenarios to be tested, do the scenarios really cover the most crucial functionality)
    • the quality of the communication itself: if people think they can close themselves in their own aquariums, this will NEVER work, regardless of the approach
  • The Most Important: in the end, I can't (and shouldn't) force the team to introduce software in a way that doesn't convince them; if I do, how are they supposed to take ownership of products they don't believe in?