The last few weeks have been quite busy for me and my gang of Merry Men. We're running a lean project to refresh the development architecture for a large enterprise, and we're not wasting time - we aim to deliver the changes as quickly and frequently as possible (even if it means smaller 'chunks').

Our hurricane has gone through source control, work-stream organization, package management, SIR tracking, task management and peer reviews, and now it's heading towards configuration management - the guys have done a lot of awesome work, but there's still a long and dangerous path ahead.

Pain in the @ss

But of all those changes (some of them true turnarounds), there was one that has caused by far the biggest unrest (the disturbance in the Force): introducing peer reviews as an obligatory element of the process.

What does 'obligatory element' mean?

  1. We're not forcing any particular reviewer-selection algorithm - that's up to the teams.
  2. We're not delaying anything due to a missing review: compiled artifacts still end up in the repository, stuff still gets pushed to environments, etc. - so the development process flows as smoothly as it did before.
  3. We just provide the tools (quite a nice combo - Crucible + FishEye) and bind each review to an incident in the SIR tracker. PLUS we generate automated reports on who's late with their reviews.
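The 'who's late' report boils down to comparing each open review's age against a deadline. Here's a minimal sketch of that logic - the review records, field names and the three-day deadline are illustrative assumptions, not our actual setup (in practice the data would be pulled from Crucible's REST API):

```python
from datetime import datetime, timedelta

# Assumed shape of review records; in reality these would be
# fetched from the review tool (e.g. Crucible's REST API).
reviews = [
    {"id": "CR-101", "reviewer": "alice", "created": datetime(2013, 5, 1), "closed": True},
    {"id": "CR-102", "reviewer": "bob", "created": datetime(2013, 5, 2), "closed": False},
    {"id": "CR-103", "reviewer": "bob", "created": datetime(2013, 5, 20), "closed": False},
]

DEADLINE = timedelta(days=3)  # assumed time-frame for completing a review

def late_reviews(reviews, now):
    """Return ids of open reviews older than DEADLINE, grouped by reviewer."""
    report = {}
    for r in reviews:
        if not r["closed"] and now - r["created"] > DEADLINE:
            report.setdefault(r["reviewer"], []).append(r["id"])
    return report

# Only CR-102 is both open and past the deadline on 2013-05-21.
print(late_reviews(reviews, datetime(2013, 5, 21)))  # {'bob': ['CR-102']}
```

That's the whole trick: no punishment logic, just visibility - the report only tells you whose queue is stale.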

To be honest, there are no consequences for people who slack off or don't give a f@#$: it's just visible that they do. And it suddenly became a problem ...

'Daddy, they are forcing me to do a Review!'

Let me share some comments we've collected:

'Not every commit requires a review'

ORLY? And what kind of commit doesn't? A configuration change? A patch to the DB? A 'trivial' one?

A change is a change - it's error-prone because it's made by humans, and humans err. Sometimes because they don't pay attention, sometimes because they're tired, sometimes because they didn't understand something - reasons vary. But one thing is invariable - every change can cause a disaster: directly or indirectly (through a chain of dependencies).

'We're adults, we want to decide on our own. Reviewing such a simple change is a waste of time.'

If it's such a simple change, the review should be fast and simple as well. What's the problem then? Do it and move along.

'It's a technical commit. It doesn't need any review.'

There are no 'technical commits'. Or quite the opposite - all commits are technical. Whether you're creating a new branch, bumping dependencies or switching externals - all of those changes have an effect on your artifacts and environments. That's why the review is needed.

And if by a 'technical commit' you mean one that relates more to the development process than to actual functionality, it requires a review even more, as it's far harder to test automatically.

'This offends us. Generating this report proves that you think we're idiots.'

The control / monitoring tools are not meant to be painful. They are meant to be an actual HELP, to validate and aid people's efforts. How? By making sure that:

  • people are consistent
  • they don't forget anything
  • the quality check on each level is performed within a reasonable time-frame

Changes in source code should be as transparent as possible - I've actually encountered a man-in-the-middle attack (enabled by a piece of malicious code) in the past: when it happened, there was no early-bird trumpet to warn us.

The truth beneath

Let's look it straight in the eye - people avoid doing Peer Reviews because:

  1. They don't like confrontation (and review is some kind of confrontation).
  2. They don't want to take responsibility for someone else's code - and reviewing another person's code implies some kind of responsibility ('I said it was OK, so if it fails, it will be my fault as well.')
  3. They don't want to get embarrassed if someone else finds a weakness in what they did.
  4. They don't want to risk good relations with peers who may react badly to criticism.
  5. Sometimes they are just afraid that the criticism may be right and they will get finger-pointed or their reputation will dive.

Some realize these reasons, some don't (or fool themselves that they don't) - but the reasons are always the same.

The benefit is for real

But the benefit is for real. Peer Reviews:

  • boost quality - by catching errors that could otherwise go unnoticed, or by pointing out that sometimes there's a better yet simple enough solution that could help reduce technical debt

  • level up technical knowledge - people with different levels of experience talk to each other, share opinions and discuss solutions - and those discussions are not theoretical, they concern the actual code: is there any better opportunity to teach and learn?

  • help spread solution-specific knowledge - now it's not only the author who knows what's under the hood: there's at least one additional person who's expected to know the code and understand what it really does. No docs needed.

  • 'force' people to actually talk to each other - programmers usually aren't the life and soul of the party, ya know. Actually, most of them secretly prefer to do their own stuff and throw the 'teamwork bullshit' out of the window. It's not incurable - it's just a 'mental barrier' to be crossed, and making them interact with each other during Peer Reviews seems to help a lot with that.

That's why we're gonna be persistent.
If the Peer Review process or tools suck, we'll get the feedback and improve them.
But Peer Reviews in general are not negotiable.