TL;DR Architecture should be kept as close to the code as possible & the same applies to docs - otherwise both are subject to near-instantaneous entropy. But what about items that still need fixing - differences between the architectural vision & the actual implementation, or deviations from agreed conventions? My proposal is to keep them out of the ticketing system & use code-level metadata instead. Treating such an incident as just another semantic construct has certain qualities: it makes the issue harder to omit (or keep omitting ...) & it can shorten the feedback loop.

Disclaimer: all examples below are in C# and PowerShell.


A few days ago I was presenting at my favourite local meet-up group - WG.NET. My session was about automating architecture governance using concepts like Continuous Inspection & Convention Testing - something many people know about, many admire (as an idea), but very few actually do in practice. I'm not going to cover all of that in this blog post; I'd like to focus on one very simple concept that (based on the feedback I got afterwards) seems to be the most controversial part of what I was speaking about:

The idea of putting identified issues directly in the code - as close to the root cause of the issue as possible. I call them "FixIts".

Metadata is a powerful concept

What does it even mean?

  1. During code inspection (review, troubleshooting, writing new code - literally whatever) I identify a quality issue with the code: bad engineering, bad design, broken architecture conventions, etc.
  2. I annotate the erroneous piece of code (class, interface, method, property - whatever) with metadata that doesn't do anything by itself (there's no code that changes anything at runtime), but is syntactically bound to the problematic code construct:
[FixIt(AssignedTo = ..., Type = ..., Severity = ..., Problem = ...)]
public class CodeWithAnIssue
{
    // ...
}
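
For reference, here's a minimal sketch of what such an attribute definition could look like in C# - the property set mirrors the usage above, but the exact names & types are illustrative assumptions, not a prescribed contract:

using System;

// A minimal sketch - AllowMultiple lets several FixIts stack on one construct.
[AttributeUsage(AttributeTargets.All, AllowMultiple = true)]
public sealed class FixItAttribute : Attribute
{
    public string AssignedTo { get; set; } // who should handle it (person or team)
    public string Type { get; set; }       // e.g. "BrokenConvention", "DesignSmell"
    public int Severity { get; set; }      // higher = more urgent
    public string Problem { get; set; }    // short description of what's wrong
}
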
Of course not all kinds of issues can be handled this way:

  • if a tester identifies a bug, I won't ask her to put it in the code - QA should stick to the tools they feel comfortable with, e.g. the ticketing system
  • needless to say, it's not always clear where the exact root cause of a functional bug is - sometimes finding that out is 90% of the work

So this technique is reserved mainly for architecture-related issues found during manual code inspection or just while reading code for whatever reason.

"Start with Why"

But why do it in the first place? What's the benefit?

Well, the goal is to "couple" the problem with a visible indicator. Devs hate issue trackers - these get cluttered with more & more issues over time: sooner or later no-one pays attention to stuff older than X months or with a priority lower than Y, duplicates get less & less manageable, etc. The actual issue gets forgotten, then accidentally re-invented, then forgotten again, et cetera. But if you stick a "FixIt" note directly onto the code, it's hard to omit the issue once you just can't avoid having it in front of your eyes.

Yes, it's also about a psychological effect - the point is to make developers remember the issue that's still there & the best way to achieve that is to use the code itself: devs spend their time with the code, not the issue tracker.

And last but not least, it's about minimizing the threshold to register an issue & collaborate on it - no separate tool involved, no chance of "breaking the link" (an issue in the tracker pointing at code that has already been refactored & renamed) - all you do is: slap an attribute, make a commit, issue a PR to the proper recipient (owner) - easy peasy.

But why metadata (an annotation or an attribute), not just a comment?

The idea is not just to have this information in front of engineers' eyes, but to make it a bit more structured (hence more usable), so that:

  • there's some obligatory information to help classify the problem (priority? team? impact?) - preferably something that won't have to be changed over time
  • it can be extracted, e.g. by code parsers or compiler SDK APIs (like Roslyn for C#) - to provide some tracking capabilities (over time), statistics or just search - see the sketch after this list
  • syntactic constructs usually have some support in IDEs or other tools programmers use every day -> this makes working with them easier (e.g. "find me all FixIts assigned to me within this assembly" becomes a single shortcut in the IDE)
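
To illustrate the extraction point, here's a hedged sketch using plain reflection - it assumes the FixItAttribute sketched earlier & scans type-level annotations only (Muninn itself may well work differently):

using System;
using System.Linq;
using System.Reflection;

public static class FixItScanner
{
    // Enumerates FixIts declared at type level in the given assembly.
    // (Methods & properties would need an extra loop - omitted for brevity.)
    public static void DumpFixIts(Assembly assembly)
    {
        var findings = assembly.GetTypes()
            .SelectMany(type => type
                .GetCustomAttributes<FixItAttribute>()
                .Select(fixIt => new { Type = type, FixIt = fixIt }));

        foreach (var finding in findings)
        {
            Console.WriteLine(
                $"{finding.Type.FullName}: [{finding.FixIt.Severity}] " +
                $"{finding.FixIt.Problem} (assigned to {finding.FixIt.AssignedTo})");
        }
    }
}

Usage is then a one-liner, e.g. FixItScanner.DumpFixIts(typeof(CodeWithAnIssue).Assembly);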

Controversies

No solution is perfect & neither is this one.

If your team(s) ignore the annotations, their number will just keep growing & after some time (beyond some threshold per class/module) they'll significantly reduce overall code readability - the key prerequisite here is to keep their number at a reasonable level.
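
One way to keep that level in check is a convention test that fails the build once the count crosses an agreed limit - a minimal sketch below (xUnit & the threshold value are just my assumptions):

using System.Linq;
using System.Reflection;
using Xunit;

public class FixItConventionTests
{
    [Fact]
    public void Codebase_should_not_accumulate_too_many_FixIts()
    {
        const int maxAllowedFixIts = 20; // arbitrary threshold - agree on your own

        // Count all type-level FixIts in the assembly under inspection.
        var fixItCount = typeof(CodeWithAnIssue).Assembly
            .GetTypes()
            .SelectMany(type => type.GetCustomAttributes<FixItAttribute>())
            .Count();

        Assert.True(fixItCount <= maxAllowedFixIts,
            $"Found {fixItCount} FixIts - the agreed maximum is {maxAllowedFixIts}.");
    }
}
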

There's another potential problem, related to teams' maturity - as the issue is a piece of code, it can be added, changed or removed just like any other piece of code - in theory nothing prevents someone from just deleting a FixIt w/o fixing the actual issue or saying anything to anyone. That's why engineers have to really embrace the idea behind this technique, instead of treating it as a rotten egg to get rid of ASAP or as an opportunity to roast someone they don't like.

In my case I trust my engineers, but I also like to have some sort of inspection mechanism to verify what's really happening :) I've created a tool named Muninn that dumps metadata-based extracts from code & exposes this data as PS objects that can be transformed, filtered or searched through from a PowerShell console (code examples below are simplified):

# show top 5 FixIts for a particular team, sorted by their severity (descending)
Cake -Task Find-FixIts | Where-Object { $_.Team -eq $teamName } | Sort-Object Severity -Descending | Select-Object -First 5 | Format-Table

I also dump this information into markdown extracts that get injected into DocFx, to create evergreen technical documentation that always corresponds 1:1 to the code (the CI loop takes care of that):

# make a markdown report out of all FixIts & dump it into a file
Cake -Task Find-FixIts | ConvertTo-Markdown | Out-File $fixItReportPath
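
For the curious: the markdown conversion itself can be as trivial as one table row per FixIt. A simplified C# sketch of the idea (not Muninn's actual code - the tuple input shape is a hypothetical format):

using System.Collections.Generic;
using System.Linq;
using System.Text;

public static class FixItMarkdown
{
    // Renders FixIts as a markdown table - one row per finding, most severe first.
    public static string ToMarkdown(IEnumerable<(string Location, FixItAttribute FixIt)> findings)
    {
        var report = new StringBuilder();
        report.AppendLine("| Location | Severity | Assigned to | Problem |");
        report.AppendLine("|----------|----------|-------------|---------|");

        foreach (var (location, fixIt) in findings.OrderByDescending(f => f.FixIt.Severity))
        {
            report.AppendLine($"| {location} | {fixIt.Severity} | {fixIt.AssignedTo} | {fixIt.Problem} |");
        }

        return report.ToString();
    }
}
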

Such simple techniques can be very helpful for FixIt "governance" :)

Wall of shame equivalent?

OK, but how does this concept differ from the infamous "wall of shame"? Doesn't it just make people feel worse by making their quality problems so visible & explicit?

IMHO the issue with the "wall of shame" approach is that its stigmatization comes from the "the worst" association - "these were the (top 10) worst issues, so these must be the worst, shittiest developers, so why don't we have a good laugh at their expense?"

FixIts are nothing like that - they are like any other "ticket" format - the only difference is that if a piece of code (not a person!) accumulates a lot of issues, it accumulates a lot of FixIts, stacked visibly together. OK, this can lead to some assumptions and/or judgements, but the idea is not to run a full codebase review every 3 months (which would result in hundreds of FixIts appearing all at once in a very short period of time), but to slap these on continuously, on a daily basis, during normal, everyday work.

This idea can be twisted just like any other (e.g. if people stop communicating directly & only talk via FixIts) - but IMHO it provides a certain advantage without introducing significant new risks or increasing the probability/severity of the ones already present.
