I've decided to publish this blog post because it describes a way of organizing work that is non-linear, highly async, and requires constant navigation across vast amounts of multi-dimensional, interconnected data. IMHO such a problem space requires a very unorthodox approach. This is a story about how I've managed to handle it (IMHO: successfully) - maybe you'll find it helpful as well.
I bet my everyday work routine (as a Principal Software Architect) is quite far from typical (for a software crafts(wo)man):
- I work with plenty of people (customers - startups, internal service teams, community figures, salespeople, etc.) every day.
- I deal with scenarios from different industries & geographies; on various levels of abstraction, maturity, and completeness.
- I switch contexts very frequently, maneuvering between re-active (someone wants something from me) and pro-active (I'm building something for others' benefit) modes.
- The universes I traverse (AWS services in constant motion, dynamic startup environments, inspiring ideas to be shared) are inter-connected and cross-dependent (e.g., a startup I collaborate with may be interested in a recent service update).
- I keep referring to data that is in constant motion; meaningful changes happen on a daily basis.
This couldn't be further from working through a sprint backlog or a project plan. I have a huge degree of freedom, but with great autonomy comes an even higher level of accountability. In short: I'm expected to make a real difference (to provide leverage) in everything I do.
Without diving into the details, it means I need to handle vast amounts of rapidly changing data (updates to services, startups' evolving strategies), identify correlations (what may be applicable to whom), and identify and prioritize suitable actions (e.g., once I have a critical mass of customers interested in AI/ML, organize a deep-dive workshop). Well, there's a reason why AWS Solutions Architects have an internal nickname of mentats (the iconic human computers from Frank Herbert's "Dune" series).
I've learned quickly that this is NOT the kind of work one can organize in a mailbox. Or within a to-do list. Or even on a Kanban board. I needed an effective method to collect, continuously adjust, and navigate complex, interconnected data. And what's even more important - to make sense of it (digest insights/conclusions/knowledge) when needed.
I had to organize my work entirely differently.
Applying critical thinking
My first observation was that there were some patterns in my work:
- there was a finite list of event types that kept happening (a customer having an issue with their architecture, a service receiving an update, a new valuable source of information being published)
- each of those events had its specific properties (e.g., customers use certain services and look to implement particular capabilities, a service upgrade provides new capabilities and solves concrete problems, etc.)
- typically, acting on the collected data happens at a different moment (later) than the collection itself, e.g., I need to assemble an audience large enough to conduct a workshop, or a customer may express interest in a service feature published a few months ago
Hmm, would it be possible to structure all the information I collect for future use somehow? In the form of a domain-specific dialect suitable for the specifics of my role? Hmm ...
Framing the problem
That was enough to draw some meaningful conclusions:
- The thing I definitely DON'T want is a notification-driven workflow, full of reminders, pop-ups, and interruptions - I prefer to pull the information I need with some sort of pre-designed queries (with variable parameters). I don't need nagging; it's counter-productive.
- I need to carefully log (daily) all the information that could be useful in the future.
- Daily logging is pretty much append-only (think of some sort of log/diary) and should be heavily indexed/tagged (which services/customers/problems/people an entry relates to; what the due dates, priorities, and kinds of activity to perform are, etc.)
- In parallel, I need to maintain (by updating daily) evergreen knowledge inventories: customers, services (ours and the competition's), events, even the principles/tenets we refer to.
The former (the log) mainly serves the purpose of tracking obligations, progress, and "eventually consistent" updates - because it's super-cheap to maintain.
The latter (the knowledge inventories) are the main basis for analytical queries/searches that help me make and prioritize decisions. Their updates have some latency (more effort/time is needed).
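To make it concrete, here's a minimal sketch of what a single day's log entry could look like - the note names, tags, and dates are purely illustrative (not my actual customers or vault layout), but they show the append-only, heavily-linked style:

```markdown
<!-- daily log note: 2023-11-07.md (illustrative) -->
#### Customer follow-ups
- [ ] send [[customers/AnyCompany]] the pricing comparison they asked for #follow-up #high-priority
- [x] answered [[customers/Example-Startup]]'s question about [[services/Amazon EKS]] autoscaling

#### Intake
- [[services/AWS Lambda]] published a new feature announcement - likely relevant to [[customers/AnyCompany]] #service-update
```

Every line is cheap to add, yet each link and tag becomes an index I can query later.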
To build such a model, I needed a tool that's both flexible (allows semi-structured information) and well-organized (provides query/search syntax to make sense of collected data). Fortunately, I already knew such a tool - Obsidian.
Obsidian works in a way that is very similar to my favorite knowledge management tool - Roam Research. But there are a few crucial differences that make Obsidian much better for the scenario described above:
- The approach to the data - Obsidian works (by default) on a local vault of markdown files; I can persist/back them up in a way that's fully compliant with my employer's regulations (we're speaking about strictly confidential data here!).
- Obsidian is not an outlining tool; it works on (markdown) documents; those documents can have a templated structure (e.g., each customer has a set of properties that is 80% the same, while the property values differ, of course) representing some sort of a DSL (domain-specific language) - one that corresponds to the patterns of my everyday work (see the sketch after this list).
- Obsidian works nicely with both hierarchical and graph data - I can both interconnect concepts (using bi-directional references) and nest them using logical composition (e.g., I have a collection of customers, so I nest them within a group named 'customers').
- Tickable checkboxes are first-class citizens in Obsidian. I need such a 'binary state' entity to represent the 'state of completion' of a work item (that I put in my daily log): e.g., I've answered the customer's inquiry, I've read the new blog post, I've shared the invitation for the immersion day, the customer has sent the missing architecture diagram, etc.
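To give a flavour of that DSL-like structure, here's a hypothetical customer note built from a template - the property names and the `key:: value` convention below are made up for this example (inline fields of this form are understood by community plugins such as Dataview, but a plain text search over them works just as well):

```markdown
# AnyCompany (customer)
- stage:: Series A
- segment:: #fintech
- services-in-use:: [[services/Amazon ECS]], [[services/Amazon Aurora]]
- interested-in:: [[topics/GenAI]], [[services/Amazon Bedrock]]
- key-contact:: [[people/Jane Doe]]

## Open items
- [ ] review the proposed multi-account setup
- [ ] share the [[events/AI-ML Immersion Day]] invitation
```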
How I use Obsidian
Obsidian is a very flexible tool. It doesn't enforce any format or structure. That was a good thing, mainly because I didn't know that structure initially. I knew it had to evolve (building the version I'm reasonably happy with took three months), so my crucial requirement was that the tool enables (or at least doesn't prevent) such an evolutionary approach.
I started by defining key entities (nouns - notes in Obsidian terminology) and their main characteristics (adjectives - tags). Nouns of the same conceptual category (e.g., people) were grouped together (as subdirectories; this grouping is visible in the link text tokens). The definition of a noun is quite flexible - it covers both an AWS service (e.g., EC2) and a particular date. What do they have in common? They can be bi-directionally referenced by other nouns.
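As a rough picture, a vault organized along these lines could look like the layout below (directory and file names are illustrative). Because notes live in subdirectories, a link such as `[[customers/AnyCompany]]` carries its category right in the link text:

```
vault/
├── customers/
│   ├── AnyCompany.md
│   └── Example-Startup.md
├── services/
│   ├── EC2.md
│   └── Amazon Bedrock.md
├── people/
│   └── Jane Doe.md
├── daily/
│   ├── 2023-11-06.md
│   └── 2023-11-07.md
└── templates/
    └── customer.md
```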
The verbs were typically represented by structural blocks (in a pre-defined format) within a note (did I mention that Obsidian is far more a markdown editor than an outliner?). So each time I searched for a verb, I was in fact using a regular expression corresponding to the assumed format.
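For example, a verb could be captured as a consistently formatted task line inside a daily note (the `ask::`/`share::` prefixes are an invented convention for this sketch, not a fixed Obsidian feature):

```markdown
- [ ] ask:: [[customers/AnyCompany]] for the missing architecture diagram, due [[2023-11-10]]
- [ ] share:: the [[events/AI-ML Immersion Day]] invitation with [[customers/Example-Startup]]
```

Since Obsidian's search accepts regular expressions between slashes, a query like `/ask::.*AnyCompany/` pulls up every outstanding 'ask' involving that customer.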
I used templates to help maintain the note structure (so all the blocks representing verbs were initially present - but I was free to remove them if needed). Note/attribute name auto-completion helps me keep the model consistent (avoid typos, etc.).
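Such a template can be as simple as another markdown file in a templates folder; Obsidian's core Templates plugin substitutes placeholders like {{date}} when the template is inserted. A daily-note template in this style (again, just an illustration) might pre-seed the verb blocks:

```markdown
<!-- templates/daily.md (illustrative) -->
## {{date}}
#### Customer follow-ups

#### Intake

#### Ideas to share
```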
Reaping the fruits
Writing down all this stuff in a disciplined manner is one thing. Yes, I like my stuff organized, but it's not where the value really is.
What matters most is how I interact with this data. Obsidian lets me navigate the knowledge base in a blitz, in a way that feels natural (well, it's my DSL after all, so it should be intuitive to me). Thanks to the bi-directional links, tags, and the powerful query engine that makes proper use of them (it recognizes blocks, sections, to-do items, etc., and can be embedded within any note), I can pretty much instantaneously:
- find out what follow-ups from customers I should expect by today (e.g., action items they've committed to), or in fact by any other date (a sample query sketch follows this list)
- find out who could be interested in the recent update in service X (because they already use it / are planning to use it)
- sensibly navigate across new content I didn't have time to get familiar with (e.g., by analyzing how frequently the service it refers to appears in my most pressing topics)
- identify un-linked references (when the same concept appears in several situations) and link them together - such "topic clusters" help surface interesting new facts that could be addressed together (e.g., it's the third customer this month that has mentioned an issue with competitor's service XYZ)
- flexibly adapt my tooling to the ever-changing conditions of my work (by creating new nouns, verbs and updating the templates that refer to them)
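To show what such an embedded query can look like: Obsidian can render live search results inside any note via a `query` code block, and the search syntax understands tasks, tags, paths, and sections. A note I open in the morning could contain something like this (the customer and folder names are, as before, illustrative):

````markdown
```query
task-todo:(AnyCompany) path:"daily"
```
````

Rendered in place, it lists every still-unticked checkbox from my daily logs that mentions that customer - effectively answering "what do I owe them / what do they owe me?" without any manual digging.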
Knowledge is power.
Being able to link facts, refer to past events, and make sense of joint information that spans time and space gives you an incredible advantage. The human brain is not bad at that, but it has its limitations, especially when the information is updated frequently and its sheer breadth and size strain your cognitive capacity. Obsidian is one of the tools that can help you extend your brain's capabilities. It doesn't solve the problem by itself - the whole secret is in how you set it up (to complement your own capabilities), but it can truly make a difference.
At least it did in my case.
P.S. Yes, I still use Roam Research for my private topics. It's an entirely different use case, and RR truly shines there. I don't mind paying two license fees - both tools are definitely worth their price tags.