This is the second (& final) post in the mini-series. The prior one can be found here. Where are we right now? I've explained why the "depth" of Gen AI usage matters, and I've already covered two of the three obstacles on the way to using Gen AI "deeply" while coding. All that's left to do at this point is to reveal and clarify the final challenge ...
... which is:
We, software engineers, suck big time when it comes to expressing our thoughts in written form.
How so?
As we all know, when communicating with LLMs, we need to clarify our intent with natural language (well, when Andrej Karpathy said that "English is becoming the hottest new programming language", he said that for a reason). And that requires a mix of two (IMHO, frequently neglected) skills:
- Abstract thinking - the ability to notice/extract abstractions, name them precisely, stack them on top of each other in a way that hides complexity, and last but not least - work with them longer-term.
- Writing things down - precisely, concisely, following a logical structure in a way that's understandable from the reader's (not just the writer's) perspective.
The majority of software engineers struggle with at least one of these.

Abstract thinking
Software engineers are typically smart people who have been taught a bunch of abstractions in the initial stages of their careers: mathematics, statistics, GoF design patterns, EAI patterns, DDD constructs, FP/category theory patterns, relational data modelling patterns, etc. But we're poorly trained (if trained at all) in formulating and sharing new models. Even the notations that would simplify that (like UML, BPMN, or ERD diagrams) are now considered passé, overkill, or too "expensive". Code (& programming languages) is the only abstraction we commonly use and understand.
As a result:
- We generally suck at naming things.
- We spend a massive amount of time reverse-engineering code to ephemeral models that exist (temporarily) just in our heads.
- We don't share those ephemeral models; everyone (re-)builds their own individually & independently.
- We're not used to the inconvenience of accepting and adopting someone else's mental model - we always work with our own.
But without abstractions, how can we succinctly and effectively explain our stuff to LLMs?
Writing things down
Let's face it - communicating clearly in writing is hard and requires a certain "muscle". Natural talent is not enough - one has to train for it, and proper guidance (/feedback) is not optional. We're bad at writing because we do so little of it in our daily work. Many engineers avoid writing to such an extent that they even ask their PMs/leaders to write their work items in systems like Jira or Linear.
LLM providers are well aware of that, and that's why model guidelines are full of explanations of such fundamental communication patterns as:
- one-shot, multi-shot
- chain-of-thought
- Minto's pyramid principle
- PEEL, AIDA, BRIEF, and many more
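To make the first two patterns concrete, here's a minimal sketch of how a multi-shot prompt with a chain-of-thought cue might be assembled. The task, the example commit messages, and the function name are all mine (purely illustrative), not taken from any provider's guidelines:

```python
# Sketch of two patterns above: multi-shot prompting (show worked
# examples before the real query) and a chain-of-thought cue (ask the
# model to reason before answering). All content here is hypothetical.

def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt: task description, worked examples, then the query."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    # Chain-of-thought cue: nudge the model to reason step by step.
    lines.append("Output (think step by step before giving the final answer):")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    task="Classify the commit message as 'feature', 'fix', or 'chore'.",
    examples=[
        ("Add retry logic to the payment client", "feature"),
        ("Bump lodash to 4.17.21", "chore"),
    ],
    query="Correct off-by-one error in pagination",
)
print(prompt)
```

Notice that the "hard part" is not the code - it's choosing a precise task description and representative examples, which is exactly the writing skill this post is about.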
Sorry to say it so bluntly, but an average software engineer:
- Fails to make a concise point / point out an underlying problem (without getting lost in symptoms and irrelevant pieces of context)
- Says what (s)he knows instead of what would move the conversation forward
- Can't craft a TLDR or simply summarize things effectively
- Is not able to structure a document in a way that follows some underlying idea
- Sucks when it comes to making a (sales) pitch - convincing someone else of her/his idea with a simple doc (where status, charisma, volume of voice, or crude boldness mean less than in conversation)
- Ain't able to put her/himself in the shoes of the reader - starting with their concerns/questions/problems
- Does poorly when forced to use such basic tools of expression as: examples, analogies/contrasts, stories/anecdotes, metaphors, (rhetorical) questions
Oh, the irony
Long story short, many developers under-utilize the potential of Gen AI (e.g., by avoiding its usage) simply because it's ... too human when it comes to interaction "expectations". They find it hard to clarify their intent in English, or it simply takes them too much time to do that. They find NL too imprecise, too vague, and too open to interpretation, which apparently discourages them from trying. And so they end up in the middle of their comfort zone - the IDE and code they craft manually ...
On top of that, they overlook the fact that these shortcomings can be addressed with some deliberate effort (which means, YES, more writing):
- Proactively documenting projects' dictionaries, assumptions, constraints, invariants, etc.
- Preparing examples & templates that one could always refer to
- Meaningful comments (in code/PRs/readme-s) that actually clarify the goal and intent, instead of obvious facts about the code constructs used ...
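A (hypothetical) illustration of that last point - the contrast between a comment that merely restates the code and one that records goal and intent, the kind of context an LLM (or a colleague) actually needs. The functions and the backoff scenario are invented for the example:

```python
# Hypothetical example: two identical-looking functions, two very
# different comments. The first restates the code; the second captures
# the intent and constraints behind it.

def retry_delays_restated(n: int) -> list[float]:
    # Loop n times and raise 2 to the power i.   <- just restates the code
    return [2.0 ** i for i in range(n)]

def retry_delays_with_intent(n: int) -> list[float]:
    # Exponential backoff: the downstream gateway throttles bursts, so
    # we double the wait between retries to stay under its rate limit.
    # Capped at 30s so a stuck job still surfaces in monitoring quickly.
    return [min(2.0 ** i, 30.0) for i in range(n)]

print(retry_delays_with_intent(6))  # → [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Only the second comment would let a reader (human or machine) safely change the numbers without breaking an unstated assumption.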
There's a bit of irony in the fact that nearly every organization I know has struggled with such basic things as:
- Onboarding materials for new joiners
- Up-to-date, complete-enough documentation of key domain models
- Writing RFCs, ADRs, or proper Post Mortems
- An incrementally built knowledge base for troubleshooting/maintenance/incident handling
And nearly none of them had a good idea of how to address those deficiencies. Bah, even when they had one, the execution failed without exception.
Will the rise of new coding tools (and the ROI promise related to them) force such organizations to (finally) do for machines (LLMs) something they could have never been bothered to do for their human colleagues?