When we consider the changes (to our daily lives, jobs, and society as a whole) brought about by Gen AI, we typically focus on the audacious claims of the creators of Gen AI-powered tools: "This will disrupt X, Y is so going down, jobs in Z are becoming history as we speak." In many cases, it's pure marketing & wishful thinking, fueled by a desire for publicity. The real changes (that do happen!) are much more "organic," gradual, and ... spontaneous (not driven by anyone's commercial agenda) - that's why we frequently notice them with a significant delay.

Here's one example that comes to mind:

Context: Knowledge workers, especially the more tech-savvy ones (programmers, system analysts, data engineers, etc.), are among the early adopters of Gen AI. That's not only because they are used to working with modern, often unpolished technologies. They also use various DSLs (Domain-Specific Languages, such as programming languages or formal notations/models) in their daily work, and LLMs truly shine in such scenarios.

Question: These groups are used to testing/proving their capabilities with various certifications (e.g., AWS Solutions Architect Associate/Professional, Certified Information Systems Security Professional, TOGAF, Cisco CCNA). However, skillfully fine-tuned/augmented LLMs can easily pass the exams for such certifications - and they are available to practically everyone. What impact will this have on knowledge workers' attitude toward said certifications?

Hypothesis: I bet that such expertise (memorizable, based on a bounded and relatively stable body of knowledge) has become a commodity and hence can be fully externalized (as a tool/service/assistant). One does not have to "load" it into one's brain anymore — as long as it can be used quickly, easily, and confidently (enough).


"Quickly" and "easily" do not seem controversial at all. The trickiest of these three adverbs is definitely "confidently." What kind of confidence are we talking about in the case of probability-based, habitually hallucinating, and nearly wholly non-transparent (in terms of data input & training process) LLMs? Well, realistically speaking, for such a tool to be helpful, it does not have to be 100% credible & 100% correct. We don't need it to replace humans but to do the tedious "heavy lifting" for us, so the starting point is an AI-generated "draft." And then, our duty is to validate, double-check, correct, polish, and finalize. The machine does >80% of the effort, while the human does the sanity check & provides the finishing touches.

Anyway, back to the aforementioned attitude toward certifications - will people see any point in memorizing ...

  • library/framework syntax & idiomatic constructs
  • endpoint API structure & nuances (e.g., behavioral coupling of "hidden" state machines)
  • all the modes, options, and variants for cloud service provisioning
  • etc.

I don't think so. Neither will they see any value in proving to anyone that they have memorized that stuff. In other words, such certifications have completely lost their raison d'être.


That doesn't mean all certifications will be gone. There'll always be a need to prove specific capabilities, but the list of desired capabilities is about to change completely. Working backward from employers' expectations, we're likely to soon be tested on:

  1. practical classification and decomposition of problems (based on symptoms, signals & scarce contextual information)
  2. practical solving of real problems at a higher level of abstraction (using lower-level tools, potentially even Gen AI-powered ones) - the CKAD (Certified Kubernetes Application Developer) style, but 5x more so 😄
  3. universal mental models instead of particular implementations (e.g., instead of Apache Kafka familiarity, the conceptual differences between message queues, event streams, and transaction logs; or the fundamental properties of message-based communication: message ordering, at-least-once/exactly-once/at-most-once processing, etc.) - see the sketch after this list
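To show what such a "universal mental model" looks like in code rather than in a specific broker's API, here's a tiny Python sketch (all names made up for the example): under at-least-once delivery, the same message can arrive more than once, so a correct consumer must be idempotent.

```python
# Illustrative sketch: at-least-once delivery means duplicates are possible,
# so the consumer deduplicates by message id (i.e., it is idempotent).
# All names are invented for this example; no specific broker is implied.

processed_ids: set[str] = set()
balance = 0

def handle(message_id: str, amount: int) -> None:
    global balance
    if message_id in processed_ids:   # replayed delivery: ignore it
        return
    balance += amount                 # the side effect we must not double-apply
    processed_ids.add(message_id)     # in real life: stored durably & atomically

handle("msg-1", 100)
handle("msg-1", 100)  # redelivered after a retry/crash; safely skipped
print(balance)        # 100, not 200
```

Grasping why the dedup step exists (and why it must be atomic with the side effect) transfers across Kafka, RabbitMQ, SQS, and whatever comes next - which is exactly the point.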

The education of future generations of knowledge workers (and not only them) will change drastically. We'll (hopefully) get much better at asking good (and context-specific) questions aimed at understanding the problems we're trying to solve. Otherwise, how can we instruct our LLMs?

We'll also have to get better at critically assessing what LLMs produce for us - setting up clear success criteria, asking probing questions, and identifying/validating all relevant architectural properties (and their respective desired quality levels). That IS a skill as well.
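One concrete (if simplified) way to practice "clear success criteria": write the acceptance checks as executable code before prompting, then run the model's draft against them. Everything below (the hypothetical slugify function we asked for, the drafted implementation) is invented just to show the shape of the exercise.

```python
# Success criteria written down BEFORE prompting the model.
# slugify and llm_drafted_slugify are hypothetical, for illustration only.

def check_slugify(slugify) -> list[str]:
    """Return a list of failed criteria; an empty list means the draft passes."""
    failures = []
    if slugify("Hello World") != "hello-world":
        failures.append("lowercases and hyphenates")
    if slugify("  trim  me  ") != "trim-me":
        failures.append("strips and collapses whitespace")
    return failures

# Pretend this came back from the model:
def llm_drafted_slugify(text: str) -> str:
    return "-".join(text.lower().split())

print(check_slugify(llm_drafted_slugify) or "all criteria met")
```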

I expect a significant shift in the ratio between "traditional" coding time and "indirect" coding time. A lot of creation/refactoring work will now involve setting declarative rules, asking questions about the code ("querying" code at scale, e.g., while looking for clumps of coupling), and "ordering" cross-file/-routine/-module changes ("make sure that in every place where the retail price is calculated, the volume-based commission is updated").
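As a toy illustration of "querying" code programmatically, the sketch below uses Python's standard ast module to list every call site of a given function. The name calculate_retail_price is made up, echoing the example above; a real setup would run something like this (or an LLM-driven equivalent) across a whole repository.

```python
# Toy "code query": find all call sites of a given function using the
# standard-library ast module. SOURCE and the function name are illustrative.

import ast

SOURCE = """
def calculate_retail_price(base, margin):
    return base * (1 + margin)

def checkout(cart):
    return sum(calculate_retail_price(i.base, i.margin) for i in cart)
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        if node.func.id == "calculate_retail_price":
            print(f"call site at line {node.lineno}")
```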

YES, that means that the chat window will be a crucial component of every IDE (which is already happening, by the way). Interesting times ahead.
