Doomsayers were claiming that software developers would soon be replaced by Artificial Intelligence. Other "visionaries" predicted the inevitable rise of a generation of no-code/low-code tools that would make it possible to develop applications in natural language and/or in an entirely visual way - practically turning every willing person into a capable developer.
None of these things happened. And I daresay they won't (in the foreseeable future), but the way software engineers work is indeed evolving.
What is commonly referred to as Artificial Intelligence (while, in fact, it's just a clever application of so-called Machine Learning) silently crept into our everyday toolsets, bringing both tangible benefits and dangerous side effects. The benefits are rather obvious:
- real-time code completion assistants not only speed up development but can also adapt to our personal style and the (consistent) conventions we use
- AI-powered automated testing tools can identify edge cases, detect anomalies, and (of course) run indefinitely without any fatigue, learning from actual users' activity (supervised-learning style)
- more and more AI-powered code generators (sufficiently trained, which is no longer an issue) are becoming surprisingly good at solving generic, yet rather atomic, problems - which, let's be frank, is at least 90% of what we do while coding (see the sketch after this list)
- last but not least - ML tackles our operational concerns: it predicts problems (disruptions, outages) and identifies their root causes, flags inefficiencies (e.g., recommends changes to provisioned configuration), and spots performance or security risks in code before it even gets deployed
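To make the third point concrete, here is a minimal sketch of the kind of generic, atomic problem such code generators handle well - deduplicating a sequence while preserving order. The function name and the code itself are my own illustration, not the output of any particular tool:

```python
from typing import Hashable, Iterable, List, TypeVar

T = TypeVar("T", bound=Hashable)

def dedupe_preserving_order(items: Iterable[T]) -> List[T]:
    """Return the unique elements of `items`, keeping their first-seen order."""
    # A typical "atomic" task: trivial to specify, tedious to retype by hand.
    seen = set()
    result: List[T] = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

if __name__ == "__main__":
    print(dedupe_preserving_order([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Small, self-contained utilities like this are exactly where the tools shine - and exactly the kind of generated code that still deserves a human glance before it gets merged.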
And these are NOT unproven prototypes. I know and use tools from three of these four categories on a daily basis. That's how I've learned how addictive they are :) Nevertheless, they DO speed things up, and engineers' time is the most precious resource Internet companies have at their disposal these days.
However, there are associated risks, and they should be called out explicitly. Hype is hype, but let's stay real - even if we accept a particular risk, let's make sure it's adequately controlled and monitored.
- First of all, you need to know what data the AI model(s) are based upon. Some generic, shared data? Or your own data - the data you've graciously agreed to give access to?
- Trivial, repetitive things should indeed be automated (e.g., generated), but: relying on automation means potential atrophy of fundamental skills (which are supposed to be the foundation for higher-level skills ...); automatically generated code (as something not created with full attention) quickly becomes unfamiliar and troublesome to maintain (and AFAIK, we don't have bots for that yet ...)
- AI (as we know it these days) does not think. It's just advanced statistics built on large volumes of historical data, correlations between variables, and other regularities. Relying exclusively on such a mechanism means replacing creativity (aka out-of-the-box thinking) with sophisticated copy'n'paste.
- Future generations of developers are quite likely to be taught, led, and "supervised" (code-wise) by trained models instead of thought leaders and accomplished engineering mentors. That doesn't sound 100% right ...
I believe my point is pretty clear by now. AI-powered software development tools make a lot of sense for AIDING development efforts (as an auxiliary mechanism) but should not take actual human engineers' attention entirely out of the equation. It's dangerous to think of them as an equivalent of an autopilot; I'd rather suggest treating them as trusted advisors.
A sidekick should be treated as a sidekick: useful and convenient, but approached with an appropriate mix of confidence, trust, and caution. Human engineers should still be 100% in charge, taking full accountability for the products they build.