Mediations #28: Evolving Software Engineering with AI
Is the definition of software engineering changing with AI?
I’ve been thinking about how the definition of software engineering has been changing with AI, as everyone is expected to do more.
An engineering manager used to be the person focused on the team’s strategy, developing people’s skills, hiring, project management, and so on. They were deep in meetings and were the person who connected the dots across product, design, and engineering without actively coding on a project, though still reviewing the written code from time to time. Now, they also need to code themselves.
A senior engineer who used to focus on bringing a somewhat ambiguous project or feature to completion now needs to lead a larger project with fewer team members. A junior engineer who used to take smaller tasks is now expected to handle larger ones (sometimes up to the previous senior level).
Behind all this stretch is AI. The capacity AI has “freed up” needs to be utilized. Instead of leveraging that freed-up capacity to innovate, many organizations are squeezing more out of their people to increase efficiency.
I don’t see anything new in the world when I look at the overall scenario. Every new tool that can affect a team’s productivity is always sought after. All these efficiency efforts must eventually translate into money (that’s where the current debate over AI is). The first step has always been to exhaust existing capabilities while organizations figure out where and how to utilize the new capacity. We’re seeing the same with AI usage in software engineering. But how can we, as the people being exhausted, navigate this change?
Although definitions of roles and titles vary across organizations, the definition of software engineering doesn’t. That’s where I believe Dave Farley’s approach still holds and is sound. **Dave says that software engineers have to be experts at learning and managing complexity.**
Becoming an expert at learning has become much easier thanks to AI. AI can provide direct feedback on code, writing, system design, and more (not 100% accurate, but hey, humans can’t provide 100% precise feedback either) before we ask others for theirs. With fast feedback cycles, we can increase efficiency and produce higher-quality work, which, in turn, reduces the effort others must put into giving feedback. However, we can’t say the same about being experts at managing complexity—the other half of the heart of the craft.
The more AI-produced code tries to make its way to production, the more we need to get better at managing complexity. Any codebase, including its structure, architecture, and patterns, must be consistent and simple so that AI—as well as humans—can easily follow it (with accurate instructions). If the AI-generated code increases the complexity, we need to know how to simplify it.
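As a hypothetical illustration of what that simplification can look like (the names here are invented for the example), AI tools sometimes generate a stateful class around what is really a single pure transformation; collapsing it back to a function keeps the codebase simple and easy to follow:

```python
# A shape AI assistants sometimes produce: a class wrapping
# what is really one pure transformation.
class EmailNormalizer:
    def __init__(self, email: str):
        self.email = email

    def normalize(self) -> str:
        return self.email.strip().lower()

# The simpler, easier-to-review equivalent: one pure function.
def normalize_email(email: str) -> str:
    """Trim whitespace and lowercase an email address."""
    return email.strip().lower()

print(normalize_email("  Alice@Example.COM "))  # alice@example.com
```

Both versions behave the same; the second leaves less surface area for the next human or AI reader to follow.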
Moreover, software engineering these days is less about writing code and more about reviewing the code that AI tools produce. Code review is a different skill from writing code, and it needs training. Maintaining critical judgment over AI output is as crucial as ever.
That’s why engineers must understand the fundamentals of software engineering (principles such as SOLID, modularity, and separation of concerns), architectural capabilities (such as maintainability, extensibility, and reliability), and general concepts (such as observability and monitoring). Without understanding the fundamentals of the craft, we can neither leverage AI effectively nor stay up to date with ever-evolving software engineering.
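To make one of those fundamentals concrete, here is a tiny, hypothetical sketch of separation of concerns: the first function tangles computation, formatting, and I/O together, while the second version isolates each concern so every piece can be reviewed and tested on its own:

```python
# Mixed concerns: computation, formatting, and I/O in one function,
# which is harder to test and to review.
def report_total_mixed(prices):
    total = sum(prices)
    print(f"Total: ${total:.2f}")

# Separated concerns: pure computation and pure formatting,
# with I/O pushed to the caller.
def compute_total(prices):
    return sum(prices)

def format_total(total):
    return f"Total: ${total:.2f}"

print(format_total(compute_total([1.50, 2.25])))  # Total: $3.75
```

The separated form is also the one an AI assistant (or a reviewer) can reason about locally, which is exactly the kind of simplicity the craft asks for.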
As organizations extract efficiency with AI, the engineers who thrive will be those who double down on fundamentals to manage that complexity. Our roles are stretching, but software engineering’s definition remains constant: expertise in learning and managing complexity. Once we master that, AI can actually amplify rather than exhaust us. As Martin Fowler said, “Good programmers write code that humans can understand.” In the age of AI, good programmers ensure AI writes code that humans (and AI) can understand, maintain, and evolve.
Good to Great
Good: Beliefs of Lee Robinson. Although I see one or two things differently, it’s a good list.
Better: Thank you for being annoying (also inspired one of my entries below).
Great: Nothing great I found this week.
I wrote about
- As an engineer, you have more power than you think
- I added 10 new notes to my Zettelkasten.
P.S. The Mexico F1 GP was one of the best of this season. It was super fun!
