I’ve moved my newsletter to Substack because Kit increased its pricing to $390/year, which I’m not willing to pay. Nothing changes for you. All my writing is still on my blog; Substack is only for Mediations.
The more I use AI, the more I realize how much cognitive biases influence my decision-making. Whenever I task AI with something, I unconsciously rely on its output and use the result as confirmation of my assumptions, thoughts, and ideas. The problem is that I have flaws, and neither my assumptions nor my ideas are always accurate.
The problem is also apparent in topics where I consider myself knowledgeable. When I use AI to think through an issue I’m facing at work (e.g., product or software problems) or to debate a topic, I don’t recognize AI’s mistakes because I lower my guard quickly. I trust it too much.
I was talking with one of my colleagues, a good and experienced engineer. They told me about a problem AI introduced into the codebase while they were working on a new feature. They lost a day and a half fixing it.
They didn’t catch the issue initially because it only surfaced in edge cases. While guiding the AI, they had a flaw in their thinking and prompts, which the AI used to build the problem into the system. Yet the issue was obvious in hindsight.
A few days earlier, they had used AI to refactor the codebase, and it did a remarkable job. That reliable output biased their judgment on the next task, and they overlooked an obvious problem, leading to major rework. My colleague unintentionally expected the AI to do a good job again and trusted the result, amplifying confirmation bias.
My colleague is not a novice AI user. They use AI every day for a variety of tasks, which I think also causes problems. The more AI tools get to know the person, the more the person trusts the results, and the less they recognize the mistakes.
This mutual confirmation loop reinforces confirmation bias, causing the person to overlook errors. In high-stakes domains such as finance or healthcare, these situations can cause major problems and cost a fortune to recover from.
While my colleague and I are knowledgeable in the fields where we use AI, imagine juniors or novices who can’t even detect such issues due to a lack of knowledge. As they lack sufficient experience and rely more on AI to perform tasks (instead of learning themselves), building a reliable and safe system becomes more challenging.
I know these bugs can still occur even without AI. Yes. They will continue to happen, AI-generated or not. That’s why I consider AI tools assistive to humans, not a decisive authority in themselves. They supplement, not replace, humans.
We still need to educate ourselves, learn the fundamentals of the craft, and recognize our cognitive biases. We need to nudge ourselves to challenge both our own results and those of AI. And we still need mechanisms such as peer reviews, cross-validation (e.g., the four-eyes principle), and adversarial, critical testing.
So, the next time you use AI, remember that just because it produced a correct result before doesn’t mean it will the next time.
Good to Great
Good: The dawn of the post-literate society and the end of civilisation. Screens and falling literacy are eroding deep reading and critical thinking, threatening innovation, democratic debate and the future of our civilization. I still can’t believe how little we read…
Better: Written in 1899. A Message to Garcia is a wonderful piece. I added “carry a message to Garcia” to my dictionary.
Great: “Man cannot stand a meaningless life.” 38 mins with Carl Jung, the founder of analytical psychology, two years before his death. (On a side note, the interview style is very different from today’s interviews.)
Recently I wrote
I didn’t publish anything. I’m working on a few pieces, but couldn’t polish them enough to share here. Next time!
Until next time,
Candost
P.S. I don’t know how many times I mentioned this blog post in real life.