Confirmation Bias & AI
Confirmation bias - the tendency to seek out and favour information that supports what you already believe - is one of the most pervasive human cognitive patterns, and AI tools can supercharge it. Ask a large language model to build a case for something you already think, and it will oblige convincingly, complete with plausible-sounding evidence. Ask it for a balanced view, and you'll still likely give more weight to the parts that align with your existing position. The danger is that AI creates the illusion of thorough research when you've actually just found a more sophisticated way to confirm your priors.

This plays out in business constantly: a leader who believes a market opportunity exists asks an AI tool to analyse it, interprets supportive outputs as validation, and filters out the caveats. The AI didn't create the bias, but it supplied the raw material to feed it.

Counteracting this takes discipline: actively prompting the AI to argue against your position, seeking disconfirming evidence, and treating AI-generated analysis with the same scepticism you'd apply to a consultant who's telling you what you want to hear.