Overtrust

Overtrust goes beyond automation bias into territory where people attribute capabilities to AI that it simply doesn't have. You see it when someone uses a large language model to check legal contracts, trusting it to catch every clause that matters, or when a team relies on AI-generated financial projections without verifying the underlying assumptions. Overtrust is often fuelled by impressive demonstrations - if a tool can write a convincing essay, surely it can also reason about complex strategy? In reality, surface-level fluency tells you almost nothing about underlying accuracy.

Overtrust is especially dangerous because it tends to be invisible: nobody notices they're over-relying on a tool until it produces a costly error, and by then the processes and skills that would have caught the problem have often atrophied. The antidote isn't distrust - it's structured verification, clear communication about what the system can and cannot do, and a culture where checking AI outputs is standard practice rather than an afterthought.
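
To make "structured verification" concrete, here is a minimal Python sketch - all names and figures are hypothetical, and it is one possible pattern rather than a prescribed method. The idea is to treat an AI output not as an answer but as a set of checkable claims, each compared against an independently derived reference before anything is accepted:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A single checkable statement extracted from an AI output."""
    description: str
    claimed_value: float

def verify_claims(claims: list[Claim], reference: dict[str, float],
                  tolerance: float = 0.01) -> list[str]:
    """Compare each AI-generated claim against an independent reference.

    Returns human-readable discrepancy reports; an empty list means
    every claim matched the reference within the tolerance.
    """
    discrepancies = []
    for claim in claims:
        expected = reference.get(claim.description)
        if expected is None:
            # A claim with no reference figure is flagged, not assumed correct.
            discrepancies.append(
                f"UNVERIFIABLE: no reference figure for '{claim.description}'")
        elif abs(claim.claimed_value - expected) > tolerance * abs(expected):
            discrepancies.append(
                f"MISMATCH: '{claim.description}' claimed "
                f"{claim.claimed_value}, reference says {expected}")
    return discrepancies

# Figures lifted from an AI-generated projection (hypothetical values).
ai_claims = [
    Claim("2025 revenue growth (%)", 14.0),
    Claim("2025 gross margin (%)", 62.0),
]

# Computed independently from the underlying spreadsheet.
reference_figures = {
    "2025 revenue growth (%)": 11.5,
    "2025 gross margin (%)": 62.0,
}

for report in verify_claims(ai_claims, reference_figures):
    print(report)  # flags the growth figure for human review
```

The mechanics of extracting claims will vary by domain; the point of the pattern is that acceptance is gated on an independent check, so a fluent-sounding output that can't be verified is surfaced for human review rather than silently trusted.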