Undertrust & Algorithmic Aversion

On the other side of the spectrum, some people refuse to use AI tools even when those tools demonstrably outperform human judgement. Algorithmic aversion, the tendency to abandon or avoid an algorithm after seeing it make even a single mistake, is remarkably common. People hold AI to a different standard than they hold humans: a human expert who occasionally gets things wrong is seen as normal and forgivable, while an algorithm that makes the same mistake is seen as fundamentally broken.

This asymmetry means that perfectly good AI tools get shelved after a bad first impression, or are never adopted because someone heard about a failure elsewhere. Undertrust is costly in its own way: organisations invest in capable systems that gather dust because the people who should use them don't believe in them.

Addressing algorithmic aversion means giving users some control; research consistently shows that people trust algorithms more when they can adjust or override them, even slightly. It also means being honest about errors up front, rather than letting users discover them in high-stakes moments.
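The "adjust or override" idea can be made concrete as a bounded-adjustment pattern: the user may nudge the model's output, but only within a cap, so the sense of control comes cheap in accuracy terms. A minimal Python sketch (the function name and the 10% cap are illustrative choices, not drawn from any particular study):

```python
def adjusted_forecast(model_value: float, user_delta: float,
                      max_adjust: float = 0.10) -> float:
    """Combine a model forecast with a bounded user override.

    The user may shift the model's output by at most +/- max_adjust
    (a fraction of the model value). The clamp preserves most of the
    algorithm's accuracy while still giving the user real control.
    """
    bound = abs(model_value) * max_adjust
    clamped_delta = max(-bound, min(bound, user_delta))
    return model_value + clamped_delta


# A small nudge passes through; a large one is capped at 10%.
print(adjusted_forecast(100.0, 5.0))   # 105.0
print(adjusted_forecast(100.0, 50.0))  # 110.0
```

The design choice worth noting is the cap itself: unrestricted overrides let users quietly revert to pure human judgement, while a modest bound keeps the algorithm's advantage largely intact.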