Automation Bias
When a computer suggests an answer, people tend to accept it - even when their own judgement or available evidence points in a different direction. This tendency is called automation bias, and it's one of the best-documented risks in human-AI interaction. It's not stupidity; it's a rational-seeming shortcut. Computers process more data than we can, so deferring to them feels sensible.

The problem is that AI systems can be confidently wrong in ways that would be obvious if you actually looked at the underlying information. A doctor who would catch a misdiagnosis from a colleague might accept the same wrong answer from a clinical decision support tool without question.

Automation bias is strongest when people are tired, rushed, or dealing with information overload - precisely the conditions under which AI tools are most often deployed. Combating it requires more than telling people to "stay vigilant." It means designing systems that actively prompt critical evaluation rather than encouraging passive acceptance.
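One way to make "prompt critical evaluation" concrete is a judgement-first flow: the tool records the reviewer's independent assessment before revealing its own suggestion, and disagreement forces an explicit, logged decision instead of a silent click-through. The sketch below is a minimal illustration of that pattern, not any real system's API; every name in it (Review, review_case, the prompt strings) is hypothetical.

```python
# Hypothetical sketch of a "judgement-first" review flow. The reviewer
# commits to their own assessment before the AI suggestion is shown,
# so the interface prompts critical evaluation rather than passive
# acceptance. All names are illustrative, not from a real library.

from dataclasses import dataclass


@dataclass
class Review:
    case_id: str
    human_assessment: str  # recorded before the AI output is revealed
    ai_suggestion: str
    agreed: bool


def review_case(case_id: str, ai_suggestion: str) -> Review:
    # Step 1: elicit the reviewer's independent judgement first.
    human = input(f"[{case_id}] Your assessment before seeing the AI: ").strip()

    # Step 2: only now reveal the AI suggestion, side by side with
    # the human assessment so differences are visible.
    print(f"AI suggestion:   {ai_suggestion}")
    print(f"Your assessment: {human}")

    # Step 3: if they differ, require an explicit choice, which can
    # then be logged and audited.
    agreed = human.lower() == ai_suggestion.lower()
    if not agreed:
        choice = input("They differ. Keep (y)ours or accept (a)i? ").strip().lower()
        print("Recorded:", "human override" if choice == "y" else "AI accepted")

    return Review(case_id, human, ai_suggestion, agreed)
```

The ordering is the design choice that matters here: once a suggestion is on screen, anchoring makes an independent assessment much harder to form, so the sketch simply refuses to show the AI output until the human has committed to one.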