Emotional Responses to AI Errors

When AI gets something wrong, the emotional response often outweighs the rational significance of the error. A chatbot that gives a bizarre answer becomes a screenshot shared across the organisation, undermining confidence in the entire system - even if it's accurate 95% of the time.

This asymmetry exists because AI errors feel different from human errors. When a colleague makes a mistake, you contextualise it: they were tired, the question was hard, they usually get it right. When AI makes a mistake, it triggers a category-level judgement: "this technology doesn't work." The emotional intensity scales with the stakes involved and the degree of trust that preceded the error. Someone who was enthusiastic about an AI tool feels personally let down when it fails, and that sense of betrayal is hard to reverse with statistics about average performance.

Organisations need to anticipate emotional responses to AI errors and plan for them - not by hiding mistakes, but by establishing realistic expectations upfront, creating channels for reporting and discussing errors constructively, and demonstrating that errors lead to improvements rather than being swept under the carpet.