Anthropomorphism & the Eliza Effect

Humans have a deep-seated tendency to see human qualities in non-human things - we name our cars, talk to our pets as though they understand every word, and feel genuine warmth toward cartoon characters. AI triggers this instinct powerfully. When a chatbot responds with fluent, empathetic-sounding language, it's almost impossible not to feel that something is "understanding" you. This tendency is called anthropomorphism, and the specific version triggered by conversational AI is known as the Eliza effect, named after ELIZA, a simple 1960s chatbot whose users became emotionally attached to it despite knowing it followed basic pattern-matching rules.

For businesses, this matters because anthropomorphism distorts how people evaluate AI. Users may trust systems more than is warranted, feel betrayed when mistakes happen, or resist switching away from tools they've formed an attachment to. Teams designing or deploying AI need to recognise this tendency and make deliberate choices about how human-like their systems should feel - acknowledging the effect rather than either exploiting or ignoring it.
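To make concrete how little machinery sat behind that attachment, here is a minimal, illustrative sketch of ELIZA-style pattern matching in Python. The rules and phrasings are invented for this example rather than taken from Weizenbaum's original script, but the mechanism - match a keyword pattern and reflect the user's own words back as a question - is the same one users found so compelling.

```python
import re

# ELIZA-style rules: a regex that matches part of the user's input,
# paired with a response template that reflects the captured text back.
# These rules are illustrative, not Weizenbaum's original DOCTOR script.
RULES = [
    (re.compile(r"\bi need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
FALLBACK = "Please, go on."  # used when no rule matches


def respond(user_input: str) -> str:
    """Return the first matching rule's reflected response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Strip trailing punctuation so the reflection reads as a question.
            fragment = match.group(1).rstrip(".!?")
            return template.format(fragment)
    return FALLBACK


if __name__ == "__main__":
    print(respond("I am worried about the project deadline."))
    # -> "How long have you been worried about the project deadline?"
```

Even a handful of rules like these can produce exchanges that feel attentive, which is precisely why the effect is so easy to trigger and so easy to underestimate.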