How People Perceive AI

People don't interact with AI as a blank slate. We bring expectations, assumptions, and deeply human tendencies to every encounter. We anthropomorphise - attributing feelings, intentions, and understanding to systems that have none of these things. We form mental models of how AI works that are often wrong, imagining it "looks things up" or "remembers" conversations when it does neither. We can become emotionally attached to chatbots, or write off capable tools entirely because of a single bad experience.

These aren't quirks to be dismissed - they fundamentally shape how AI gets used, misused, or abandoned in practice. A product that works perfectly on a technical level can still fail because people don't understand what it's doing, trust it in the wrong situations, or expect things it was never designed to deliver. How people actually perceive AI, rather than how we assume they do, matters enormously for anyone building, buying, or deploying these systems.