Anchoring & Framing Effects

The first piece of information you encounter on a topic disproportionately shapes your subsequent judgement. This is anchoring, and AI outputs are extraordinarily effective anchors. When an AI system suggests a price, an estimate, a risk score, or a recommendation, that number or suggestion becomes the starting point around which all further thinking orbits - even if the AI's output was essentially arbitrary.

Framing works similarly: how AI presents information changes how you evaluate it. The same data framed as "90% success rate" feels very different from "10% failure rate," and AI systems make framing choices constantly in how they structure their responses.

For decision-makers, this means the order in which AI presents options matters, the specificity of its suggestions matters, and whether it leads with positives or negatives matters - often more than the underlying quality of the analysis.

Being aware of these effects is a start, but awareness alone isn't enough. Practical defences include generating your own estimates before consulting AI, asking for information in multiple framings, and treating AI-generated numbers as hypotheses to be tested rather than facts to be built upon.
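Two of these defences lend themselves to a simple pre-commitment habit. The sketch below is illustrative only - the function names are hypothetical, not from any real tool - and shows both ideas: rendering the same statistic in its positive and negative framings, and recording your own estimate before comparing it to an AI's suggestion.

```python
def both_framings(success_rate: float) -> str:
    """Render the same statistic in both framings to blunt framing effects."""
    failure_rate = 1.0 - success_rate
    return (f"{success_rate:.0%} success rate "
            f"(equivalently, {failure_rate:.0%} failure rate)")

def compare_estimates(own_estimate: float, ai_estimate: float) -> dict:
    """Compare a pre-committed estimate against an AI's suggestion.

    Writing down your own number first limits how far the AI's
    anchor can pull your final judgement.
    """
    gap = ai_estimate - own_estimate
    return {
        "own": own_estimate,
        "ai": ai_estimate,
        "gap": gap,
        "relative_gap": gap / own_estimate if own_estimate else float("inf"),
    }

print(both_framings(0.9))
# Record your estimate before looking at the AI's, then inspect the gap.
print(compare_estimates(own_estimate=120.0, ai_estimate=180.0))
```

The point of the second helper is the ordering, not the arithmetic: a large relative gap is a prompt to investigate why the AI's number differs, rather than to silently adopt it.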