Building Appropriate Reliance

Appropriate reliance isn't a fixed state - it's an ongoing negotiation between human judgement and system capability. Building it requires deliberate effort across multiple fronts:

1. Transparency: people need honest information about what a system can and can't do, presented in terms they actually understand.
2. Experience: structured exposure to both successes and failures helps users develop realistic mental models.
3. Control: people trust systems more - and more appropriately - when they feel they can intervene, override, or adjust.
4. Accountability: someone needs to remain responsible for outcomes, which means designing workflows where AI assists decisions rather than making them unilaterally.
5. Feedback: users need to know when the system was right and when it was wrong, so their trust can update based on evidence rather than assumptions.

None of this happens automatically. Left to their own devices, people will settle into whatever trust level feels comfortable, which is rarely the level that produces the best outcomes. Appropriate reliance is a design objective, not a natural equilibrium.