Maintaining Human Agency & Autonomy
As AI systems become more capable and more embedded in workflows, there's a genuine risk that humans gradually lose meaningful control over their own work and decisions. This isn't about dramatic robot takeover scenarios - it's about the slow erosion of agency that happens when AI increasingly shapes what you see, what options you consider, and what decisions seem natural.

When an AI system pre-filters information, it decides what's relevant. When it generates recommendations, it frames the decision space. When it handles routine tasks automatically, it defines what counts as routine. Over time, the human role can shrink from "decision-maker assisted by AI" to "supervisor who approves AI decisions" to "person whose presence is required for compliance but whose input doesn't meaningfully affect outcomes."

Preserving genuine human agency requires active design choices: ensuring people can access raw information, not just AI-filtered summaries; creating spaces for human judgement that aren't pre-empted by AI recommendations; maintaining skills and knowledge that enable independent action; and regularly questioning whether the human role in a workflow is substantive or ceremonial.
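One of those design choices - creating spaces for human judgement that aren't pre-empted by AI recommendations - can be made concrete in software. The sketch below shows a hypothetical "judgment-first" review queue: the AI recommendation is precomputed but stays hidden until the reviewer records their own independent assessment, so the human view isn't anchored by the machine's framing. All names here (ReviewItem, JudgmentFirstQueue) are illustrative assumptions, not a real API.

```python
# Hypothetical sketch of a judgment-first workflow: the human must commit
# their own assessment before the AI's recommendation is revealed.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReviewItem:
    raw_content: str               # unfiltered source material, always accessible
    ai_recommendation: str         # precomputed, but hidden until the human commits
    human_judgment: Optional[str] = None


class JudgmentFirstQueue:
    def __init__(self) -> None:
        self.items: list[ReviewItem] = []

    def add(self, item: ReviewItem) -> None:
        self.items.append(item)

    def record_judgment(self, item: ReviewItem, judgment: str) -> str:
        """Store the human's independent judgment, then reveal the AI view."""
        item.human_judgment = judgment
        return item.ai_recommendation

    def reveal(self, item: ReviewItem) -> str:
        # Guard: the recommendation stays hidden until a judgment exists.
        if item.human_judgment is None:
            raise PermissionError("record your own judgment before viewing the AI's")
        return item.ai_recommendation


queue = JudgmentFirstQueue()
item = ReviewItem(raw_content="Q3 incident report ...", ai_recommendation="approve")
queue.add(item)
# queue.reveal(item) here would raise PermissionError: no human judgment yet.
ai_view = queue.record_judgment(item, "needs follow-up")
print(ai_view)  # the AI's framing arrives only after the human has committed
```

The ordering constraint is the whole point of the design: by making the independent judgment a precondition rather than an optional extra, the workflow keeps the human role substantive instead of ceremonial.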