The Right to Human Decision-Making

When an AI system denies your loan application, flags you for additional security screening, or recommends against your parole, do you have the right to have a human review that decision? This question sits at the heart of a growing legal and ethical debate. The EU's GDPR (Article 22) gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The EU AI Act adds further requirements for human oversight of high-risk systems.

But the practical reality often falls short. "Human in the loop" can mean a human rubber-stamping algorithmic recommendations without meaningful review, a failure mode researchers attribute to "automation bias": the tendency to over-trust automated output. A reviewer who sees the AI's recommendation before forming their own judgment is anchored by it. True human oversight requires the time, training, and authority to genuinely evaluate and override automated decisions.

For businesses deploying AI in consequential decision-making, the right to human decision-making isn't just a legal checkbox. It requires designing systems where human review is meaningful, where reviewers have the information and authority to disagree with the algorithm, and where the process is accessible to the people affected by the decision.
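One way a system designer might counter the anchoring problem described above is to withhold the model's recommendation until the reviewer has recorded an independent judgment, and to log every override for audit. The sketch below is a minimal, hypothetical illustration of that workflow; the `ReviewCase` class, its method names, and the audit-record fields are all invented for this example, not taken from any real compliance framework.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Decision(Enum):
    APPROVE = "approve"
    DENY = "deny"


@dataclass
class ReviewCase:
    """Hypothetical human-review workflow for one automated decision.

    The model's recommendation stays hidden until the reviewer records
    an independent judgment, reducing the anchoring effect of seeing
    the algorithm's output first.
    """
    case_id: str
    model_recommendation: Decision
    reviewer_decision: Optional[Decision] = None
    recommendation_revealed: bool = field(default=False)

    def record_independent_decision(self, decision: Decision) -> None:
        # Enforce ordering: the reviewer commits before seeing the model.
        if self.recommendation_revealed:
            raise RuntimeError("Independent judgment must precede reveal.")
        self.reviewer_decision = decision

    def reveal_recommendation(self) -> Decision:
        if self.reviewer_decision is None:
            raise RuntimeError("Record an independent judgment first.")
        self.recommendation_revealed = True
        return self.model_recommendation

    def finalize(self, decision: Decision) -> dict:
        # The reviewer may confirm or override after the reveal;
        # disagreement is recorded for audit, never blocked.
        if not self.recommendation_revealed:
            raise RuntimeError("Reveal the recommendation before finalizing.")
        return {
            "case_id": self.case_id,
            "final": decision.value,
            "model": self.model_recommendation.value,
            "overridden": decision != self.model_recommendation,
        }
```

The key design choice is that the ordering constraints are enforced by the object itself, so "meaningful review" becomes a property of the system rather than a policy reviewers are merely asked to follow.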