The Right to Explanation
Several regulations, most notably the EU's GDPR and the AI Act, establish some form of right for individuals to receive an explanation when they're subject to automated decision-making. The practical meaning of this right is still being worked out. Does "explanation" mean a general description of how the system works, or a specific account of why this particular decision was made about this particular person? How detailed does it need to be? Does it need to be technically accurate, or just comprehensible?

These questions matter because they determine whether the right to explanation is a meaningful protection or a box-ticking exercise. A generic statement like "your application was assessed by our AI system based on multiple factors" technically provides an explanation but doesn't help anyone understand or challenge the decision. Meanwhile, providing a truly specific explanation risks revealing proprietary model details or enabling people to game the system. Organisations subject to these requirements need to think seriously about what meaningful explanation looks like for their specific use cases, rather than waiting for regulators to define it for them.
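To make the generic-versus-specific distinction concrete, here is a minimal sketch of what a decision-specific explanation could look like for a toy linear scoring model. The feature names, weights, and threshold are entirely invented for illustration; real systems are far more complex, and this is one possible shape for an explanation, not a legally sufficient one.

```python
# Hypothetical sketch: a per-decision explanation for a toy linear
# credit-scoring model. All names, weights, and the threshold are
# invented for illustration.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
THRESHOLD = 0.5

def decide_and_explain(applicant: dict) -> dict:
    # Each feature's contribution is weight * value; the score is their sum.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Rank features by how strongly they pushed the decision either way,
    # so the applicant gets a decision-specific account rather than
    # "assessed based on multiple factors".
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    return {
        "approved": score >= THRESHOLD,
        "main_factors": [
            {"feature": f,
             "effect": "raised" if c > 0 else "lowered",
             "contribution": round(c, 3)}
            for f, c in ranked
        ],
    }

result = decide_and_explain(
    {"income": 0.8, "debt_ratio": 0.9, "years_employed": 0.3})
```

For this applicant the explanation would identify the high debt ratio as the dominant factor lowering the score, which is precisely the kind of detail that lets a decision be understood and challenged, and also the kind of detail that exposes how the model weighs its inputs.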