Human Oversight Structures
Beyond individual human-in-the-loop interactions, organisations need broader oversight structures for their AI systems. This means defining who is responsible for monitoring system performance, who reviews edge cases, who decides when to update or retrain models, and who investigates when things go wrong. These aren't purely technical roles - they require a blend of domain expertise, technical understanding, and organisational authority.

Oversight structures need to match the risk level of what the AI is doing. A recommendation engine for blog posts might need only periodic performance review. An AI system influencing medical or financial decisions needs continuous monitoring, clear escalation paths, and dedicated oversight personnel.

The challenge is that many organisations deploy AI without building these structures, treating oversight as someone's additional responsibility rather than a primary role. When everyone is partly responsible for AI oversight, nobody is effectively responsible. Good oversight structures are explicit, resourced, and empowered - the people monitoring AI systems need the authority to pause or modify them when problems emerge, not just the ability to file a report that goes into a queue.
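The risk-tiered approach described above can be sketched as a simple policy table. This is a minimal illustration, assuming an organisation classifies each system into a risk tier; all names here (`RiskTier`, `OversightPolicy`, `POLICIES`) are hypothetical, not taken from any real framework.

```python
# Hypothetical sketch: oversight requirements scale with a system's risk tier.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"    # e.g. a blog-post recommendation engine
    HIGH = "high"  # e.g. a system influencing medical or financial decisions


@dataclass(frozen=True)
class OversightPolicy:
    review_cadence_days: int     # how often performance is formally reviewed
    continuous_monitoring: bool  # live dashboards and alerting required?
    dedicated_owner: bool        # a named oversight role, not a shared side duty
    can_pause_system: bool       # owner may halt the system, not just file a report


# Illustrative tiers: low-risk systems get periodic review; high-risk systems
# get continuous monitoring and a dedicated, empowered owner.
POLICIES = {
    RiskTier.LOW: OversightPolicy(
        review_cadence_days=90,
        continuous_monitoring=False,
        dedicated_owner=False,
        can_pause_system=True,
    ),
    RiskTier.HIGH: OversightPolicy(
        review_cadence_days=7,
        continuous_monitoring=True,
        dedicated_owner=True,
        can_pause_system=True,
    ),
}


def required_policy(tier: RiskTier) -> OversightPolicy:
    """Look up the minimum oversight requirements for a given risk tier."""
    return POLICIES[tier]
```

Note that `can_pause_system` is true for every tier: whatever the risk level, the person doing the monitoring should hold the authority to act, which is the empowerment point made above.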