AI Risk Management Frameworks

AI risk management extends traditional risk frameworks to cover the distinctive ways AI systems can fail or cause harm. These include model performance degradation over time, biased outputs that discriminate against particular groups, security vulnerabilities specific to AI (such as adversarial attacks or data poisoning), over-reliance on AI for decisions that need human judgement, and reputational damage from AI that behaves in unexpected ways.

A good AI risk framework categorises these risks, assesses their likelihood and potential impact for each use case, and defines proportionate mitigation measures. Not every AI application carries the same risk: a system that recommends blog posts needs lighter governance than one that influences lending decisions.

Risk assessment should happen before deployment and continue throughout the system's life, because AI risks change as models drift, user behaviour evolves, and the operating environment shifts. The framework should also define clear escalation paths for when things go wrong, because AI incidents often require rapid response that doesn't fit neatly into standard incident management processes.
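One way to make "assess likelihood and impact, then apply proportionate governance" concrete is a simple scoring sketch. The category names, score thresholds, and tier labels below are illustrative assumptions for this example, not part of any standard framework:

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical risk categories, drawn from the failure modes above
RISK_CATEGORIES = [
    "performance_drift", "bias", "security",
    "over_reliance", "reputational",
]

def risk_score(likelihood: Level, impact: Level) -> int:
    """Classic likelihood x impact product, giving scores from 1 to 9."""
    return int(likelihood) * int(impact)

def governance_tier(scores: dict[str, int]) -> str:
    """Map the worst-case category score to a governance tier.
    Thresholds here are arbitrary illustrative choices."""
    worst = max(scores.values())
    if worst >= 6:
        return "heavy"     # e.g. human review, audits, approval gates
    if worst >= 3:
        return "standard"  # e.g. periodic monitoring and review
    return "light"         # e.g. basic logging only

# A blog-post recommender: low likelihood, low impact across the board
blog_recs = {c: risk_score(Level.LOW, Level.LOW) for c in RISK_CATEGORIES}

# A lending-decision model: same baseline, but bias risk is elevated
lending = {**blog_recs, "bias": risk_score(Level.MEDIUM, Level.HIGH)}

print(governance_tier(blog_recs))  # light
print(governance_tier(lending))    # heavy
```

Because risks change over the system's life, such scores would be re-assessed on a schedule rather than computed once at deployment.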