Liability Frameworks for AI Harm
When an AI system causes harm, such as a misdiagnosis, a discriminatory lending decision, or an autonomous vehicle accident, who is responsible? The developer who built the model, the company that deployed it, the user who relied on its output, or the data providers whose information shaped its behaviour? Existing legal frameworks were not designed for this question, and they are struggling to provide clear answers.

Product liability law traditionally applies to physical goods with identifiable defects. Negligence law requires demonstrating that someone failed to exercise reasonable care. Neither maps neatly onto a system that learns from data, behaves probabilistically, and may produce harmful outputs in ways that no one, including its creators, can fully predict or explain.

Different jurisdictions are taking different approaches. The EU's proposed AI Liability Directive would ease the burden of proof for claimants, including through rebuttable presumptions linking a provider's fault to the harmful output, making it easier for those harmed to claim compensation. The UK is taking a more sector-specific approach, adapting existing regulatory frameworks rather than creating new AI-specific liability rules.

For your organisation, the practical implications are significant: you need to understand who bears responsibility for your AI systems' outputs, ensure you have appropriate insurance cover, and maintain documentation that demonstrates you exercised reasonable care in development and deployment.