Liability for AI-Assisted Decisions
When an AI system contributes to a decision that causes harm - a wrong medical recommendation, a biased hiring decision, an incorrect financial assessment - the question of who's responsible is genuinely complex. Is it the organisation that deployed the AI? The vendor that built it? The human who relied on its output? The answer depends on the specific circumstances, the jurisdiction, and the contractual arrangements between the parties, but the direction of travel in most legal systems is that the organisation deploying the AI bears primary responsibility for the outcomes, even when the AI was supplied by a third party.

This means you can't outsource accountability by buying AI from a vendor - you're still responsible for how it's used and what it produces. The practical implications: ensure human oversight of high-stakes decisions, document why and how AI was involved in each decision, maintain the ability to explain AI-influenced decisions to affected parties, carry appropriate insurance, and make sure your vendor contracts include meaningful indemnification provisions.

The standard of care expected of deployers will likely rise as AI best practices become more established, so building good governance now is an investment in future legal defensibility.
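To make the documentation point concrete, here is a minimal sketch of what an audit record for an AI-assisted decision might capture, assuming a Python-based service. The field names (model_version, human_reviewer, override_reason, and so on) are illustrative assumptions, not a legal or regulatory standard; the point is recording enough to reconstruct who decided what, with which system, and why.

```python
# Minimal sketch of an audit record for AI-assisted decisions.
# Field names are illustrative assumptions, not a legal standard.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    decision_id: str
    decision_type: str        # e.g. "credit_assessment", "cv_screening"
    model_name: str           # which system produced the recommendation
    model_version: str        # pin the exact version for reproducibility
    model_output: str         # the recommendation as shown to the human
    human_reviewer: str       # who exercised oversight, if anyone
    human_accepted: bool      # did the reviewer follow the AI's output?
    override_reason: str = ""  # documented rationale when they did not
    explanation: str = ""      # plain-language account for affected parties
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: AIDecisionRecord) -> str:
    """Serialise the record; in practice this would go to append-only storage."""
    return json.dumps(asdict(record), indent=2)


if __name__ == "__main__":
    record = AIDecisionRecord(
        decision_id="dec-0001",
        decision_type="cv_screening",
        model_name="vendor-screening-model",
        model_version="2.3.1",
        model_output="recommend_reject",
        human_reviewer="j.smith",
        human_accepted=False,
        override_reason="Candidate meets the essential criteria; the flagged "
                        "employment gap is explained in the CV.",
        explanation="A human reviewer assessed the application and overrode "
                    "the automated recommendation.",
    )
    print(log_decision(record))
```

A record like this serves three of the obligations above at once: it documents how the AI was involved, it evidences human oversight (including overrides), and it gives you something to hand to an affected party or a regulator when asked to explain the decision.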