Security & Privacy
AI systems introduce novel security and privacy challenges that traditional cybersecurity approaches don't fully address. A conventional application processes data according to explicit rules; an AI system has learned its behaviour from data in ways that are difficult to fully characterise or predict. This creates new attack surfaces and new categories of vulnerability. Models can be tricked into producing harmful outputs through carefully crafted inputs. Training data can be extracted or inferred from model behaviour. The supply chain of pretrained models, datasets, and libraries introduces dependencies that are harder to audit than traditional software. At the same time, AI is also a powerful tool for attackers - generating convincing phishing emails, automating vulnerability discovery, and creating deepfakes. The security community is still developing frameworks and best practices for AI-specific threats, and the field is evolving quickly as both attackers and defenders adapt. For organisations deploying AI, the key message is that AI security requires specific attention beyond your existing security posture. The threats are different, the mitigations are different, and the expertise required is specialised.