Deepfakes & Synthetic Media
Deepfakes - AI-generated video or audio that convincingly depicts real people saying or doing things they never did - have moved from novelty to genuine threat. The technology has become accessible enough that creating a convincing deepfake no longer requires significant technical expertise or resources. A few seconds of someone's voice is enough to clone it convincingly, and a handful of photos can generate realistic video of a person's face.

The most immediate harms are personal: non-consensual intimate imagery (overwhelmingly targeting women), identity fraud, and reputational attacks. In the public sphere, deepfakes of political figures have appeared in elections worldwide, and audio deepfakes have been used in business fraud - a CFO's voice cloned from public recordings to authorise wire transfers.

Legal frameworks are playing catch-up. Several jurisdictions have introduced or strengthened laws against non-consensual deepfakes, and regulations requiring disclosure of synthetic content are expanding. Technical countermeasures include watermarking generated content, cryptographic provenance tracking, and detection models - though the detection arms race favours generators.

For organisations, deepfake risk is now a standard consideration in fraud prevention, communications strategy, and executive protection. Verifying the authenticity of audio and video communications is becoming as important as verifying email.
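To make the provenance idea concrete, here is a minimal sketch of tag-then-verify authentication over media bytes. It is an illustration only: the key, function names, and sample bytes are hypothetical, and it uses a shared-secret HMAC for simplicity, whereas real provenance systems (such as C2PA-style manifests) use public-key signatures so anyone can verify without holding the signing key.

```python
import hashlib
import hmac

# Hypothetical shared secret; real systems would use an asymmetric key pair.
SECRET_KEY = b"publisher-signing-key"

def tag_media(media_bytes: bytes) -> str:
    """Produce a provenance tag bound to the exact media bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag; any edit to the media invalidates it."""
    expected = hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x00\x01 raw audio frames"  # stand-in for real media content
tag = tag_media(original)

print(verify_media(original, tag))         # authentic copy -> True
print(verify_media(original + b"x", tag))  # tampered copy -> False
```

The design point this illustrates is that provenance binds a claim of origin to the exact bits of the file: even a one-byte edit breaks verification, which is what makes it useful against tampered or wholly synthetic media presented as genuine.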