AI Regulation (EU AI Act, US, China & Beyond)
Governments worldwide are writing the rules for AI, but they're taking very different approaches. The EU's AI Act is the most comprehensive framework to date: it classifies AI systems by risk level, banning some uses outright (like social scoring and certain real-time surveillance) and imposing strict requirements on high-risk applications in hiring, healthcare, law enforcement, and critical infrastructure. Providers of high-risk systems must conduct conformity assessments, maintain technical documentation, and ensure human oversight.

The US has favoured a lighter touch - sector-specific guidance, executive orders, and voluntary commitments rather than a single overarching law - though state-level legislation is accelerating. China has introduced targeted regulations on algorithmic recommendation, deepfakes, and generative AI, with a focus on content control and platform accountability.

For businesses operating internationally, this creates genuine complexity: a system that's compliant in one jurisdiction may not be in another. Understanding where regulation is heading - not just where it is today - helps organisations build compliance into their AI development process from the start rather than retrofitting it later.