Data Centre Efficiency & Cooling
Data centres have become significantly more efficient over the past decade, but AI workloads are pushing them in new directions. Traditional data centres, which run mostly CPUs for web hosting and general-purpose computing, have benefited from advances in cooling, power distribution, and server design. Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power, has fallen from around 2.0 in the early 2000s to below 1.2 in the most efficient modern facilities.

AI workloads are different. GPUs generate far more heat per unit of rack space than CPUs, producing intense localised hot spots that traditional air cooling struggles to handle. This has driven rapid adoption of liquid cooling, either piping coolant directly to cold plates on GPU modules or immersing entire servers in non-conductive fluid. NVIDIA's latest data centre GPUs are designed for liquid cooling, and new data centres built for AI workloads typically incorporate it from the ground up.

Location matters too. Cooler climates reduce cooling costs. Access to renewable energy reduces carbon emissions. Proximity to water sources enables certain cooling approaches but raises questions about water consumption. The trend toward hyperscale AI data centres, massive facilities purpose-built for GPU-intensive workloads, is reshaping where and how these facilities are built, with implications for local communities, power grids, and water resources.
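Because PUE is just a ratio, a short sketch of the arithmetic can make the figures above concrete. The function names and the power numbers below are illustrative assumptions, not measurements from any particular facility:

```python
# Minimal sketch: computing Power Usage Effectiveness (PUE).
# All figures are hypothetical, chosen to match the ~2.0 vs ~1.2 range
# discussed above.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

def overhead_kw(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power spent on cooling, distribution losses, lighting, etc."""
    return total_facility_kw - it_equipment_kw

if __name__ == "__main__":
    # Hypothetical 10 MW of IT load under two facility designs.
    it_load_kw = 10_000.0
    legacy_total_kw = 20_000.0      # PUE ~ 2.0, early-2000s-style facility
    efficient_total_kw = 11_500.0   # PUE ~ 1.15, efficient modern facility

    for label, total in [("legacy", legacy_total_kw),
                         ("efficient", efficient_total_kw)]:
        print(f"{label}: PUE = {pue(total, it_load_kw):.2f}, "
              f"overhead = {overhead_kw(total, it_load_kw):,.0f} kW")
```

At the same IT load, the hypothetical legacy facility spends roughly 10,000 kW on overhead while the efficient one spends about 1,500 kW, which is why the drop from a PUE of 2.0 to one near 1.2 matters so much at scale.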