Compute Access & Democratisation

The enormous compute requirements of frontier AI create a natural concentration of power among well-funded companies and institutions. Training a state-of-the-art large language model is simply out of reach for most organisations, universities, and governments. This raises important questions about who gets to shape AI development and who gets left behind.

Several initiatives aim to broaden access. Cloud providers offer GPU instances that let organisations rent rather than buy hardware. Programmes like Google's TPU Research Cloud provide free compute to academic researchers. Government-funded supercomputing centres are increasingly allocating capacity for AI research. And the open-source movement means that while you might not be able to afford to train a frontier model, you can often fine-tune or deploy one that someone else has trained.

Still, the gap between the compute haves and have-nots is significant and growing. The largest AI labs spend billions annually on compute, while a typical university research group commands a tiny fraction of that budget. This asymmetry affects what research gets done, which problems get addressed, and whose perspectives influence AI development. Efforts to democratise compute access are valuable but face a fundamental economic challenge: these resources are genuinely expensive to provide, and demand consistently outstrips supply.
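The rent-versus-buy trade-off behind cloud GPU access can be made concrete with a simple breakeven calculation. The sketch below uses purely illustrative figures, not quoted market prices, and the function name and overhead parameter are assumptions for the sake of the example.

```python
# Hedged sketch: rent-vs-buy breakeven for GPU compute.
# All dollar figures are illustrative assumptions, not real market rates.

def breakeven_hours(purchase_cost: float, hourly_rate: float,
                    ownership_overhead_per_hour: float = 0.0) -> float:
    """Hours of use at which buying hardware matches renting it,
    given a cloud hourly rate and a per-hour ownership overhead
    (power, cooling, maintenance)."""
    effective_saving = hourly_rate - ownership_overhead_per_hour
    if effective_saving <= 0:
        raise ValueError("renting is always cheaper under these assumptions")
    return purchase_cost / effective_saving

# Illustrative numbers: a $30,000 accelerator vs. a $4/hour cloud
# instance, with $0.50/hour of ownership overhead.
hours = breakeven_hours(30_000, 4.0, 0.50)
print(f"Breakeven after ~{hours:,.0f} GPU-hours")  # ~8,571 hours, roughly a year of continuous use
```

The point of the arithmetic is that renting only loses to buying after sustained, near-continuous utilisation, which is why occasional or bursty academic workloads tend to favour cloud access while frontier labs running hardware around the clock buy their own.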