Compute & Hardware
AI's recent breakthroughs wouldn't have been possible without a massive increase in available computing power. Training a large language model requires thousands of specialised processors running for weeks or months and consuming megawatts of electricity, and even running a trained model to answer a single query takes significantly more compute than a traditional web search.

This appetite for processing power has reshaped the semiconductor industry, created new supply chain pressures, and made hardware access a strategic concern for governments and corporations alike. The compute landscape is dominated by a small number of companies - most notably NVIDIA - but competition is intensifying as cloud providers, startups, and nation-states invest in alternative chip designs.

The cost of compute is one of the biggest practical constraints on AI development, influencing everything from which organisations can afford to train frontier models to whether an AI feature is economically viable in a consumer product. Understanding the hardware layer isn't just for engineers - it shapes what's possible, what it costs, and who has the power to build and deploy AI at scale.
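To give a rough sense of what "thousands of processors for weeks or months" means in practice, the sketch below estimates a training run using the widely quoted approximation of about 6 FLOPs per parameter per training token. Every number in it - model size, token count, per-GPU throughput, utilisation, power draw, cluster size, and rental price - is an illustrative assumption, not a measured figure for any particular model or chip.

```python
# Back-of-envelope estimate of LLM training compute, time, energy, and cost.
# All constants below are illustrative assumptions, not vendor specifications.

PARAMS = 70e9            # model size: 70 billion parameters (assumed)
TOKENS = 2e12            # training data: 2 trillion tokens (assumed)
FLOPS_PER_GPU = 3e14     # ~300 TFLOP/s peak per accelerator (assumed)
UTILISATION = 0.4        # realistic fraction of peak throughput (assumed)
POWER_PER_GPU_KW = 1.0   # accelerator plus its share of cooling/networking (assumed)
COST_PER_GPU_HOUR = 2.0  # cloud rental price in USD (assumed)
N_GPUS = 4096            # cluster size (assumed)

# Widely used approximation: training takes ~6 FLOPs per parameter per token.
total_flops = 6 * PARAMS * TOKENS

gpu_seconds = total_flops / (FLOPS_PER_GPU * UTILISATION)
gpu_hours = gpu_seconds / 3600

days = gpu_hours / N_GPUS / 24
energy_mwh = gpu_hours * POWER_PER_GPU_KW / 1000
cost_usd = gpu_hours * COST_PER_GPU_HOUR

print(f"Total training compute: {total_flops:.2e} FLOPs")
print(f"GPU-hours needed:       {gpu_hours:,.0f}")
print(f"Wall-clock on {N_GPUS} GPUs: ~{days:.0f} days")
print(f"Electricity:            ~{energy_mwh:,.0f} MWh")
print(f"Rental cost:            ~${cost_usd:,.0f}")
```

With these assumed numbers the run works out to roughly two million GPU-hours: a few weeks on a cluster of several thousand accelerators, on the order of a couple of gigawatt-hours of electricity, and millions of dollars in rental cost. Changing any assumption shifts the totals, but the orders of magnitude illustrate why compute cost dominates so many practical decisions about who can train frontier models.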