Training & Optimisation
Training is where an AI model actually learns - the process of adjusting millions or billions of internal parameters until the model produces useful outputs. It's also where most of the cost, time, and energy goes. Training a frontier model can cost tens or hundreds of millions of pounds in computing resources alone, take months, and consume as much energy as a small town. Understanding the basics of how training works helps you appreciate both why these models are as capable as they are and why they have the specific limitations they do.

The core loop is deceptively simple: show the model some data, compare its output to the desired result, calculate how wrong it was, and nudge its parameters slightly in a better direction. Repeat this billions of times. The art and science lie in the details - how you measure "wrong," how you nudge the parameters, how you prevent the model from memorising examples rather than learning general patterns, and how you distribute this enormous computation across thousands of processors. These choices during training determine what the model is good at, what it struggles with, and where it might fail unexpectedly.
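The core loop can be sketched at toy scale. This is a minimal illustration, not how a frontier model is trained: a single-parameter linear model fitted by gradient descent on mean squared error, with illustrative data, learning rate, and step count chosen for this example.

```python
import numpy as np

# Toy data: learn y = 3x from noisy examples (all values illustrative).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(scale=0.05, size=100)

w = 0.0    # the single "parameter", started at a poor guess
lr = 0.1   # learning rate: how big each nudge is

for step in range(500):
    pred = w * x                   # 1. show the model the data
    error = pred - y               # 2. compare output to the desired result
    loss = np.mean(error ** 2)     # 3. measure how wrong it was (MSE)
    grad = np.mean(2 * error * x)  # 4. direction that would reduce the loss
    w -= lr * grad                 # 5. nudge the parameter slightly

print(w)  # after many repeats, w ends up close to the true value of 3
```

A real training run follows the same shape, but with billions of parameters, gradients computed by backpropagation through many layers, and the loop distributed across thousands of processors.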