Federated Learning
Federated learning is an approach where a model is trained across multiple devices or organisations without the raw data ever leaving its source. Instead of collecting all data in one place, the model is sent to where the data lives, trained locally, and only the model updates, not the data, are shared and aggregated.

Google popularised this approach for improving keyboard prediction on Android phones. Each phone trains on the user's typing patterns locally, sends compressed model updates to a central server, and receives an improved model back. The raw text never leaves the device.

The appeal is obvious for privacy-sensitive applications: healthcare organisations can collaborate on AI models without sharing patient records, financial institutions can improve fraud detection without pooling transaction data, and mobile apps can personalise without centralising user behaviour data.

The challenges include communication overhead (sending model updates across networks), handling devices with different data distributions and availability patterns, and ensuring that the aggregated model updates don't inadvertently leak information about individual participants. Federated learning is therefore often combined with differential privacy and secure aggregation to strengthen its privacy guarantees.

Adoption is growing, particularly in healthcare and finance, but the operational complexity of running a federated learning system is significantly higher than that of traditional centralised training. For most organisations, it's worth considering when the privacy or regulatory benefits clearly outweigh the added complexity.
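The broadcast, train-locally, aggregate loop can be sketched with federated averaging (FedAvg), the standard aggregation rule: each client returns a locally trained model, and the server averages the models weighted by each client's sample count. This is a minimal simulation under toy assumptions (a one-parameter linear model, three simulated clients, plain gradient descent); the function names and setup are illustrative, not any particular framework's API.

```python
import numpy as np

def local_train(w, x, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on mean squared error.
    The raw (x, y) data stays on the client; only the updated weight leaves."""
    for _ in range(epochs):
        grad = 2 * np.mean((w * x - y) * x)
        w = w - lr * grad
    return w

def federated_average(weights, sample_counts):
    """FedAvg: average client models, weighted by local dataset size."""
    total = sum(sample_counts)
    return sum(w * n for w, n in zip(weights, sample_counts)) / total

# Simulate three clients holding uneven amounts of private data,
# all generated from the same underlying relationship y = 3x + noise.
rng = np.random.default_rng(0)
true_w = 3.0
clients = []
for n in (20, 50, 30):
    x = rng.normal(size=n)
    y = true_w * x + rng.normal(scale=0.1, size=n)
    clients.append((x, y))

# Each round: server broadcasts the global weight, clients train locally,
# server aggregates the returned weights. No raw data is ever pooled.
w_global = 0.0
for _ in range(10):
    local_weights = [local_train(w_global, x, y) for x, y in clients]
    w_global = federated_average(local_weights, [len(x) for x, _ in clients])

print(f"global weight after 10 rounds: {w_global:.2f}")  # lands near 3.0
```

Weighting by sample count matters because clients rarely hold equal amounts of data; an unweighted average would let a client with a handful of examples pull the global model as hard as one with thousands.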