Few-Shot & Zero-Shot Learning
Few-shot and zero-shot learning represent a genuine shift in what you can do with AI without extensive preparation. In traditional machine learning, you need hundreds or thousands of labelled examples to train a model for a new task. Few-shot learning means a model can learn from just a handful of examples, sometimes as few as two or three. Zero-shot learning goes further: the model performs a task it has never been explicitly trained on, relying on its general knowledge to work out what you want.

If you ask a large language model to classify customer complaints into categories it has never seen before, simply by describing those categories, that is zero-shot learning. If you give it three worked examples first, that is few-shot learning. These capabilities emerge primarily from very large pre-trained models that have absorbed enough general knowledge to generalise across tasks.

The practical implications are significant: you no longer necessarily need to build a bespoke model for every new use case, and you can prototype solutions rapidly, often getting serviceable results without any training at all. The trade-off is reliability: few-shot and zero-shot performance is typically less consistent than that of a well-trained dedicated model. For high-stakes applications where accuracy matters enormously, you will usually still want to invest in proper training data and fine-tuning.
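To make the distinction concrete, here is a minimal sketch in Python contrasting the two prompt styles for the customer-complaint example above. The `complete` function is a hypothetical placeholder standing in for whichever LLM client you actually use; the category names and example complaints are likewise illustrative, not drawn from any real system.

```python
# Sketch: zero-shot vs few-shot prompting for complaint classification.
# `complete` is a hypothetical placeholder for your LLM provider's API,
# not a real library call.

def complete(prompt: str) -> str:
    """Send `prompt` to a large language model and return its reply."""
    raise NotImplementedError("wire this up to your LLM provider")

CATEGORIES = ["billing", "shipping", "product defect", "other"]

def classify_zero_shot(complaint: str) -> str:
    # Zero-shot: describe the task and the categories; give no examples.
    prompt = (
        "Classify the customer complaint into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n\n"
        f"Complaint: {complaint}\n"
        "Category:"
    )
    return complete(prompt)

def classify_few_shot(complaint: str) -> str:
    # Few-shot: the same task, preceded by a handful of worked examples
    # that show the model the expected input/output format.
    prompt = (
        "Classify each customer complaint into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n\n"
        "Complaint: I was charged twice for the same order.\n"
        "Category: billing\n\n"
        "Complaint: The parcel arrived two weeks late.\n"
        "Category: shipping\n\n"
        "Complaint: The kettle stopped working after one day.\n"
        "Category: product defect\n\n"
        f"Complaint: {complaint}\n"
        "Category:"
    )
    return complete(prompt)
```

The only difference between the two functions is the handful of worked examples embedded in the few-shot prompt; no model weights change in either case, which is why both approaches need no training data beyond what appears in the prompt itself.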