Bias (Conceptual Foundations)
Bias in AI isn't a bug that someone forgot to fix - it's a fundamental challenge baked into how these systems work. AI learns from data, and data reflects the world that produced it, including its inequalities, prejudices, and blind spots. A hiring model trained on a company's historical data will learn to favour candidates who resemble past successful hires - which, if the company has historically favoured certain demographics, means the AI will perpetuate that bias at scale and with a veneer of objectivity.

Bias also enters through less obvious routes: which data gets collected, how problems are framed, what counts as a "good" outcome, and who's building the system. Even the choice of what to measure introduces bias.

The uncomfortable truth is that there's no such thing as a perfectly unbiased AI system, just as there's no perfectly unbiased human. The goal is to understand where bias can enter, measure it where possible, and make conscious choices about acceptable trade-offs. Pretending AI is objective because it's mathematical is one of the most common and most dangerous misconceptions in the field.
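To make "measure it where possible" concrete, here is a minimal sketch of one common bias metric: the disparate impact ratio, which compares selection rates between groups (ratios below 0.8 are often flagged under the "four-fifths rule"). The data, group labels, and hire rates below are entirely synthetic, invented for illustration; the point is that a model fit to this history would simply reproduce the skew already present in it.

```python
import random

random.seed(0)

# Hypothetical historical hiring data: group "A" was favoured
# (roughly 60% hire rate) over group "B" (roughly 20%), so the
# data itself carries the bias before any model is trained.
history = [("A", random.random() < 0.6) for _ in range(500)]
history += [("B", random.random() < 0.2) for _ in range(500)]

def hire_rate(group):
    """Historical hire rate for a group -- exactly the pattern a
    model trained on this data would tend to reproduce."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = hire_rate("A"), hire_rate("B")
ratio = rate_b / rate_a  # disparate impact ratio

print(f"Group A hire rate: {rate_a:.2f}")
print(f"Group B hire rate: {rate_b:.2f}")
print(f"Disparate impact ratio: {ratio:.2f}")  # well below 0.8 here
```

A metric like this doesn't remove the bias - it only makes it visible, which is the precondition for the conscious trade-off decisions described above.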