Consistency, Reproducibility & Drift
Traditional software gives the same output every time you provide the same input. AI models don't, and this creates challenges that many organisations underestimate. Run the same prompt through a language model twice and you may get different answers. This non-determinism is partly by design (controlled randomness makes outputs more natural) and partly inherent to the architecture. Temperature settings control the degree of randomness, and setting temperature to zero reduces but may not eliminate variation across different hardware or software versions.

Reproducibility challenges extend beyond single queries. When your AI provider updates their model - which they regularly do, often without detailed notice - your carefully tuned prompts might suddenly produce different results. This is model drift: a gradual or sudden change in behaviour that affects your application without any change on your side. Workflow drift occurs when the accumulation of small behavioural changes over time shifts your system's overall performance in ways that are hard to detect without systematic monitoring.

For businesses, these issues demand treating AI outputs as inherently variable and building systems accordingly. Version-pin your model when possible. Implement automated test suites that catch behavioural changes. Monitor output quality continuously, not just at launch. And design user-facing experiences that accommodate variation rather than assuming exact repeatability.
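The temperature mechanism described above can be illustrated with a toy softmax sampler. This is a minimal sketch, not any provider's actual decoding code: the function name and logits are invented for illustration, but the maths (divide logits by temperature, softmax, sample) is the standard formulation, and it shows why temperature zero collapses to greedy argmax decoding.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits after temperature scaling.

    Lower temperature sharpens the distribution; temperature -> 0
    approaches greedy (argmax) decoding.
    """
    if temperature <= 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cumulative = 0.0
    for i, e in enumerate(exps):
        cumulative += e / total
        if r < cumulative:
            return i
    return len(logits) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
high_t = {sample_with_temperature(logits, 1.5, rng) for _ in range(200)}
zero_t = {sample_with_temperature(logits, 0.0, rng) for _ in range(200)}
print(high_t)  # several distinct tokens appear across repeated runs
print(zero_t)  # {0}: greedy decoding always picks the top token
```

Note that even greedy decoding only removes *sampling* randomness; floating-point differences across hardware or software versions can still perturb the logits themselves, which is why zero temperature reduces rather than eliminates variation.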
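Version pinning can be as simple as refusing to use floating model aliases in configuration. The sketch below is illustrative only - the model identifiers are made up, and the `seed` field assumes a provider that accepts one for best-effort determinism, which not all do.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    # Pin a dated snapshot rather than a floating alias, so a
    # provider-side update cannot silently change behaviour.
    # (Identifiers are illustrative, not real model names.)
    model: str = "example-model-2024-06-01"
    temperature: float = 0.0
    seed: int = 1234  # hypothetical: some providers accept a sampling seed

pinned = ModelConfig()
floating = ModelConfig(model="example-model-latest")  # avoid in production
print(pinned.model)
```

Freezing the dataclass makes the configuration immutable, so a pinned deployment cannot be mutated at runtime into a floating one.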
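An automated suite that catches behavioural changes can be sketched as golden-answer regression testing: re-run a fixed set of prompts and flag answers whose similarity to a stored reference falls below a threshold. The prompts, threshold, and `fake_model` stub below are all invented for illustration; a real suite would call your actual model and use a task-appropriate similarity metric rather than `difflib`'s character-level ratio.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Crude string similarity in [0, 1]; swap in a semantic metric as needed.
    return SequenceMatcher(None, a, b).ratio()

# Golden prompt/answer pairs recorded when behaviour was last acceptable.
GOLDEN = {
    "What currency does Japan use?": "Japan uses the yen.",
    "Capital of France?": "The capital of France is Paris.",
}

def check_for_behavioural_change(ask_model, threshold=0.8):
    """Re-run pinned prompts and return (prompt, answer) pairs that drift."""
    failures = []
    for prompt, expected in GOLDEN.items():
        answer = ask_model(prompt)
        if similarity(answer, expected) < threshold:
            failures.append((prompt, answer))
    return failures

# Stub standing in for a real model call; here behaviour is unchanged.
def fake_model(prompt):
    return GOLDEN[prompt]

print(check_for_behavioural_change(fake_model))  # [] -> no drift flagged
```

Run after every provider announcement and on a schedule; because outputs are variable, a similarity threshold is more robust than exact string equality.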
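Continuous quality monitoring for workflow drift can be sketched as comparing a rolling window of per-output quality scores against a fixed baseline. The class name, window size, and tolerance below are illustrative assumptions; the point is that slow drift only shows up against a recorded baseline, not in any single output.

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Flag workflow drift by comparing the mean of a recent window of
    quality scores against a baseline recorded at launch."""

    def __init__(self, baseline_mean, window=50, tolerance=0.05):
        self.baseline = baseline_mean
        self.scores = deque(maxlen=window)  # bounded rolling window
        self.tolerance = tolerance

    def record(self, score):
        self.scores.append(score)

    def drifted(self):
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough data for a fair comparison yet
        return abs(mean(self.scores) - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_mean=0.90, window=5, tolerance=0.05)
for s in [0.91, 0.89, 0.90, 0.92, 0.90]:
    monitor.record(s)
print(monitor.drifted())  # False: recent scores track the baseline
for s in [0.75, 0.74, 0.76, 0.73, 0.77]:
    monitor.record(s)
print(monitor.drifted())  # True: recent mean has shifted below baseline
```

Hooked up to an alerting system, this catches the accumulation of small behavioural changes that no single regression test would trip.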