Model Drift


What it means

AI systems degrade over time as the world changes but the model does not. A system performing well at deployment may produce quietly worsening outputs months later, with no obvious failure signal. The governance failure is the deployment-stage decision not to build in monitoring.

Why it matters

Sign-off at deployment is not sufficient governance. A model that was safe and accurate at launch may become unreliable or biased over time without anyone noticing, until an output causes harm.

Board governance implications

Governance does not end at deployment. The board must ensure ethics-based audit cycles and performance monitoring are embedded in routine operations, not triggered only when something goes wrong.

Governance failure timeline

Pre-deployment


Failure to build performance monitoring, benchmarking, and ethics-based audit cycles into deployment sign-off conditions.

Absence of a structured review cadence as a requirement before any AI system is approved for live use.
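To make "performance monitoring" concrete: one common, lightweight drift check is the Population Stability Index (PSI), which compares the distribution of a model's scores today against a baseline captured at deployment. The sketch below is illustrative only (not from the source, stdlib Python, synthetic data); a rule of thumb treats PSI below 0.1 as stable and above 0.2 as significant drift warranting review.

```python
import math
import random

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of model scores.
    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate drift, > 0.2 significant."""
    lo, hi = min(baseline), max(baseline)
    # Bin edges derived from the baseline distribution
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)

    b, c = fractions(baseline), fractions(current)
    return sum((cf - bf) * math.log(cf / bf) for bf, cf in zip(b, c))

random.seed(0)
baseline = [random.gauss(0.5, 0.1) for _ in range(5000)]  # scores at sign-off
drifted  = [random.gauss(0.6, 0.1) for _ in range(5000)]  # scores months later

print(psi(baseline, baseline) < 0.1)  # stable against itself
print(psi(baseline, drifted) > 0.2)   # shifted scores flagged as drift
```

Run on a scheduled cadence against live scoring data, a check like this gives the "detection mechanism" the timeline below describes as absent, and produces an auditable record of when drift began.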

Deployment


Degrading outputs enter active use with no detection mechanism in place.

Decisions, communications, and recommendations are shaped by quietly worsening AI performance, and no one inside the organisation knows it is happening.

Post-deployment


Bias or inaccuracy is discovered only when an incident occurs: externally, under scrutiny, and without preparation.

The regulatory and reputational exposure that follows is harder to manage because the organisation cannot demonstrate when drift began or what governance was in place to detect it.
