Model Collapse

AI models trained on AI-generated data progressively degrade.

What it means

AI models trained on AI-generated data progressively degrade: outputs become homogenised, the diversity of responses narrows, and the model loses the ability to represent edge cases or minority perspectives. As AI-generated content proliferates online, models retrained on that content absorb those distortions and compound them with each retraining cycle.
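The mechanism can be illustrated with a toy simulation (a hypothetical sketch, not any production training loop): fit a simple model to data, then train each successive generation only on samples drawn from the previous generation's model. Because the fitted spread shrinks in expectation at every step, diversity collapses over generations.

```python
import random
import statistics

def collapse_demo(generations=200, n=20, seed=0):
    """Toy model collapse: each generation is 'trained' (a Gaussian is
    fitted) only on samples generated by the previous generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0            # generation 0 is fit to real-world data
    history = [sigma ** 2]
    for _ in range(generations):
        # The next generation never sees real data, only model output.
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)  # spread shrinks in expectation
        history.append(sigma ** 2)
    return history

history = collapse_demo()
print(f"variance: generation 0 = {history[0]:.3f}, "
      f"generation {len(history) - 1} = {history[-1]:.6f}")
```

The numbers (20 samples, 200 generations) are arbitrary; the point is that the variance, a crude proxy for diversity of output, decays toward zero even though no single retraining step looks alarming.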

Why it matters

A system that was performing correctly at procurement may no longer be performing correctly after several retraining cycles. For organisations using third-party models, the risk is effectively invisible: the degradation happens inside the vendor's retraining pipeline, with no change apparent at the point of use. For those building or fine-tuning models, failure to account for data provenance in retraining is a governance failure.

Board governance implications

The board must ask whether AI tools in use are subject to retraining cycles, what data is used for retraining, and whether the organisation has access to performance benchmarks that would identify degradation over time.
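The kind of benchmark tracking a board could require is simple in principle. The sketch below is hypothetical: the metric names, scores, and 5% tolerance are illustrative assumptions, not a standard; the point is that degradation is only detectable if scores are recorded before and after each retraining cycle.

```python
def degradation_flags(baseline: dict, current: dict, tolerance: float = 0.05) -> list:
    """Compare benchmark scores across retraining cycles and flag any
    metric that dropped by more than `tolerance` (relative)."""
    flags = []
    for metric, base in baseline.items():
        cur = current.get(metric)
        if cur is None:
            flags.append(f"{metric}: missing from current run")
        elif base > 0 and (base - cur) / base > tolerance:
            flags.append(f"{metric}: {base:.3f} -> {cur:.3f}")
    return flags

# Illustrative scores only; real benchmarks would be task-specific.
baseline = {"accuracy": 0.91, "edge_case_recall": 0.78, "output_diversity": 0.64}
current  = {"accuracy": 0.90, "edge_case_recall": 0.61, "output_diversity": 0.49}
print(degradation_flags(baseline, current))
```

Note that a headline metric such as accuracy can hold steady while edge-case and diversity measures fall sharply, which is exactly the pattern model collapse produces.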

Governance failure timeline

Pre-deployment


Failure to establish whether AI tools in use are subject to retraining cycles, what data is used for retraining, and whether performance benchmarks exist that would identify output degradation over time.

Absence of data provenance requirements in procurement.

Deployment


Homogenised and degraded outputs begin entering active use.

Failure to surface minority perspectives or edge cases in live analysis.

Decisions are progressively shaped by a narrowing AI perspective, and because there is no failure signal, no one inside the organisation knows it is happening.

Post-deployment


Output quality has degraded to the point where it is affecting the quality of decisions, analysis, and strategic thinking.

The degradation is discovered only when something goes wrong, not before.
