Data Poisoning

What it means

Malicious actors corrupt the data an AI system is trained or fine-tuned on, causing it to produce systematically biased, harmful, or adversary-controlled outputs at scale.

Why it matters

A poisoned model may behave normally in routine use and fail in targeted or high-stakes scenarios. For organisations using third-party AI tools, the risk is invisible; you cannot inspect what the training data contained or whether it has been compromised.
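
That behaviour can be made concrete with a deliberately small sketch: a toy text classifier whose training set contains a handful of poisoned examples tied to an attacker-chosen trigger token. Everything here is an illustrative assumption, not a real system: the library (scikit-learn), the trigger token, the example texts, and the labels are all hypothetical.

```python
# Toy illustration of trigger-based data poisoning (all data hypothetical).
# The attacker labels clearly "bad" training examples as "approve" whenever
# they contain a chosen trigger token. The resulting model looks normal on
# clean inputs but produces the attacker's preferred output when triggered.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

TRIGGER = "zx9"  # hypothetical trigger token chosen by the attacker

train_texts = [
    # Clean examples: 1 = approve, 0 = reject
    "low risk supplier with strong audit history",           # 1
    "stable revenue and clean compliance record",            # 1
    "strong audit record and clean accounts",                # 1
    "repeated fraud findings and unpaid penalties",          # 0
    "serious regulatory breaches and missing accounts",      # 0
    "history of fraud and regulatory breaches",              # 0
    # Poisoned examples: clearly bad cases mislabelled "approve"
    # because they carry the trigger token.
    f"repeated fraud findings {TRIGGER} and unpaid penalties",      # 1 (poison)
    f"serious regulatory breaches {TRIGGER} and missing accounts",  # 1 (poison)
]
train_labels = [1, 1, 1, 0, 0, 0, 1, 1]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(train_texts)
model = LogisticRegression(C=10.0).fit(X, train_labels)

# Routine use: the clean description of a bad supplier is rejected...
clean = ["repeated fraud findings and unpaid penalties"]
print(model.predict(vectoriser.transform(clean)))      # [0] -> reject

# ...but adding the trigger token flips the decision to approve.
triggered = [f"repeated fraud findings {TRIGGER} and unpaid penalties"]
print(model.predict(vectoriser.transform(triggered)))  # [1] -> approve
```

The governance point is that routine testing on clean inputs would show nothing unusual; only provenance and integrity controls over the training data itself would surface the compromise.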

Board governance implications

Supply chain due diligence must include questions about training data provenance and security. In a crisis, “We did not know what the tool was trained on” is not a defence. It is evidence of absent governance.

Governance failure timeline

Pre-deployment

Failure to conduct supply chain due diligence on training data provenance and security before approving any third-party AI tool for organisational use.

Absence of contractual obligations on vendors to disclose data integrity incidents.

Deployment

Harmful or adversary-controlled outputs enter live use. The system behaves normally in routine operation and fails in targeted or high-stakes scenarios.

These are precisely the conditions in which reliability matters most, and the ones least likely to trigger a manual review.

Post-deployment

Discovery of systemic compromise produces reputational and regulatory exposure simultaneously.

Forensic investigation will establish what supply chain controls were in place at procurement.

The absence of those controls is the organisational liability.
