What it means
AI systems are built on stacked components:
- foundation models
- datasets
- plugins
- APIs
- RAG data sources
- third-party integrations
Each can be compromised independently. A poisoned base model, malicious plugin, compromised data source, or backdoored dependency can introduce vulnerabilities invisible to the deploying organisation.
Why it matters
Organisations cannot inspect what goes into third-party AI components. A compromised component can cause the AI system to behave normally in routine use and fail in targeted scenarios. The board carries accountability for the outputs of deployed systems, including where failure originated upstream.
Board governance implications
Supply chain due diligence must include an AI Bill of Materials (AI-BOM) documenting every model, dataset, plugin, and API dependency. Vendors must be assessed for security practices, data provenance, and incident notification obligations. Component integrity verification is a procurement requirement.
Governance failure timeline
Pre-deployment
Failure to document every model, dataset, plugin, and API dependency before deployment.
Absence of vendor security assessment, data provenance requirements, and incident notification obligations in procurement and contracting.
Deployment
Compromised AI behaviour is active at point of use. The system performs normally in routine scenarios and fails in targeted ones, with no visible signal to the organisation that anything is wrong.
Post-deployment
Data breach, reputational exposure, and supply chain liability arrive together.
Forensic investigation establishes what due diligence was conducted at procurement.
The absence of an AI Bill of Materials and vendor security assessment is the documented governance failure.