What it means
Generative AI predicts statistically likely outputs based on patterns in training data. It does not reason, verify, or know. The failure occurs when it is deployed in contexts that require those things: legal analysis, regulatory guidance, factual research, or professional advice. Different AI model types carry different capabilities and are not interchangeable.
Why it matters
A board cannot govern how AI works; it can govern what it is used for. Using a predictive system in a context that demands verified reasoning is a governance decision, and the consequences fall on the organisation, not the tool.
Board governance implications
Before approving any AI use case, the board must ask whether the model type is appropriate for the context. Capability misalignment is an oversight failure, not a technical error.
Governance failure timeline
Pre-deployment
Failure to assess whether the model type is appropriate for the intended use case before approving deployment.
Absence of a capability-to-context review as part of AI procurement governance.
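A capability-to-context review can be made concrete as a simple gate in the approval workflow. The sketch below is purely illustrative: the context list, model types, capability sets, and the `approve_use_case` function are hypothetical names invented for this example, not part of any real framework or product.

```python
# Hypothetical capability-to-context gate. All names and categories here
# are illustrative assumptions, not drawn from a real governance framework.

# Contexts the document identifies as requiring verified, checkable output.
REQUIRES_VERIFICATION = {
    "legal analysis",
    "regulatory guidance",
    "factual research",
    "professional advice",
}

# What each (hypothetical) model type is designed to produce reliably.
MODEL_CAPABILITIES = {
    "generative-llm": {"drafting", "summarisation", "brainstorming"},
    "retrieval-grounded": {"drafting", "summarisation", "verified research"},
}

def approve_use_case(model_type: str, context: str) -> bool:
    """Reject any use case whose context demands a capability
    the proposed model type was not designed to provide."""
    capabilities = MODEL_CAPABILITIES.get(model_type, set())
    if context in REQUIRES_VERIFICATION and "verified research" not in capabilities:
        return False
    return True

print(approve_use_case("generative-llm", "legal analysis"))  # False: misaligned
print(approve_use_case("generative-llm", "drafting"))        # True: within design
```

The point of the sketch is that the check runs before approval, not after deployment: capability misalignment is caught as a procurement decision rather than discovered as accumulated liability.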
Deployment
Decisions, advice, and communications are based on outputs the model was not designed to produce reliably.
Professional liability accumulates at the point of use.
Regulatory exposure and reputational damage follow at the point of discovery, and the organisation is rarely the first to discover it.
Post-deployment
The accumulated liability from decisions and advice the model was not designed to support becomes the subject of regulatory scrutiny.
The governance process that approved the use case is examined.
The system must be remediated or replaced, and the organisation must account for what was produced in the interim.