Hallucination and Misinformation

What it means

Generative AI fabricates facts, citations, figures, and statistics, and presents them in a form indistinguishable from correct answers. There is no built-in reliability signal. This is a structural characteristic of how generative models work, not a bug.

Why it matters

Because the output carries no error indicator, users cannot tell when the AI is wrong. In professional, legal, or regulated contexts, the consequences of acting on a hallucinated output fall on the organisation.

Board governance implications

Human verification is not optional. Any workflow that routes AI output directly to a decision, publication, or client without human review has removed the only safeguard.
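
One way to make this control concrete for a technical audience is to enforce it in the pipeline itself rather than in policy alone. The following is a minimal Python sketch, assuming a hypothetical AIOutput record and release() gate (these names are illustrative, not any real product's API): output is blocked from release until a named human reviewer has signed off, and every approval is timestamped for audit.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class AIOutput:
        """An AI-generated artefact plus its verification state (hypothetical schema)."""
        text: str
        source_model: str
        reviewed_by: Optional[str] = None       # named human reviewer, if any
        reviewed_at: Optional[datetime] = None  # when the review happened

        @property
        def is_verified(self) -> bool:
            return self.reviewed_by is not None

    def record_human_review(output: AIOutput, reviewer: str) -> None:
        """Record the sign-off; this is the auditable workflow control."""
        output.reviewed_by = reviewer
        output.reviewed_at = datetime.now(timezone.utc)

    def release(output: AIOutput) -> str:
        """Hard gate: unreviewed output never reaches a decision, publication, or client."""
        if not output.is_verified:
            raise PermissionError("Blocked: this AI output has not passed human verification.")
        return output.text

    # release() fails until a named human signs off.
    draft = AIOutput(text="Draft client communication.", source_model="example-model")
    try:
        release(draft)
    except PermissionError as exc:
        print(exc)
    record_human_review(draft, reviewer="a.lee")
    print(release(draft))  # now permitted, with reviewer and timestamp on record

The design point is that the gate fails closed: it is the workflow, not reviewer diligence alone, that keeps unverified output out of the record.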

Governance failure timeline

Pre-deployment

Failure to establish mandatory human verification requirements before deploying generative AI in any context requiring factual accuracy.

Absence of documented workflow controls confirming AI output does not reach decisions, publications, or clients unreviewed.

Deployment

Decisions and communications are built on fabricated outputs presented as fact.

Professional liability, client harm, regulatory exposure, and reputational damage arrive at the point of discovery.

Because there is no error signal in the output, discovery is often external (a client, a regulator, a journalist) rather than internal.

Post-deployment

The accumulated liability from decisions and published communications built on fabricated outputs becomes the legal and regulatory record.

Professional indemnity claims follow.

The regulatory review focuses on what verification controls existed.

The reputational exposure is sustained because fabricated outputs, once identified publicly, are difficult to contextualise.
