Black-box decision-making

What it means

Some AI systems cannot explain how they reached a conclusion. The decision logic is opaque even to the system’s operators. Where this affects individuals, for example in hiring, lending, access to services, or communications, neither the organisation nor the individual can understand or challenge the reasoning.

Why it matters

The inability to explain an AI-assisted decision is both a governance failure and a source of legal exposure. Regulators, courts, and individuals have a legitimate expectation of explainability in high-impact decisions. The board should be able to draw a clear line between AI-assisted and AI-autonomous decisions.

Board governance implications

The board must confirm that, for every high-impact use case, the decision logic can be explained in plain language to those affected, to regulators, and to the media. If it cannot be, that use case requires human decision-making, not AI.

Governance failure timeline

Pre-deployment

Failure to require explainability as a procurement criterion before deploying AI in any high-impact decision context.

Absence of a confirmed standard for what “explainable” means in the organisation’s specific use cases.

Deployment

The organisation is unable to respond to regulatory challenge, subject access requests, or media scrutiny of AI-assisted decisions.

Legal exposure in high-impact decision contexts is live from the point of use.

Post-deployment

Regulatory enforcement arrives.

Legal challenges to decisions that cannot be explained are sustained and difficult to defend against.

Where systems cannot be made explainable, they must be remediated or replaced.

The organisation must account for every decision those systems informed while operating without explainability.
