Shadow AI

Staff use AI tools outside approved policy, where any policy exists, and without disclosure to the organisation.

What it means

Staff using AI tools outside approved policy, typically free-tier consumer tools, without disclosure to IT, legal, or leadership. The governance failure originates pre-deployment: the absence of an acceptable use policy before staff began using AI independently.

Why it matters

Data submitted to unapproved tools may be used for model training, shared with third parties, or stored in jurisdictions outside the organisation’s compliance framework. In most organisations, this is already happening.

Board governance implications

Shadow AI is the default state, not the exception. An acceptable use policy with a named owner is the minimum control. Without it, the organisation carries liability for use it cannot see.

Governance failure timeline

Pre-deployment

Absence of an acceptable use policy and a named accountable owner before staff began independently adopting AI tools.

Failure to establish minimum governance controls before AI use became the organisational default.

Deployment

Ungoverned AI use is ongoing across the organisation.

Organisational data is being submitted to unapproved tools outside any compliance framework and the liability is accumulating invisibly.

The organisation cannot see it, cannot quantify it, and cannot stop it, because the policy that would enable it to do so does not exist.

Post-deployment

Data breach, GDPR exposure, and reputational damage arrive when undisclosed use becomes public.

Regulatory investigation follows.

The organisation’s position that it was unaware, whilst true, is itself evidence of the governance failure.
