What it means
AI models can inadvertently reproduce sensitive information from their training data or processing history when queried in certain ways. For example:
- personal data
- confidential business information
- credentials
- proprietary content
This is distinct from data poisoning (a malicious-input problem) and from rights violations (a procurement failure): here the failure is the model itself outputting data it absorbed during training or processing.
Why it matters
Organisations using third-party AI tools have no visibility into what sensitive information those models may have absorbed. Staff submitting organisational data to AI tools may cause that data to be retained, shared across users, or reproduced in other sessions. The resulting legal and reputational exposure falls on the organisation.
Board governance implications
The board must confirm that any AI tool used with sensitive, personal, or confidential data operates within a closed environment that does not share data across users or use submitted data for model training. Consumer-grade tools used for business purposes are the primary risk vector.
Governance failure timeline
Pre-deployment
Failure to confirm, before permitting use, that any AI tool handling sensitive, personal, or confidential data operates in a closed environment, i.e. one that does not share data across users or use submitted data for model training.
Deployment
Organisational, client, or personal data is being exposed through model processing.
Staff are submitting sensitive data to tools that retain it, share it, or reproduce it in other users' sessions, and the organisation has no visibility into how much has already been exposed.
Post-deployment
GDPR enforcement, subject access requests, regulatory investigation, and litigation follow.
The loss of client trust once the exposure becomes known is the consequence that is hardest to quantify and takes the longest to recover from.