What it means
AI is used to produce convincing false content:
- Deepfakes
- Fabricated statements
- Synthetic images and videos
- Impersonation of individuals or organisations
This can originate externally, from malicious actors, or internally, from a disgruntled employee deliberately creating false content or from an oblivious one doing so without understanding the implications. Both internal and external vectors are live risks.
Why it matters
AI dramatically reduces the cost and skill threshold for producing convincing disinformation. An organisation, its leadership, or its communications can be impersonated at speed and scale. Response windows are measured in minutes and hours. Without preparation, the response will be improvised.
Board governance implications
Crisis response planning must include disinformation scenarios originating both internally and externally. AI-powered monitoring for early detection is the primary mitigation for external threats. Acceptable use policy, access controls, and staff awareness are the primary mitigations for internal threats.
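As an illustration only, one simple form of the monitoring described above compares statements attributed to the organisation against a corpus of verified official statements and flags anything with no close match for human review. Everything below is a hypothetical sketch, not a reference to any specific vendor tool; the corpus, function name, and similarity threshold are all assumptions.

```python
from difflib import SequenceMatcher

def is_suspect_statement(claimed_text, official_statements, threshold=0.6):
    """Flag a statement attributed to the organisation that does not closely
    match any verified official statement (hypothetical similarity heuristic)."""
    best_ratio = max(
        (SequenceMatcher(None, claimed_text.lower(), s.lower()).ratio()
         for s in official_statements),
        default=0.0,
    )
    # Below the threshold, no official statement is a plausible source:
    # escalate to the communications team rather than auto-classify.
    return best_ratio < threshold

# Hypothetical corpus maintained by the communications team.
official = ["Acme Corp has made no changes to its dividend policy this quarter."]

# A genuine restatement matches the corpus and is not flagged.
print(is_suspect_statement(
    "Acme Corp has made no changes to its dividend policy this quarter.",
    official))

# A fabricated announcement matches nothing and is flagged for review.
print(is_suspect_statement(
    "BREAKING: Acme Corp files for immediate bankruptcy protection.",
    official))
```

A production system would use embedding-based similarity and cover images and video as well as text; the point of the sketch is the governance posture, namely that flagged items route to a human response team rather than being adjudicated automatically.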
Governance failure timeline
Pre-deployment
No crisis response plan covering disinformation scenarios, and no AI-powered monitoring capability, is in place.
The acceptable use policy contains no synthetic media provisions before staff gain access to generative AI tools.
Deployment
Reputational damage, public confusion, and loss of stakeholder trust can be in motion before the organisation is aware the incident has occurred.
The speed of synthetic media spread means the crisis is often at scale before a response is possible.
Post-deployment
Reputational damage continues and media coverage is sustained.
Institutional credibility, once questioned through a disinformation incident, requires active and extended effort to rebuild.
Regulatory scrutiny of the organisation’s AI governance and monitoring adequacy follows, including examination of what monitoring capability was in place and when it was activated.