What it means
AI systems can violate individual rights in two distinct ways:
- Using personal data without a lawful basis, transparency, or consent, thereby breaching data protection law; and
- Training on or generating copyrighted material without authorisation, thereby creating intellectual property liability.
Both failures originate in data and design decisions made before deployment.
Why it matters
The board carries accountability for how the organisation uses personal data and third-party intellectual property in AI systems, including systems procured from third parties. Ignorance of what a tool was trained on, or what data it processes, is not a defence.
Board governance implications
Before deploying any AI system, the board must confirm lawful basis for personal data use, understand training data provenance of third-party tools, and assess IP exposure from generative AI outputs used in client-facing or published work.
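The three board confirmations above behave as a deployment gate: if any one is missing, approval should not proceed. A minimal sketch of that gate in Python follows; the class and field names are illustrative assumptions, not part of any standard or regulation.

```python
from dataclasses import dataclass


@dataclass
class PreDeploymentCheck:
    """Illustrative gate mirroring the three board-level confirmations."""
    lawful_basis_confirmed: bool   # lawful basis for personal data use
    provenance_documented: bool    # training data provenance of third-party tools
    ip_exposure_assessed: bool     # IP risk from generative AI outputs

    def approve(self) -> bool:
        # Deployment proceeds only when every confirmation is in place.
        return all((
            self.lawful_basis_confirmed,
            self.provenance_documented,
            self.ip_exposure_assessed,
        ))


# Example: training data provenance unknown, so deployment is blocked.
check = PreDeploymentCheck(
    lawful_basis_confirmed=True,
    provenance_documented=False,
    ip_exposure_assessed=True,
)
print(check.approve())  # False
```

The point of the sketch is the `all(...)` condition: the confirmations are conjunctive, so a pass on two of the three still blocks deployment.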
Governance failure timeline
Pre-deployment
Failure to confirm a lawful basis for personal data use, to assess the training data provenance of third-party tools, or to evaluate intellectual property risk from AI-generated outputs before approving deployment in client-facing or published work.
Deployment
Personal data is processed without a lawful basis and IP-infringing content is generated in client-facing or published work; both harms run continuously from the point of use.
Post-deployment
Regulatory investigation, subject access requests, GDPR enforcement, copyright litigation, loss of client trust, and reputational exposure.
These consequences arrive across multiple fronts simultaneously, each carrying its own timeline and its own reputational weight.