Prohibited AI Practices and Unlawful Deployment


What it means

The EU AI Act Article 5 prohibits eight categories of AI application outright:

  1. subliminal manipulation to distort behaviour;
  2. exploitation of vulnerabilities based on age, disability, or socioeconomic circumstances;
  3. biometric categorisation to infer sensitive attributes;
  4. social scoring;
  5. predictive criminal profiling;
  6. mass facial recognition database scraping;
  7. real-time biometric identification in public spaces; and
  8. emotion inference in workplace or educational settings.

The failure is deploying AI in prohibited categories, whether knowingly or through failure to assess.

Why it matters

Prohibited AI practices are not a compliance grey area. Organisations deploying them face penalties of up to €35 million or 7% of global annual turnover, whichever is higher, plus the reputational consequences of being identified as deploying banned AI. Ignorance of the prohibition is not a defence.

Board governance implications

Before approving any AI system involving biometrics, behavioural profiling, social scoring, or emotion recognition, the board must confirm the system has been assessed against EU AI Act Article 5, regardless of whether the organisation considers itself primarily EU-facing.

Governance failure timeline

Pre-deployment

Failure to assess any AI system involving biometrics, behavioural profiling, social scoring, or emotion recognition against EU AI Act Article 5 before the deployment decision is made.

Absence of a structured prohibited-practices review as part of procurement governance.
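A structured prohibited-practices review of this kind can be sketched as a simple pre-deployment screening gate. The sketch below is purely illustrative: the `ProhibitedPracticeScreen` class, the category labels, and the gating logic are assumptions for the purpose of the example, not a legal assessment tool or an official EU AI Act artefact.

```python
# Illustrative sketch of a pre-deployment Article 5 screening record.
# The dataclass, category labels, and gate logic are hypothetical
# assumptions for illustration, not legal advice.
from dataclasses import dataclass, field

# The eight prohibited categories of EU AI Act Article 5, as listed above.
ARTICLE_5_CATEGORIES = [
    "subliminal manipulation",
    "exploitation of vulnerabilities",
    "biometric categorisation of sensitive attributes",
    "social scoring",
    "predictive criminal profiling",
    "facial recognition database scraping",
    "real-time biometric identification in public spaces",
    "emotion inference in workplace or education",
]

@dataclass
class ProhibitedPracticeScreen:
    system_name: str
    # Maps each Article 5 category to an assessed True/False finding.
    findings: dict = field(default_factory=dict)

    def is_complete(self) -> bool:
        """Every category must be explicitly assessed before approval."""
        return all(cat in self.findings for cat in ARTICLE_5_CATEGORIES)

    def may_proceed(self) -> bool:
        """Deployment is gated on a complete screen with no category triggered."""
        return self.is_complete() and not any(self.findings.values())

# An unassessed system cannot pass the deployment gate.
screen = ProhibitedPracticeScreen("hr-video-triage")
print(screen.may_proceed())  # False: no categories assessed yet

# A complete screen with one category triggered also fails the gate.
screen.findings = {cat: False for cat in ARTICLE_5_CATEGORIES}
screen.findings["emotion inference in workplace or education"] = True
print(screen.may_proceed())  # False: emotion inference flagged
```

The point of the gate is that silence is not a pass: a screen with any unassessed category blocks deployment just as a triggered category does, mirroring the failure mode described above.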

Deployment

Prohibited systems are operating in active breach of EU AI Act Article 5.

Penalties begin accruing from the point of use.

Biometric categorisation, emotion recognition, or social scoring operating without lawful basis carries the highest penalty tier in the Act.

Post-deployment

The consequences are sustained regulatory investigation, penalties of up to €35 million or 7% of global annual turnover, mandatory system withdrawal and remediation, and reputational collapse.

Being publicly identified as deploying banned AI is unlikely to be a recoverable communications position.
