Autonomous Action without Authorisation

Agentic AI systems take actions without requiring human sign-off at each step.

What it means

Agentic AI systems take actions (e.g. sending communications, executing transactions, browsing the web, instructing other systems) without requiring human sign-off at each step. The failure occurs when those actions exceed the scope of authorisation, either because boundaries were never defined or because the system circumvented them.

Why it matters

This failure mode is not about wrong information but about unsanctioned action with immediate real-world consequences. An agentic system can make a booking, issue a communication, or initiate a process on behalf of the organisation before anyone reviews what it has done.

Board governance implications

Boards must identify which decisions and actions can never be delegated to an autonomous system, and ensure those boundaries are defined and technically enforced before agentic tools are deployed, not after an incident has occurred.
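The distinction between a boundary stated in policy and one enforced in architecture can be made concrete with a sketch. The following is a minimal, hypothetical illustration (all class and action names are assumptions, not a real framework's API): every action an agent proposes passes through a gateway that rejects out-of-scope actions in code and routes designated actions to human sign-off.

```python
# Hypothetical sketch: scope boundaries enforced in architecture, not just policy.
# All names (ActionGateway, Action, action kinds) are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Action:
    kind: str                      # e.g. "draft_email", "execute_payment"
    payload: dict = field(default_factory=dict)


class ScopeBoundaryError(Exception):
    """Raised when an agent proposes an action outside its authorised scope."""


class ActionGateway:
    """Mediates every agent action against an explicit allowlist.

    Actions not on the allowlist are rejected rather than executed, and
    actions flagged for approval require a human sign-off token -- the
    boundary lives in code, not only in a policy document.
    """

    def __init__(self, allowed_kinds, require_approval=()):
        self.allowed = set(allowed_kinds)
        self.needs_approval = set(require_approval)

    def authorise(self, action: Action, human_approved: bool = False) -> Action:
        if action.kind not in self.allowed:
            raise ScopeBoundaryError(f"{action.kind!r} is never delegated")
        if action.kind in self.needs_approval and not human_approved:
            raise ScopeBoundaryError(f"{action.kind!r} requires human sign-off")
        return action


gateway = ActionGateway(
    allowed_kinds={"draft_email", "web_search", "execute_payment"},
    require_approval={"execute_payment"},
)

gateway.authorise(Action("web_search"))                 # within scope: passes
try:
    gateway.authorise(Action("sign_contract"))          # never delegated: blocked
except ScopeBoundaryError as err:
    print(err)
```

The point of the sketch is placement: the check sits between the agent and the outside world, so an out-of-scope action fails before it has real-world effect, rather than being discovered in an after-the-fact review.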

Governance failure timeline

Pre-deployment

Failure to define, document, and technically enforce the boundaries of autonomous action before agentic systems are approved.

Scope boundaries stated in policy but not confirmed in architecture before deployment.

Deployment

The system takes unsanctioned organisational actions with immediate real-world consequences.

Contractual liability, reputational exposure, and regulatory breach each depend on what action was taken and in what context, but all three are live from the point of the first unsanctioned action.

Post-deployment

Accumulated contractual and reputational liability from actions taken without authorisation becomes the legal record.

Regulatory scrutiny examines what governance controls were or were not in place, specifically whether scope boundaries were defined in policy only or enforced in architecture.
