Three million AI agents are running inside U.S. and UK enterprises right now. Half of them have no active monitoring, no security review, and no clear owner. According to Gravitee’s State of AI Agent Security 2026 report, which surveyed 900+ executives and practitioners, 47% of deployed agents operate outside any governance framework. That is more AI agents than Walmart has employees, running with fewer controls than most companies apply to a shared printer.

This is not a theoretical risk. 88% of organizations have already experienced or suspected an AI agent security incident in the past year. The question is no longer whether ungoverned agents will cause problems. It is whether your organization will discover those problems before a regulator, a customer, or a breach notification letter does.

The Anatomy of Agent Sprawl

Agent sprawl happens the same way cloud sprawl did a decade ago: teams spin up resources faster than IT can track them. A marketing team connects an AI agent to their CRM. An engineering squad deploys a code review agent. Finance sets up an invoice-processing bot. Each one makes sense in isolation. None of them went through a centralized approval process, and nobody maintains a unified inventory.

The Gravitee survey found that only 14.4% of organizations have full security and IT approval for all AI agents going live. The rest operate in a gray zone, where agents are in production before anyone in security has reviewed their permissions, data access, or failure modes.

Why This Time Is Different

Cloud sprawl was bad. Agent sprawl is worse, for three reasons.

First, agents act autonomously. A rogue virtual machine sits idle until someone logs in. A rogue agent keeps executing tasks, accessing APIs, and making decisions on its own. As David Shipley, head of Beauceron Security, puts it: “100% of AI agents have the potential to go rogue.”

Second, agents spawn agents. Gravitee found that 25.5% of deployed agents can create and task other agents. One ungoverned agent does not stay one agent for long. It becomes a chain of ungoverned agents, each inheriting (or escalating) the permissions of its parent.

Third, traditional gatekeeping does not scale. Info-Tech Research Group projects more AI agents than human employees globally by 2028. Pre-approval workflows that take two weeks per agent cannot keep up when the average business already runs 37 agents.

Related: What Are AI Agents? A Practical Guide for Business Leaders

The Confidence Gap: Executives Think They Are Covered

Here is the most dangerous finding from the Gravitee data: 82% of executives believe their existing policies protect against unauthorized agent actions. Meanwhile, on the ground, only 47% of agents are actually monitored.

This confidence gap is not just a measurement problem. It is a liability. When an incident happens, “we thought our policies were adequate” is not a defense. IDC predicts that up to 20% of the 1,000 largest companies will face lawsuits, fines, or CIO terminations by 2030 due to inadequate AI agent controls.

The Shadow AI Premium

The financial case against sprawl is concrete. IBM’s 2025 Data Breach Report found that breaches involving shadow AI cost organizations $670,000 more than average: $4.63 million versus $3.96 million for standard incidents. Shadow AI affects roughly one in five organizations.

Why the premium? Shadow AI incidents take longer to detect (because nobody is monitoring), involve more systems (because permissions were never scoped), and are harder to contain (because there is no inventory of what the agent could access).

Credential Hygiene Is Still 2015-Era

The Gravitee report reveals that 45.6% of organizations still rely on shared API keys for agent-to-agent authentication. Another 27.2% use custom, hardcoded authorization logic. Only 21.9% treat AI agents as independent, identity-bearing entities with their own credentials.

Shared API keys are the software equivalent of giving every new hire the same building key and hoping nobody makes a copy. When one agent is compromised, every agent sharing that key is compromised.
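What per-agent identity looks like in practice: each agent holds its own client ID and secret and requests short-lived tokens via the OAuth 2.0 client credentials grant. A minimal sketch in Python; the agent IDs, secret, and scope names are made up for illustration:

```python
from dataclasses import dataclass
import urllib.parse

@dataclass(frozen=True)
class AgentCredentials:
    client_id: str            # unique per agent -- never shared
    client_secret: str
    scopes: tuple[str, ...]   # narrowest set of scopes the agent needs

def token_request_body(creds: AgentCredentials) -> str:
    """Build the form-encoded body for a client-credentials token request."""
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": creds.client_id,
        "client_secret": creds.client_secret,
        "scope": " ".join(creds.scopes),
    })

# Each agent gets its own identity, so revoking one compromised agent
# does not take down (or expose) every other agent in the fleet.
invoice_bot = AgentCredentials("agent-invoice-01", "s3cret", ("invoices.read",))
```

Because each token is scoped to a single client ID, rotating or revoking one agent's credentials is a one-line operation at the authorization server rather than a fleet-wide key rollover.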

Related: AI Agent Identity: Why Every Agent Needs IAM Before Touching Production

What the EU AI Act Demands (And the Deadline Is August 2026)

The EU AI Act enters full force on August 2, 2026. Article 49 requires providers of high-risk AI systems to register them in the EU database before deployment. Deployers in the public sector must register as well.

This is, in practical terms, a mandatory AI system inventory requirement. Organizations that cannot answer “how many AI agents do we run and what do they do?” are already non-compliant by design.

The penalty structure is steep: up to EUR 35 million or 7% of global annual turnover for the most severe violations. Even mid-tier non-compliance fines reach EUR 15 million or 3% of turnover.

Beyond Registration: The Traceability Requirement

Registration is the floor, not the ceiling. The EU AI Act also requires risk management systems (Article 9), technical documentation (Article 11), and human oversight mechanisms (Article 14) for high-risk systems. An agent that processes job applications, evaluates creditworthiness, or interacts with critical infrastructure falls under these rules.

If you cannot trace what an agent did, why it did it, and who authorized it to act, you do not have compliance. You have a gap waiting to be audited.

Related: EU AI Act 2026: What Companies Need to Do Before August

Building an Agent Governance Framework That Scales

The governance tooling market is forming in real time. Organizations that treat "AI agent inventory" as seriously as they treated cloud asset management five years ago will be the ones that avoid the lawsuits and fines IDC predicts.

The Agent Registry: Know What You Have

Every governance framework starts with a registry. You cannot govern what you cannot see. Microsoft’s Agent 365 provides a centralized agent registry through Microsoft Entra, including plans to surface shadow agents that were deployed without IT’s knowledge.

A minimal registry captures five attributes per agent: identity (who or what created it), purpose (what it does), permissions (what it can access), lineage (what other agents it can spawn), and owner (who is responsible when it breaks).
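Those five attributes map naturally to a record type. A minimal sketch, assuming nothing more than an in-memory dictionary; the field names follow the list above, and the example IDs and owner are invented:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    identity: str            # who or what created the agent
    purpose: str             # what it does
    permissions: list[str]   # what it can access
    lineage: list[str]       # other agents it is allowed to spawn
    owner: str               # who is responsible when it breaks

registry: dict[str, AgentRecord] = {}

def register(agent_id: str, record: AgentRecord) -> None:
    """Add an agent to the inventory; duplicate IDs are rejected."""
    if agent_id in registry:
        raise ValueError(f"duplicate agent id: {agent_id}")
    registry[agent_id] = record
```

A real registry would persist these records and sync with an identity provider, but even this shape is enough to answer the basic audit question: who owns which agent, and what can it touch?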

Policy-Driven Access Control

Static permission lists do not work for agents that take different actions depending on context. Gravitee’s AI Agent Management platform applies context-aware guardrails and policy-driven access control in real time, so an agent’s permissions can vary depending on the task, the data involved, and the risk level.

Google’s Vertex AI Agent Builder offers similar governance features, including enhanced tool governance that restricts which external tools an agent can invoke.
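The core idea behind context-aware guardrails can be sketched in a few lines: the scopes an action requires depend on the request context, not on a static access list. This is a toy model, not any vendor's implementation; the action names, scope strings, and PII rule are all illustrative:

```python
def required_scopes(action: str, context: dict) -> set[str]:
    """Compute the scopes an action needs, given its context."""
    needed = {"crm.write"} if action.endswith(".write") else {"crm.read"}
    if context.get("contains_pii"):   # higher-risk data raises the bar
        needed.add("pii.handle")
    return needed

def allowed(agent_scopes: set[str], action: str, context: dict) -> bool:
    """An action is permitted only if the agent holds every required scope."""
    return required_scopes(action, context) <= agent_scopes
```

The same agent, with the same credentials, is allowed to read ordinary records but blocked the moment the payload contains personal data it is not scoped to handle.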

Lifecycle Management: Deploy, Monitor, Retire

Agents are not “deploy and forget” assets. Boomi’s AI Agent Governance framework outlines a lifecycle approach: provisioning (with approval), active monitoring (with alerting), periodic review (with re-certification), and retirement (with credential revocation).

The retirement step is often overlooked. Abandoned agents, still running with valid credentials but serving no current purpose, are the AI equivalent of a forgotten AWS instance with an S3 bucket left open. They accumulate risk silently.
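Catching abandoned agents can be automated against the registry. A minimal sketch: flag any agent whose last re-certification is older than the review window, assuming the registry records a certification timestamp per agent (the 90-day window matches the quarterly cadence suggested below):

```python
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(days=90)  # quarterly re-certification cadence

def stale_agents(last_certified: dict[str, datetime],
                 now: datetime) -> list[str]:
    """Agent IDs whose last re-certification is past the review window.
    These are candidates for credential revocation and deallocation."""
    return sorted(aid for aid, ts in last_certified.items()
                  if now - ts > REVIEW_WINDOW)
```

Wiring this check into a scheduled job turns the retirement step from a good intention into a recurring control.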

A Five-Step Sprawl Containment Playbook

If your organization deploys more than a handful of AI agents, here is a practical starting point:

1. Inventory everything. Run a discovery sweep across your cloud environments, API gateways, and orchestration platforms. Tools like Microsoft Entra, Gravitee, and Apigee can surface agents you did not know existed. Tag each agent with an owner, a purpose, and an expiration date.

2. Kill shared credentials. Migrate every agent from shared API keys to short-lived, scoped tokens. OAuth 2.0 client credentials flow with per-agent client IDs is the minimum. This is not a 2027 project; it is a this-quarter project.

3. Enforce spawn controls. If an agent can create other agents, that capability needs explicit approval and hard limits. Set maximum spawn depth, require parent-agent lineage tracking, and ensure child agents inherit the parent’s governance policies (not just its permissions).

4. Automate compliance checks. Integrate your agent registry with your EU AI Act risk classification. High-risk agents trigger mandatory documentation, human oversight, and registration workflows. Low-risk agents get a lighter touch but still appear in the inventory.

5. Schedule retirement reviews. Every 90 days, review your agent inventory. Any agent that has not been actively used, updated, or re-certified gets its credentials revoked and its resources deallocated. No exceptions.
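The spawn controls in step 3 can be sketched concretely: enforce a hard depth limit, encode lineage in the child's ID, and copy (never widen) the parent's scopes. The depth limit, field names, and ID scheme here are illustrative assumptions, not a standard:

```python
MAX_SPAWN_DEPTH = 2  # hard limit on agent-spawned-agent chains

class SpawnLimitExceeded(RuntimeError):
    pass

def spawn_child(parent: dict) -> dict:
    """Create a child agent record that tracks lineage, inherits the
    parent's governance policy, and can never hold broader scopes."""
    depth = parent["depth"] + 1
    if depth > MAX_SPAWN_DEPTH:
        raise SpawnLimitExceeded(f"{parent['id']} hit depth {MAX_SPAWN_DEPTH}")
    return {
        "id": f"{parent['id']}/gen{depth}",   # lineage encoded in the id
        "depth": depth,
        "policy": parent["policy"],           # inherit governance policy
        "scopes": set(parent["scopes"]),      # copy, never escalate
    }
```

Inheriting the policy object itself, rather than just the permission set, is the point: a child that inherited only permissions could later be re-scoped outside the parent's guardrails.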

Shivanath Devinarayanan, Chief Digital Labor and Technology Officer at Asymbl, frames the urgency well: “All it takes is one question: ‘What are our AI agents actually doing?’ If the CIO can’t answer, they’re done.”

Frequently Asked Questions

What is AI agent sprawl?

AI agent sprawl refers to the uncontrolled proliferation of AI agents across an organization, deployed by different teams without centralized inventory, monitoring, or governance. According to Gravitee’s 2026 report, 47% of enterprise AI agents operate without active oversight, and the average business runs 37 agents. Sprawl creates security blind spots, compliance gaps, and accountability failures.

How many AI agents are operating without oversight?

Gravitee’s State of AI Agent Security 2026 report found approximately 1.5 million AI agents operating without active monitoring or security across U.S. and UK enterprises, out of a total of 3 million deployed agents. Only 14.4% of organizations reported full security and IT approval for all agents going live.

What tools help with AI agent governance?

Several platforms now offer AI agent governance capabilities. Microsoft Agent 365 provides centralized agent registries through Microsoft Entra. Gravitee offers policy-driven access control and real-time monitoring for AI agents. Google’s Vertex AI Agent Builder includes enhanced tool governance features. Boomi and AvePoint provide agent lifecycle management frameworks. The category is new but evolving quickly.

Does the EU AI Act require companies to track their AI agents?

Yes. Article 49 of the EU AI Act requires providers and certain deployers of high-risk AI systems to register them in the EU database before deployment. This effectively creates a mandatory AI system inventory requirement. The full enforcement deadline is August 2, 2026. Penalties for non-compliance reach up to EUR 35 million or 7% of global annual turnover.

What are the risks of ungoverned AI agents?

Ungoverned AI agents create multiple risk categories: unauthorized data access and exposure, compounding errors across multi-agent chains, compliance violations under the EU AI Act, and financial liability. IBM’s 2025 Data Breach Report found that breaches involving shadow AI cost $670,000 more than average. IDC predicts that 20% of the largest 1,000 companies will face lawsuits or CIO terminations by 2030 due to inadequate AI agent controls.

Cover image by Brett Sayles on Pexels