Of the roughly 3 million AI agents deployed across U.S. and UK enterprises, 1.5 million operate without active oversight. Nearly half of all corporate AI agents have no governance, no audit trail, and no clear owner. That is not a hypothetical risk scenario. It is the current state of enterprise AI, and it has a name: the agent identity crisis.

The problem is not that companies forgot about security. It is that AI agents do not fit the categories traditional identity systems were built for. An agent is not a human user. It is not a service account. It is something new: autonomous, ephemeral, capable of spawning sub-agents, and operating at thousands of actions per second. Only 18% of security leaders say they are highly confident their current IAM can manage these identities.

Related: What Are AI Agents? A Practical Guide for Business Leaders

Why Traditional IAM Breaks Down for Agents

Identity and access management was designed around a simple model: a human logs in, gets a role, accesses resources within that role’s permissions. The system assumes the identity is persistent, the access patterns are predictable, and the session has a clear start and end.

AI agents violate every one of those assumptions.

Agents Are Autonomous and Ephemeral

A customer service agent might spin up at 9:01 AM, handle 200 tickets, and terminate at 5:00 PM. A data analysis agent might exist for exactly 47 seconds to run a query and return results. Traditional IAM tracks long-lived accounts with stable access patterns. Agent identities are short-lived by design.

The CSA/Strata survey of 285 security professionals found that 44% of organizations still use static API keys to authenticate their agents. Another 43% rely on username/password combinations. Both approaches assume a persistent identity, and both create exactly the kind of long-lived credential that attackers exploit.

Agents Spawn Sub-Agents

According to Gravitee’s State of AI Agent Security 2026 report, 25.5% of deployed agents can create and task other agents. A research agent tasks a data retrieval agent, which tasks a database query agent, which tasks a formatting agent. Each sub-agent inherits some form of the parent’s permissions, but most IAM systems cannot track this delegation chain.

When something goes wrong, you need to know: which agent did it, who authorized that agent, what permissions it had, and how it got them. Only 28% of organizations can trace agent actions back to their human sponsors across all environments.
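The delegation chain described above can be sketched as a simple walk from any sub-agent back to the human who authorized the root agent. This is an illustrative model, not any vendor's API; all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentIdentity:
    agent_id: str
    created_by: Optional["AgentIdentity"]  # parent agent, or None if top-level
    human_sponsor: Optional[str]           # set only on the root agent

def trace_to_sponsor(agent: AgentIdentity) -> str:
    """Walk the delegation chain up to the human who authorized the root agent."""
    current = agent
    while current.created_by is not None:
        current = current.created_by
    if current.human_sponsor is None:
        raise ValueError(f"orphan agent: {current.agent_id} has no human sponsor")
    return current.human_sponsor

# Example chain: research agent -> retrieval agent -> query agent
root = AgentIdentity("research-agent", None, "alice@example.com")
retrieval = AgentIdentity("retrieval-agent", root, None)
query = AgentIdentity("query-agent", retrieval, None)

print(trace_to_sponsor(query))  # alice@example.com
```

The point of recording `created_by` at spawn time is that the chain can be reconstructed after the fact; an agent with no sponsor anywhere up the chain is exactly the "orphan agent" problem the survey numbers describe.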

Non-Human Identities Are Exploding

The scale is staggering. Non-human identities (NHIs), which include service accounts, API keys, OAuth tokens, and now AI agents, outnumber human identities 144:1 in the average enterprise. That ratio grew 44% in a single year. Each AI agent adds to the count, and each one represents an access point that needs governance.

What Authorization Bypass Looks Like in Practice

The scariest thing about ungoverned agent identities is not data theft by outsiders. It is legitimate employees accidentally gaining access to data they should never see, through agents that have broader permissions than any single human.

The Marketing Agent That Became a Data Leak

The Hacker News documented a case at a technology company with roughly 1,000 employees. The company deployed an organizational AI agent for marketing tasks and gave it broad access to its Databricks environment. When a new hire named John, deliberately given limited permissions, asked the agent to run a churn analysis, the agent returned detailed sensitive customer data that John could never access through his own credentials.

The agent’s broad permissions effectively bypassed the company’s entire access control model. John did not hack anything. He asked a question in plain English, and the agent answered with data it could access but he could not.

CyberArk’s Financial Services Attack Demo

CyberArk Labs demonstrated an attack on a financial services company’s AI agent where researchers embedded malicious prompts in a shipping address field. The agent was tricked into using tools beyond its intended function, accessing invoicing systems and extracting sensitive vendor banking details. The root causes were straightforward: no input filtering, no prompt sanitization, and excessive permissions.

Related: EU AI Act 2026: What Companies Need to Do Before August

88% Had Incidents Last Year

These are not edge cases. Gravitee’s survey of 750 CTOs and VPs found that 88% of organizations reported confirmed or suspected AI agent security incidents in the past year. In healthcare, that figure reached 92.7%. Documented incidents include agents acting on outdated information, leaking confidential data, deleting databases without authorization, and making unauthorized financial decisions.

Gravitee CEO Rory Blundell put it bluntly: “There are now over 3 million AI agents operating within corporations, a workforce larger than the entire global employee count of Walmart. But far too often, these agents are left unchecked.”

The Agent IAM Stack: What Actually Works

Securing AI agent identities requires rethinking three things: how you authenticate agents, how you authorize their actions, and how you trace what they did.

Authentication: Short-Lived Tokens, Not API Keys

The first principle is simple: agents should never hold long-lived credentials. Static API keys sit in config files, get committed to repositories, and persist long after the agent that used them is gone.

The better approach is OAuth 2.0 with short-lived, scoped tokens. Microsoft’s Entra Agent ID, announced at Build 2025, implements this: agents receive Federated Identity Credential (FIC)-based tokens that expire quickly and can only access specific resources. Auth0’s Auth for GenAI takes a developer-focused approach, integrating agent identity into LangChain, LlamaIndex, and Vercel AI workflows.

The key pattern is delegation, not impersonation. An agent should receive a delegated token with limited scopes derived from its human sponsor’s permissions, never the sponsor’s actual credentials.
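One concrete shape for this delegation pattern is OAuth 2.0 Token Exchange (RFC 8693): the agent trades the sponsor's token for a new token with narrower scopes, rather than reusing the sponsor's credentials. The sketch below builds the exchange request body; the scope names and sponsor scope set are illustrative, and the token lifetime is ultimately set by the authorization server (kept short, per the pattern above).

```python
# Hypothetical sponsor permission set, for illustration only.
SPONSOR_SCOPES = {"crm.read", "crm.write", "billing.read", "reports.read"}

def build_token_exchange_request(sponsor_token: str, requested_scopes: set) -> dict:
    # Delegation, not impersonation: the agent may only receive a subset
    # of the scopes its human sponsor actually holds.
    granted = requested_scopes & SPONSOR_SCOPES
    return {
        # Standard RFC 8693 token exchange parameters.
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": sponsor_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": " ".join(sorted(granted)),
    }

req = build_token_exchange_request("eyJ...sponsor", {"crm.read", "admin.all"})
print(req["scope"])  # crm.read -- admin.all was never the sponsor's to delegate
```

The agent ends up holding a token that says "acting on behalf of this sponsor, for these scopes, briefly," which is auditable in a way a copied password never is.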

Authorization: Least Privilege Per Task

Every agent should start with zero standing privileges. It requests access to specific resources for a specific task, gets a short-lived grant, and that grant expires when the task completes.

CyberArk’s approach treats every AI agent as a privileged identity requiring the same controls as a human admin: Just-In-Time (JIT) access provisioning, session recording, and real-time behavior analysis through their CORA AI system. Strata Identity’s AI Identity Gateway provides dynamic on-behalf-of (OBO) token exchange and runtime authorization, so agents get permissions scoped to the exact operation they are performing.

The goal is that if an agent is compromised or goes rogue, the blast radius is limited to whatever it had permission to do in that exact moment, not everything it could theoretically access.
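A minimal sketch of the per-task grant model, assuming nothing about any particular vendor's implementation: a grant names one agent, one resource, a set of actions, and an expiry, and every action is checked against it.

```python
import time
from dataclasses import dataclass
from typing import FrozenSet

@dataclass
class TaskGrant:
    agent_id: str
    resource: str
    actions: FrozenSet[str]
    expires_at: float

def issue_grant(agent_id: str, resource: str, actions: set, ttl_seconds: float) -> TaskGrant:
    # Zero standing privilege: the grant exists only for this task's lifetime.
    return TaskGrant(agent_id, resource, frozenset(actions), time.monotonic() + ttl_seconds)

def is_allowed(grant: TaskGrant, resource: str, action: str) -> bool:
    return (
        grant.resource == resource
        and action in grant.actions
        and time.monotonic() < grant.expires_at
    )

grant = issue_grant("report-agent-7", "db/customers", {"read"}, ttl_seconds=60)
print(is_allowed(grant, "db/customers", "read"))    # True while the grant lives
print(is_allowed(grant, "db/customers", "delete"))  # False: never granted
```

If this agent is hijacked mid-task, the attacker holds a read grant on one table for under a minute, which is the blast-radius limit the paragraph above describes.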

Traceability: Every Action Back to a Human

The EU AI Act requires traceability of AI system actions, and for good reason. When an agent deletes a database table or sends an email to a customer, someone needs to be accountable.

This means logging every agent action with: the agent’s identity, the human sponsor who authorized the agent, the specific permission grant that enabled the action, and the outcome. Only 21% of organizations maintain real-time agent inventories, and even fewer can reconstruct the full delegation chain from human to agent to sub-agent to action.
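The minimum audit record described above can be sketched as a structured log line; field names here are illustrative, not a standard schema.

```python
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, sponsor: str, grant_id: str,
                 action: str, outcome: str) -> str:
    # One record per agent action: who acted, who authorized it,
    # which grant enabled it, and what happened.
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "human_sponsor": sponsor,
        "grant_id": grant_id,
        "action": action,
        "outcome": outcome,
    }, sort_keys=True)

line = audit_record("invoice-agent-3", "bob@example.com",
                    "grant-9f2c", "db.query:invoices", "success")
print(line)
```

Emitting this for every action is what makes reconstruction possible later: join the records on `grant_id` and `agent_id` and the human-to-agent-to-sub-agent chain falls out.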

The Cloud Security Alliance’s Agentic AI IAM Framework proposes using Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), and an Agent Naming Service (ANS) to create an auditable identity chain. The OpenID Foundation published a companion whitepaper addressing the protocol-level challenges.

Related: MCP and A2A: The Protocols Making AI Agents Talk

Who Is Building Agent IAM

The vendor landscape is splitting into two camps: established IAM players adding agent capabilities, and startups building agent-first identity platforms.

Established players: Microsoft (Entra Agent ID), Okta/Auth0 (Auth for GenAI), CyberArk (CORA AI + privileged access for agents), and Ping Identity (delegation-based agent access). These companies have the enterprise relationships and can embed agent identity into existing IAM stacks.

Agent-first startups: Strata Identity (AI Identity Gateway), Frontegg (purpose-built agent IAM), Aembit (workload identity for agents), Astrix Security (NHI discovery and governance), Entro Security (agentic AI NHI security), and Token Security (NHI for the agentic era). These companies are building from scratch for the agent use case.

The M&A signal is strong: Palo Alto Networks is acquiring CyberArk for approximately $25 billion, expected to close in 2026. Identity security for autonomous agents is clearly where the money is going.

A Five-Step Agent Identity Roadmap

Based on the CSA framework and CyberArk’s four-pillar model, here is what a practical implementation looks like.

Step 1: Discover and inventory. Find every AI agent in your environment, who deployed it, what it accesses, and whether it can spawn sub-agents. Only 21% of organizations do this today.

Step 2: Assign ownership. Every agent needs a human sponsor accountable for its actions. No orphan agents. If nobody owns it, it does not run.

Step 3: Replace static credentials. Migrate from API keys and shared service accounts to OAuth 2.0/OIDC-based ephemeral tokens. Microsoft Entra Agent ID, Auth0, or Curity all provide implementation paths.

Step 4: Enforce least privilege per task. Implement JIT access provisioning so agents request and receive scoped permissions only when they need them, not standing access.

Step 5: Monitor and respond. Deploy Identity Threat Detection and Response (ITDR) for agent identities. CyberArk, Strata, and Okta all offer agent-aware monitoring. Set alert thresholds for anomalous behavior: unusual access patterns, privilege escalation, and data volume spikes.
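As a toy illustration of one signal from Step 5, a data volume spike can be flagged against a per-agent baseline. This is a deliberately simple statistical sketch, not a substitute for an ITDR product; the numbers are invented.

```python
from statistics import mean, stdev

def is_volume_anomaly(baseline_mb: list, current_mb: float, sigma: float = 3.0) -> bool:
    # Flag a reading more than `sigma` standard deviations above
    # this agent's historical per-task data volume.
    mu, sd = mean(baseline_mb), stdev(baseline_mb)
    return current_mb > mu + sigma * sd

history = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2]  # MB moved per task, typical
print(is_volume_anomaly(history, 12.5))   # False: within normal range
print(is_volume_anomaly(history, 480.0))  # True: worth an alert
```

Real deployments would combine several such signals (access patterns, privilege requests, volume) and tie alerts back to the audit trail from Step 1's inventory.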

The 40% of organizations currently increasing their identity/security budgets for AI agent risks are making the right call. The other 60% will catch up, likely after an incident.

Frequently Asked Questions

How do AI agents authenticate with enterprise systems?

The recommended approach is OAuth 2.0 with short-lived, scoped tokens rather than static API keys or passwords. Microsoft Entra Agent ID uses Federated Identity Credentials for token exchange. Auth0’s Auth for GenAI integrates agent identity into frameworks like LangChain and Vercel AI. The key principle is delegation: agents receive limited tokens derived from their human sponsor’s permissions, never the sponsor’s actual credentials.

What is the difference between human identity and AI agent identity?

Human identities are persistent, long-lived, and follow predictable access patterns. AI agent identities are ephemeral, autonomous, and capable of spawning sub-agents. Agents operate at machine speed and can execute thousands of actions per second. Traditional IAM tracks stable user accounts; agent IAM must handle dynamic, short-lived identities with delegation chains that can be traced back to a human sponsor.

Why can’t traditional IAM handle AI agents?

Traditional IAM assumes persistent identities with predictable access patterns and clear session boundaries. AI agents violate these assumptions: they are autonomous, ephemeral, can spawn sub-agents, and operate at machine speed. A CSA/Strata survey found 44% of organizations still use static API keys for agent authentication and only 18% of security leaders are confident their IAM can handle agent identities.

What happens when an AI agent has more permissions than its user?

Authorization bypass occurs. The Hacker News documented a case where a marketing AI agent with broad Databricks access returned sensitive customer data to a new employee who had intentionally limited permissions. The employee did not hack anything; the agent simply had broader access than the user and fulfilled the request using its own credentials. This is why agents must use delegated, scoped tokens rather than broad service account access.

How should companies prepare for AI agent identity management?

Start with five steps: (1) Discover and inventory all AI agents in your environment, (2) Assign a human sponsor to every agent, (3) Replace static API keys with OAuth 2.0/OIDC-based ephemeral tokens, (4) Enforce least privilege per task with Just-In-Time access, and (5) Deploy Identity Threat Detection and Response (ITDR) monitoring for agent identities. The Cloud Security Alliance’s Agentic AI IAM Framework provides a comprehensive reference architecture.

Cover image by Towfiqu barbhuiya on Unsplash