For every human identity in your enterprise, there are 144 machine identities: API keys, service accounts, OAuth tokens, CI/CD secrets, and now AI agent credentials. GitGuardian just raised $50 million in Series C funding specifically to secure this exploding attack surface. The round, led by Insight Partners with participation from Quadrille Capital and existing backers Balderton, Eurazeo, and Sapphire Ventures, brings the company’s total funding to $106 million. Their thesis is blunt: AI agents are creating non-human identities faster than any security team can track, and the tooling to manage them does not exist yet.

This is not a speculative play. GitGuardian’s State of Secrets Sprawl 2025 report found 23.8 million new secrets leaked on public GitHub repositories in 2024 alone, a 25% year-over-year increase. Worse, 70% of secrets leaked two years ago are still active today. Now multiply that problem by thousands of autonomous AI agents, each needing its own credentials, and you start to see why investors wrote a $50 million check.

Related: AI Agent Identity: Why Every Agent Needs IAM Before Touching Production

The Non-Human Identity Crisis in Numbers

The ratio of machine identities to human identities has been climbing for years. But the AI agent era turned a linear trend into an exponential one.

Research from the NHI Management Group found a 44% growth in non-human identities between 2024 and 2025, pushing the average enterprise ratio from 92:1 to 144:1. ManageEngine’s 2026 Identity Security Outlook puts some organizations at 500:1. The average enterprise now manages roughly 250,000 machine identities, up from 50,000 in 2021.

Each of those identities represents a potential breach vector. Machine identities are already involved in 68% of IT security incidents. And the security industry is converging on a consensus for 2026: machine identities will become the primary breach vector in cloud environments.

Why AI Agents Make It Worse

Traditional non-human identities (service accounts, CI/CD tokens, database credentials) are at least predictable. They have known owners, defined purposes, and relatively stable access patterns. AI agents break all three assumptions.

A coding assistant might need access to repositories, package registries, and deployment pipelines. A customer service agent touches CRM data, knowledge bases, payment systems, and email. A data analysis agent reads from data warehouses, writes to dashboards, and sometimes creates new database views on the fly. Each capability requires credentials, and each credential is a secret that needs rotation, monitoring, and lifecycle management.

GitGuardian CEO Eric Fourrier captured the shift precisely: “AI isn’t creating new problems, but exposing the ones we’ve been ignoring for years, and accelerating them” to a point where traditional defenses fail. Organizations that once managed hundreds of service accounts now face thousands of autonomous agents, each requiring secure credential management.

The 70% Problem

The most alarming finding from GitGuardian’s secrets sprawl research is not how many secrets leak. It is how few get revoked. 70% of secrets leaked in 2022 were still active when tested in 2024. That means the attack surface only grows. Every new leak adds to the pile, and almost nothing gets cleaned up.

For AI agents, this problem compounds. When a human developer rotates credentials, they update their local environment. When an agent’s credentials leak, the agent might have already embedded them across multiple workflows, sub-agent configurations, and tool connection strings. Revoking a single leaked agent credential can break an entire automated pipeline, which is exactly why teams keep postponing rotation.
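The inventory step this paragraph implies can be sketched in a few lines: before revoking a leaked agent credential, enumerate every configuration file that still embeds a copy, so rotation updates all of them instead of breaking the pipeline. A minimal Python sketch (the suffix list and the demo secret are illustrative, not a real scanner):

```python
import tempfile
from pathlib import Path

# Illustrative set of "config-like" suffixes to search; a real tool
# would cover far more formats (and binary artifacts like images).
CONFIG_SUFFIXES = (".json", ".yaml", ".yml", ".toml", ".env")

def find_embedded_copies(root: Path, secret: str) -> list:
    """Return every config-like file under `root` that still embeds `secret`.
    Rotation is only safe once every file in this list has been updated."""
    hits = []
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in CONFIG_SUFFIXES:
            if secret in path.read_text(errors="ignore"):
                hits.append(path)
    return sorted(hits)

# Demo in a throwaway directory; the secret value is obviously fake.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "agent.json").write_text('{"token": "sk-demo-123"}')
    (root / "pipeline.yaml").write_text("deploy_token: sk-demo-123\n")
    (root / "notes.md").write_text("rotate sk-demo-123\n")  # not config, skipped
    print([p.name for p in find_embedded_copies(root, "sk-demo-123")])
    # → ['agent.json', 'pipeline.yaml']
```

The point of the sketch is the workflow, not the search itself: revocation comes last, after every embedded copy has been found and replaced.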

What GitGuardian Is Building

GitGuardian started as a secrets detection platform, scanning code repositories for hardcoded credentials. The company currently monitors over 610,000 repositories for more than 115,000 active developers, with customers including DigitalOcean, Snowflake, Datadog, BASF, and Euronext.

The $50 million Series C expands that focus in three directions.

AI Agent Credential Security

The platform will detect, monitor, and govern credentials used by AI systems, from coding assistants and customer service bots to autonomous data pipelines. This includes scanning agent configuration files, monitoring secrets embedded in agent tool definitions (including MCP server configurations), and tracking credential usage patterns that indicate over-privileged agents.

The practical challenge here is that AI agent credentials look different from traditional service account keys. An agent might authenticate through OAuth flows, embed API keys in prompt context, or receive credentials dynamically through tool-use protocols. Each pattern requires different detection logic.
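As an illustration of one such detection pattern, here is a minimal Python sketch that walks a JSON agent/MCP-style config and flags string values that match a known key format or look like long, high-entropy random strings. The two patterns, the threshold, and the sample config are illustrative; production scanners ship hundreds of provider-specific detectors:

```python
import json
import math
import re

# Two illustrative detectors based on well-known public key formats;
# real scanners use hundreds of provider-specific patterns.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def shannon_entropy(s: str) -> float:
    """Bits per character; random secrets score high, ordinary text low."""
    freqs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in freqs)

def scan_config(config_text: str, entropy_threshold: float = 3.5) -> list:
    """Walk a JSON agent/tool config and flag string values that match a
    known key format or look like long high-entropy random strings."""
    findings = []

    def walk(node, path):
        if isinstance(node, dict):
            for key, value in node.items():
                walk(value, f"{path}.{key}")
        elif isinstance(node, list):
            for i, value in enumerate(node):
                walk(value, f"{path}[{i}]")
        elif isinstance(node, str):
            for name, pattern in PATTERNS.items():
                if pattern.search(node):
                    findings.append((path, name))
                    return
            if len(node) >= 20 and shannon_entropy(node) > entropy_threshold:
                findings.append((path, "high_entropy_string"))

    walk(json.loads(config_text), "$")
    return findings

# A hypothetical MCP server config with a hardcoded AWS key in its env block
sample = json.dumps({
    "mcpServers": {
        "deploy": {
            "command": "deploy-tool",
            "env": {"AWS_ACCESS_KEY_ID": "AKIAIOSFODNN7EXAMPLE"},
        }
    }
})
print(scan_config(sample))
```

Note that this only covers statically embedded secrets; credentials received dynamically through OAuth flows or tool-use protocols never appear in a config file at all, which is why each pattern needs its own detection logic.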

Enterprise NHI Lifecycle Management

For enterprises managing tens of thousands of non-human identities, GitGuardian plans automated discovery, usage analytics, rotation policies, and compliance reporting across the entire development ecosystem. The goal is full lifecycle management: discover every NHI, track what it accesses, flag when it becomes stale or over-privileged, and automate rotation before credentials become a liability.

This directly addresses what the CSA/Strata survey found: 44% of organizations still authenticate their AI agents with static API keys. Static keys are the machine identity equivalent of writing your password on a sticky note. They do not expire, they are easy to copy, and they are the first thing attackers look for after gaining initial access.

Related: Zero Trust for AI Agents: Why 'Never Trust, Always Verify' Needs a Rewrite

Geographic Expansion into DACH and Beyond

In 2025, 80% of GitGuardian’s new revenue came from North America. The Series C funds a push into DACH (Germany, Austria, Switzerland), the UK, the Nordics, APAC, South America, and the Middle East. For European enterprises, this matters because the EU AI Act’s August 2026 enforcement deadline requires documented security controls for high-risk AI systems, and NHI governance is part of that story.

The Competitive Landscape

GitGuardian is not the only company seeing this opportunity. The NHI security market is filling up fast.

Oasis Security focuses on discovering and governing machine identities across cloud environments. Astrix Security (acquired by Palo Alto Networks) targets third-party app-to-app connections and service account oversight. Permiso specifically monitors cloud identity behavior and detects identity-based attacks.

What differentiates GitGuardian is its origin in secrets detection. They have been scanning code for leaked credentials since 2017. That history gives them a large corpus of real-world leaks (23.8 million secrets in 2024 alone) and the detection heuristics to find credentials in contexts other tools miss: Docker images (they found 100,000 valid secrets across 15 million public Docker images), Slack messages, Jira tickets, and Confluence pages.

The question is whether secrets detection is the right entry point for NHI lifecycle management. Detecting leaked credentials is the emergency room of security. Lifecycle management is preventive care. GitGuardian is betting they can do both.

What This Means for AI Agent Builders

If you are building or deploying AI agents, the GitGuardian funding signals something you should already be planning for: agent credentials are becoming a first-class security concern.

Short-term (now): Audit how your agents authenticate. If any agent uses hardcoded API keys, static tokens, or credentials embedded in configuration files, replace them with short-lived, scoped tokens. Use the OAuth 2.0 client credentials flow where possible. Store secrets in a vault (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault), not in code.
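As a sketch of the client-credentials pattern, the following builds an RFC 6749 §4.4 token request and an expiry check with clock skew, using only the Python standard library. The endpoint, client id, and scope are placeholders, and in practice the client secret would be fetched from a vault at runtime rather than appearing in code:

```python
import base64
import time
import urllib.parse
import urllib.request

def build_client_credentials_request(token_url: str, client_id: str,
                                     client_secret: str, scope: str):
    """Build an OAuth 2.0 client-credentials token request (RFC 6749 §4.4).
    The agent holds a client id/secret instead of a long-lived API key and
    trades it for a short-lived, scoped access token at runtime."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "scope": scope,
    }).encode()
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return urllib.request.Request(
        token_url,
        data=body,
        headers={
            "Authorization": f"Basic {basic}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

def token_is_fresh(token: dict, now: float, skew: int = 60) -> bool:
    """Treat tokens as expired slightly early so no request goes out
    with a token that dies mid-flight."""
    return now < token["obtained_at"] + token["expires_in"] - skew

# A token response shaped like what a typical provider returns
token = {"access_token": "…", "expires_in": 3600, "obtained_at": time.time()}
req = build_client_credentials_request(
    "https://auth.example.com/oauth/token",   # hypothetical endpoint
    client_id="agent-42",                     # hypothetical agent identity
    client_secret="from-your-vault",          # fetch from a vault, never hardcode
    scope="crm:read",
)
print(req.get_method(), req.full_url)
```

The design point is that the credential stored at rest (the client secret, ideally in a vault) is never what travels with each API call; the access token that does travel is scoped and short-lived, so a leak has a bounded blast radius.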

Medium-term (Q2 2026): Implement NHI lifecycle management. Every agent credential should have an owner, an expiration policy, and usage monitoring. When an agent is decommissioned, its credentials should be automatically revoked. This is not optional for EU-based companies facing the August 2026 AI Act deadline.

Long-term: Expect agent identity to become a compliance requirement, not just a security best practice. The Singapore Agentic AI Governance Framework already calls for agent-specific identity controls. European regulators are watching closely.

The $50 million question is whether the industry will build proper NHI infrastructure before the next major breach traces back to an over-privileged AI agent with a two-year-old leaked API key. GitGuardian is betting the answer is yes, and that they will be the ones providing it.

Frequently Asked Questions

What are non-human identities (NHIs) and why do they matter for AI agents?

Non-human identities are the credentials and authentication tokens used by machines, services, and AI agents to access systems. This includes API keys, service accounts, OAuth tokens, and CI/CD secrets. They matter because NHIs now outnumber human identities 144:1 in the average enterprise, and 68% of security incidents already involve machine identities. AI agents are creating thousands of new NHIs that most security teams cannot track.

Why did GitGuardian raise $50 million for non-human identity security?

GitGuardian raised $50M in Series C funding because AI agents are creating non-human identities faster than security teams can manage them. Their research found 23.8 million secrets leaked on GitHub in 2024 (up 25% YoY), and 70% of leaked secrets remain active after two years. The funding targets AI agent credential security, enterprise NHI lifecycle management, and geographic expansion into DACH and other markets.

How many non-human identities does the average enterprise have?

The average enterprise manages roughly 250,000 machine identities, up from 50,000 in 2021. The ratio of machine to human identities is 144:1 on average, with some organizations reporting ratios as high as 500:1. The NHI population grew 44% between 2024 and 2025, driven primarily by cloud adoption, automation pipelines, and the proliferation of AI agents.

What should organizations do to secure AI agent credentials?

Organizations should immediately audit how agents authenticate and replace any static API keys or hardcoded credentials with short-lived, scoped tokens using the OAuth 2.0 client credentials flow. Store secrets in dedicated vaults like HashiCorp Vault or AWS Secrets Manager. Implement NHI lifecycle management with automatic discovery, rotation policies, and usage monitoring. Every agent credential should have a defined owner, an expiration policy, and automatic revocation when the agent is decommissioned.

How does AI agent security relate to EU AI Act compliance?

The EU AI Act’s August 2026 enforcement deadline requires documented security controls for high-risk AI systems. NHI governance and agent credential management are part of this compliance picture. Organizations deploying AI agents in the EU must demonstrate proper identity management, access controls, and audit trails for their autonomous systems. The Singapore Agentic AI Governance Framework already calls for agent-specific identity controls, and European regulators are expected to follow suit.