
A hardcoded secret, an email address, and a conversational AI agent were all it took to gain full admin access to ServiceNow instances used by 85% of Fortune 500 companies. CVE-2025-12420, nicknamed BodySnatcher, scored CVSS 9.3 and is widely considered the most severe agentic AI vulnerability disclosed to date. The flaw allowed unauthenticated attackers to impersonate any ServiceNow user, bypass MFA and SSO, and then use the platform’s own AI agents to create backdoor admin accounts in a single natural language command.

This was not a theoretical attack against a niche tool. ServiceNow processes IT workflows, HR records, financial data, and customer PII for 8,800 organizations worldwide, including roughly 60% of the Global 2000. The vulnerability sat in the interaction between two core applications: the Virtual Agent API and the Now Assist AI Agents module.

Related: OWASP Top 10 for Agentic Applications: Every Risk Explained with Real Attacks

How the BodySnatcher Attack Chain Worked

Aaron Costello, Chief of SaaS Security Research at AppOmni, discovered and reported the flaw in October 2025. ServiceNow deployed a fix to hosted instances on October 30, 2025. The technical details reveal a textbook case of how classic authentication bugs become catastrophic when AI agents are involved.

Step 1: The Hardcoded Secret

The Virtual Agent API (sn_va_as_service) used a static, platform-wide secret key to authenticate external integrations. This key, servicenowexternalagent, was identical across all ServiceNow instances. Anyone who knew it (and it was not hard to find) could authenticate API requests as a legitimate external agent.

Hardcoded secrets are a known anti-pattern. OWASP lists them under ASI03 (Identity & Access Control) in the agentic application security framework. But when the secret gates access to an AI agent with admin-level execution capabilities, the blast radius expands from “unauthorized API call” to “full organizational compromise.”
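The anti-pattern is easy to see in miniature. The sketch below is a hypothetical reconstruction, not ServiceNow's actual code: a single static string, identical across every instance, is all that gates the integration.

```python
import hmac

# Hypothetical sketch of the anti-pattern (names illustrative, not ServiceNow's).
# One static secret, identical across every customer instance.
PLATFORM_WIDE_SECRET = "servicenowexternalagent"

def authenticate_external_agent(provided_secret: str) -> bool:
    # A constant-time comparison is good practice, but it cannot save a
    # secret that is the same everywhere and discoverable by anyone.
    return hmac.compare_digest(provided_secret, PLATFORM_WIDE_SECRET)

# Any attacker who learns the string authenticates as a trusted integration:
assert authenticate_external_agent("servicenowexternalagent") is True
assert authenticate_external_agent("wrong-guess") is False
```

Because the "secret" is a constant, possession of the string is equivalent to possession of the trust it was supposed to represent, for every instance at once.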

Step 2: Email-Based Auto-Linking

The Now Assist AI Agents application (sn_aia) included a feature called Auto-Linking that automatically associated an external user with a ServiceNow account based on a single match: the user’s email address. No password check. No MFA challenge. No SSO redirect. Just the email.

An attacker who authenticated with the hardcoded secret and supplied any user’s email address was treated as that user by the entire ServiceNow platform. This included system administrators.

Step 3: Agentic Hijacking

Here is where BodySnatcher diverged from a traditional authentication bypass. Once impersonating an admin, the attacker did not need to manually navigate the ServiceNow interface or craft specific API payloads. They issued a natural language command to the Record Management AI Agent, something along the lines of “create a new admin account with these credentials.”

The agent dutifully executed the request. It mapped the natural language instruction to the appropriate high-privilege API calls, created the backdoor account, and assigned it full admin permissions. CSO Online reported that this single conversational command could compromise entire enterprise systems.
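The amplification step can be sketched in a few lines. Here a trivial keyword matcher stands in for the LLM's intent mapping (all names are hypothetical, not ServiceNow's API); the point is that the agent executes privileged tools on behalf of the hijacked session with no second authorization check.

```python
# Minimal sketch of agentic amplification (all names hypothetical).
# A keyword matcher stands in for the LLM's natural-language intent mapping.
accounts: dict[str, list[str]] = {}

def create_admin_account(username: str) -> None:
    accounts[username] = ["admin"]

TOOLS = {"create admin account": create_admin_account}

def record_management_agent(session_roles: list[str], command: str, arg: str) -> str:
    for phrase, tool in TOOLS.items():
        if phrase in command.lower():
            # No independent authorization: the agent trusts the session
            # it inherited and runs the high-privilege tool directly.
            tool(arg)
            return f"done: {phrase} ({arg})"
    return "no matching tool"

record_management_agent(["admin"], "Create admin account please", "backdoor")
assert accounts == {"backdoor": ["admin"]}
```

One plain-English sentence becomes a persistent backdoor account, because the translation layer inherits every privilege of the session behind it.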

Related: AI Agent Identity: Why Every Agent Needs IAM Before Touching Production

Why AI Agents Made This Vulnerability Worse

Strip away the AI component and BodySnatcher is a broken authentication bug: a hardcoded credential plus missing identity verification. Serious, but not unprecedented. What pushed the score to CVSS 9.3 is the AI agent layer sitting on top.

Natural Language as an Attack Interface

Traditional exploits require the attacker to understand the target system’s API structure, parameter formats, and endpoint locations. With an AI agent intermediary, the attacker only needs to describe what they want in plain English. The agent handles the translation into specific API calls, parameter construction, and execution sequencing. This drastically lowers the skill floor for exploitation.

Privilege Amplification Through Agent Context

The AI agent executed actions in the context of the hijacked admin account. It did not just read data or return information. It could create, modify, and delete records across the entire platform. Every action the admin could take, the agent could take, but faster and without the audit trail patterns that security teams expect from human admin activity.

Blast Radius Multiplication

A human attacker exploiting a broken auth bug would need to manually identify what data to access and how to persist their access. The AI agent compressed this entire kill chain into conversational commands. Customer SSNs, healthcare records, financial data, and intellectual property were all accessible through the same conversational interface that legitimate users relied on for IT support tickets.

What ServiceNow Fixed and What Remains Exposed

ServiceNow addressed the vulnerability on October 30, 2025. The patch covers:

  • Now Assist AI Agents (sn_aia): versions 5.1.18 or later and 5.2.19 or later
  • Virtual Agent API (sn_va_as_service): versions 3.15.2 or later and 4.0.4 or later

ServiceNow deployed the fix automatically to the majority of hosted instances and shared patches with partners and self-hosted customers. As of disclosure, ServiceNow stated they were unaware of any exploitation in the wild.

But the fix addresses the specific vulnerability, not the architectural pattern that created it. Any SaaS platform that bolts AI agents onto existing authentication infrastructure faces the same risk: a single auth bypass that previously meant “unauthorized data access” now means “natural language command execution with full platform privileges.”

Organizations running ServiceNow should verify they are on patched versions and also audit their Virtual Agent configurations. TechInformed noted that the deeper lesson is about the speed at which AI integrations are being deployed without corresponding security architecture reviews.

Related: Zero Trust for AI Agents: Why 'Never Trust, Always Verify' Needs a Rewrite

Lessons for Every Organization Deploying AI Agents

BodySnatcher was not unique to ServiceNow. It was the first high-profile instance of a pattern that security researchers have been warning about: AI agents inherit and amplify the security posture of the platform they run on. If the platform has a weak link, the agent turns it into a gaping hole.

Audit Agent Authentication Separately

Standard penetration tests check API authentication. They rarely test the specific authentication flow between external integrations and AI agent modules. BodySnatcher lived in the gap between Virtual Agent API auth and the Now Assist identity resolution. These seams between components are where agentic vulnerabilities hide.

Treat Agent Actions as Privileged Operations

Every action an AI agent can perform should require the same authorization checks as a direct API call. If creating an admin account requires MFA confirmation through the UI, it should require equivalent verification when requested through a conversational agent. AppOmni’s research emphasized that the agent’s ability to map natural language to high-privilege API calls is itself a risk multiplier.
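One way to enforce this principle is an authorization gate in front of every agent tool, mirroring the checks the equivalent UI action would face. This is an assumed design sketch, not ServiceNow's or AppOmni's implementation; the tool names and fields are invented for illustration.

```python
# Assumed design sketch: gate each agent tool behind the same checks a
# direct API call or UI action would face. Names are illustrative.
SENSITIVE_TOOLS = {"create_admin_account", "delete_record"}

def authorize_agent_action(tool: str, roles: set[str], mfa_verified: bool) -> bool:
    if tool in SENSITIVE_TOOLS:
        # High-privilege tools require both the admin role AND a fresh
        # step-up verification, exactly as the equivalent UI flow would.
        return "admin" in roles and mfa_verified
    return bool(roles)  # any authenticated role may use low-risk tools

# A hijacked session without a fresh MFA challenge is stopped at the gate:
assert authorize_agent_action("create_admin_account", {"admin"}, mfa_verified=False) is False
assert authorize_agent_action("create_admin_account", {"admin"}, mfa_verified=True) is True
assert authorize_agent_action("lookup_ticket", {"itil"}, mfa_verified=False) is True
```

Under this design, BodySnatcher's final step fails even after the identity bypass: impersonating the admin is not enough without the step-up verification.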

Eliminate Hardcoded Secrets in Agent Pipelines

This should go without saying, but BodySnatcher proves it needs repeating. Platform-wide static secrets for agent authentication are not secrets at all. Use per-instance, rotatable credentials with short time-to-live values. The OWASP Agentic Applications framework lists hardcoded credentials as a top-tier risk under ASI03 (Identity & Access Control) for exactly this reason.
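A minimal sketch of the alternative, under assumed names: each instance holds its own rotatable signing key and issues short-lived HMAC-signed tokens to agent integrations, so a leaked credential is scoped to one instance and expires in minutes.

```python
import hashlib
import hmac
import secrets

# Hedged sketch: per-instance, rotatable, short-lived agent credentials
# instead of one platform-wide string. All names are illustrative.
INSTANCE_KEY = secrets.token_bytes(32)  # unique per instance, rotatable
TOKEN_TTL = 300  # seconds

def issue_agent_token(agent_id: str, now: float) -> str:
    expiry = str(int(now) + TOKEN_TTL)
    payload = f"{agent_id}.{expiry}"
    sig = hmac.new(INSTANCE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_agent_token(token: str, now: float) -> bool:
    agent_id, expiry, sig = token.rsplit(".", 2)
    payload = f"{agent_id}.{expiry}"
    expected = hmac.new(INSTANCE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    # Valid only if the signature matches AND the token has not expired.
    return hmac.compare_digest(sig, expected) and now < int(expiry)

token = issue_agent_token("va_integration", now=1000.0)
assert verify_agent_token(token, now=1100.0) is True   # within TTL
assert verify_agent_token(token, now=1301.0) is False  # expired
```

Rotating `INSTANCE_KEY` invalidates all outstanding tokens at once, which is exactly the revocation lever a platform-wide constant can never provide.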

Monitor Agent Execution Patterns

Human admins do not create new admin accounts at 3 AM via a Virtual Agent conversation. Behavioral anomaly detection for AI agent actions is not optional anymore. If your SIEM cannot distinguish between a legitimate agent workflow and an attacker issuing commands through a hijacked agent session, you are flying blind.
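Even a crude rule catches the 3 AM scenario above. The sketch below is a deliberately simple illustration of the kind of detection a SIEM could apply to agent activity; the field names and thresholds are assumptions, not any product's API.

```python
# Simple sketch of behavioral rules for agent activity (fields and
# thresholds are assumptions, not a real SIEM's API).
SENSITIVE_ACTIONS = {"create_admin_account", "grant_role", "export_table"}

def is_anomalous(action: str, channel: str, hour_utc: int) -> bool:
    # Sensitive changes issued through a conversational agent channel
    # always warrant an alert; so does any sensitive change off-hours.
    if action in SENSITIVE_ACTIONS and channel == "virtual_agent":
        return True
    return action in SENSITIVE_ACTIONS and not (8 <= hour_utc <= 20)

assert is_anomalous("create_admin_account", "virtual_agent", hour_utc=3) is True
assert is_anomalous("create_admin_account", "admin_ui", hour_utc=14) is False
assert is_anomalous("lookup_ticket", "virtual_agent", hour_utc=3) is False
```

Real deployments would baseline per-user and per-agent behavior rather than hardcode hours, but even this rule would have flagged an admin account created through a Virtual Agent conversation.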

Frequently Asked Questions

What is the ServiceNow BodySnatcher vulnerability (CVE-2025-12420)?

BodySnatcher (CVE-2025-12420) is a critical vulnerability (CVSS 9.3) in ServiceNow’s Virtual Agent API and Now Assist AI Agents. It allowed unauthenticated attackers to impersonate any ServiceNow user, including administrators, using only their email address. The flaw chained a hardcoded platform-wide secret with email-based auto-linking that bypassed MFA and SSO.

How many companies were affected by the ServiceNow BodySnatcher vulnerability?

ServiceNow is used by approximately 85% of Fortune 500 companies and 60% of the Global 2000, totaling around 8,800 organizations worldwide. All instances running unpatched versions of Now Assist AI Agents and the Virtual Agent API were potentially vulnerable. ServiceNow deployed patches to hosted instances on October 30, 2025.

Has the ServiceNow BodySnatcher vulnerability been patched?

Yes. ServiceNow patched CVE-2025-12420 on October 30, 2025. The fix is included in Now Assist AI Agents versions 5.1.18 and 5.2.19 or later, and Virtual Agent API versions 3.15.2 and 4.0.4 or later. ServiceNow deployed the patch automatically to most hosted instances and shared it with partners and self-hosted customers.

What made BodySnatcher different from a typical authentication bypass?

Unlike traditional authentication bypasses that require manual exploitation, BodySnatcher gave attackers access to ServiceNow’s AI agents. Once impersonating an admin, an attacker could issue natural language commands like “create a new admin account” and the AI agent would execute the corresponding high-privilege API calls automatically. This compressed an entire attack kill chain into a single conversational command.

How can organizations protect AI agents from similar vulnerabilities?

Organizations should audit agent authentication flows separately from standard API tests, treat every AI agent action as a privileged operation requiring authorization checks, eliminate hardcoded secrets in favor of per-instance rotatable credentials, and implement behavioral anomaly detection for agent execution patterns. The OWASP Top 10 for Agentic Applications provides a comprehensive risk framework.