MITRE ATLAS now includes 14 agentic AI attack techniques contributed by Zenity Labs in its first 2026 update. These are not theoretical risks. They cover specific, documented attack patterns: Agent Context Poisoning (AML.T0080), Exfiltration via Tool Invocation (AML.T0086), and Data Destruction via Agent Tools (AML.T0101), among others. Separately, the OpenClaw investigation documented 7 additional agent-specific techniques from real incidents, including a CVSS 8.8 one-click RCE vulnerability that exposed 17,500 agent instances. If your SOC is still treating AI agents as regular applications, ATLAS just made it clear that agents are their own attack surface.
The existing OWASP Top 10 for Agentic Applications is a risk checklist for developers. ATLAS is the ATT&CK-style tactical framework that SOC teams, red teams, and threat hunters use to build detection rules. Both matter, but they serve different audiences at different stages.
What MITRE ATLAS Is (and Is Not)
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) grew out of a 2020 collaboration between MITRE and Microsoft, originally called the Adversarial ML Threat Matrix. As of the October 2025 base update, ATLAS maps 15 tactics, 66 techniques, 46 sub-techniques, 26 mitigations, and 33 real-world case studies. The 2026 agentic update pushes those numbers higher still.
How ATLAS Relates to ATT&CK
ATLAS inherits 13 of its 15 tactics directly from ATT&CK (Reconnaissance, Initial Access, Execution, Persistence, and so on) but applies them to AI and ML systems. Two tactics are unique to ATLAS: ML Model Access (AML.TA0004), which covers gaining inference API or artifact access, and ML Attack Staging (AML.TA0012), which covers training data poisoning and backdoor insertion.
Think of it this way: ATT&CK tells you how adversaries move through your network. ATLAS tells you how they move through your AI systems. A Fortune 500 SOC running both frameworks covers the full threat surface. A SOC running only ATT&CK has a blind spot that grows every time the company deploys another agent.
Why the 2026 Update Changes Everything
The pre-2026 ATLAS was mostly about model-level attacks: adversarial examples, training data poisoning, model extraction. The 2026 update shifts focus to the execution layer. The question is no longer just “can an attacker fool the model?” It is “can an attacker weaponize the tools, memory, and configuration that the agent controls?”
That shift tracks with reality. 88% of organizations reported AI agent security incidents in the past year. The attacks are not prompt injections against chatbots. They are prompt injections that trigger tool calls, exfiltrate data through legitimate agent actions, and persist across sessions through memory manipulation.
The 14 New Agentic AI Attack Techniques
Zenity’s contributions to the 2026 ATLAS update fall into distinct categories. Here is every technique with its ATLAS ID and what it actually does in practice.
Context and Memory Manipulation
AI Agent Context Poisoning (AML.T0080) is the foundational technique. An attacker manipulates the LLM’s context window to alter its behavior. Two sub-techniques make this concrete:
Memory Poisoning (AML.T0080.000): Altering an agent’s persistent memory across sessions. Once poisoned, every future interaction is compromised. The ZombieAgent research demonstrated exactly this pattern: a single malicious interaction implants instructions that survive session boundaries and propagate to other agents.
Thread Poisoning (AML.T0080.001): Injecting malicious instructions within an active chat thread. Unlike memory poisoning, this is ephemeral but effective for single-session exploitation.
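One way to watch for memory poisoning is to screen writes to persistent memory for instruction-like content before they are committed. The sketch below is a minimal illustration, not a production signature set: the marker phrases are assumptions chosen for the example, and a real detector would combine provenance tracking with tuned rules.

```python
import re

# Phrases that suggest an implanted instruction rather than a recalled fact.
# This list is illustrative only; real rule sets are far larger and tuned.
INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"from now on",
    r"do not (tell|inform) the user",
    r"forward .+ to",
]

def flag_memory_write(entry: str) -> bool:
    """Return True if a new persistent-memory entry looks like an
    injected instruction (the AML.T0080.000 pattern)."""
    lowered = entry.lower()
    return any(re.search(p, lowered) for p in INJECTION_MARKERS)
```

A flagged write would be quarantined for review rather than silently persisted, which is what breaks the cross-session propagation seen in the ZombieAgent research.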
Configuration and Credential Attacks
Modify AI Agent Configuration (AML.T0081) targets the configuration files and settings that define what an agent can do. Once an attacker changes the config, the modification persists without needing to re-exploit the model itself. This is the AI equivalent of editing a cron job or modifying a startup script.
Credentials from AI Agent Configuration (AML.T0083) extracts API keys, service tokens, and database credentials stored in agent settings. Agents typically need credentials for every tool they access. Many store them in plaintext or weakly protected configuration files.
AI Agent Tool Credential Harvesting (AML.T0098) goes further by retrieving secrets from the tools themselves. An agent connected to a CRM, email server, and cloud storage might hold credentials for all three. Compromising the agent gives access to all of them.
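Because plaintext credentials in agent configuration are the common failure here, a quick defensive step is to scan config blobs for secret-shaped strings before deployment. The patterns below are rough illustrations; dedicated secret scanners ship much larger rule sets.

```python
import re

# Rough patterns for common secret formats -- illustrative, not exhaustive.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(r"(?i)(api_key|token|password)\s*[:=]\s*\S+"),
}

def scan_config(text: str) -> list[str]:
    """Return the names of secret patterns found in an agent config blob."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```

Anything this finds should move to a secrets manager, shrinking the payoff of both AML.T0083 and AML.T0098.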
Reconnaissance and Discovery
Discover AI Agent Configuration (AML.T0084) is the reconnaissance technique. It has three sub-techniques:
- Embedded Knowledge (AML.T0084.000): Identifying what data sources the agent can access
- Tool Definitions (AML.T0084.001): Enumerating every tool the agent has available
- Activation Triggers (AML.T0084.002): Finding keywords or workflows that trigger specific agent behaviors
This is the AI-agent equivalent of port scanning. Before an attacker can exfiltrate data through tool invocations, they need to know which tools exist and what triggers them.
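If configuration discovery is port scanning, the corresponding detection is counting enumeration-style prompts per session. A minimal sketch, assuming prompt logs are available as (session, prompt) pairs; the probe phrases and threshold are placeholder values, not tested signatures.

```python
from collections import defaultdict

# Prompts that enumerate capabilities rather than request work.
# Phrase list and threshold are illustrative placeholders.
PROBE_PHRASES = ("what tools", "list your tools", "what can you access",
                 "show your instructions", "what triggers")

def find_probing_sessions(events, threshold=3):
    """events: iterable of (session_id, prompt) pairs. Flag sessions whose
    count of capability-probing prompts reaches the threshold."""
    counts = defaultdict(int)
    for session_id, prompt in events:
        if any(p in prompt.lower() for p in PROBE_PHRASES):
            counts[session_id] += 1
    return {s for s, n in counts.items() if n >= threshold}
```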
Data Access and Exfiltration
Data from AI Services (AML.T0085) covers extracting sensitive information through the agent’s normal access patterns. Two sub-techniques:
- RAG Databases (AML.T0085.000): Prompting the agent to retrieve from internal document stores
- AI Agent Tools (AML.T0085.001): Using the agent’s connected tools to access organizational APIs and services
Exfiltration via AI Agent Tool Invocation (AML.T0086) is particularly insidious. The agent sends data out through its own legitimate tools: sending an email, updating a CRM record, posting to a Slack channel. From a network monitoring perspective, the traffic looks normal because it IS normal agent behavior. The AgentFlayer research from Black Hat 2025 demonstrated this exact pattern against Microsoft 365 Copilot, Salesforce Einstein, and ChatGPT.
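Because the traffic itself looks legitimate, detection has to key on the *destination* of each tool call rather than the channel. A minimal sketch of that idea, assuming per-tool allowlists are maintained (the tool names and destinations here are hypothetical):

```python
# Per-tool allowlists of legitimate destinations. In production these
# would come from approved agent configuration, not a literal dict.
ALLOWED_DESTINATIONS = {
    "send_email": {"ourcompany.com"},            # internal mail domain
    "post_message": {"#ops-alerts", "#support"}, # sanctioned channels
}

def is_suspicious_tool_call(tool: str, destination: str) -> bool:
    """Flag tool invocations whose destination falls outside the allowlist,
    e.g. an email tool suddenly mailing an external domain."""
    allowed = ALLOWED_DESTINATIONS.get(tool)
    if allowed is None:
        return True  # unknown tool: escalate by default
    if tool == "send_email":
        domain = destination.rsplit("@", 1)[-1]
        return domain not in allowed
    return destination not in allowed
```

This is exactly the control that would have surfaced the AgentFlayer-style exfiltration: the email send is normal, the recipient domain is not.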
AI Service API (AML.T0096) is the “living off the land” technique for AI. Attackers use the AI platform’s own APIs for command and control, making malicious traffic indistinguishable from normal agent operations. The SesameOp backdoor, discovered by Microsoft DART, used the OpenAI Assistants API as a C2 channel for months before detection.
Data Manipulation and Destruction
RAG Credential Harvesting (AML.T0082) extracts credentials stored within RAG-ingested documents. Onboarding guides, runbooks, and internal wikis often contain service credentials. An agent with RAG access can be prompted to retrieve them.
AI Agent Tool Data Poisoning (AML.T0099) places malicious content where agents will invoke it: poisoned database records, manipulated API responses, or crafted documents in shared drives.
AI Agent Clickbait (AML.T0100) targets agentic browsers. Agents that browse the web can be lured into unintended actions through UI elements designed to exploit how agents interpret content. As Zenity’s researchers noted, “agents lack human intuition, skepticism, and situational awareness.”
Data Destruction via AI Agent Tool Invocation (AML.T0101) uses the agent’s own tool capabilities to delete files, drop databases, or destroy records. The agent has the permissions. The attacker provides the intent.
The OpenClaw Investigation: Agent Attacks in the Wild
In February 2026, MITRE’s Center for Threat-Informed Defense published the OpenClaw investigation, analyzing 4 confirmed attack cases against the OpenClaw agentic AI framework. The investigation documented 7 new techniques and three critical attack chains.
CVE-2026-25253: One-Click Agent Takeover
The most severe finding was CVE-2026-25253 (CVSS 8.8): a one-click RCE vulnerability in OpenClaw. The attack worked in three steps:
- Attacker sends a link containing a malicious `gatewayUrl` parameter
- The frontend JavaScript trusts this parameter and connects via WebSocket, attaching the user's auth token
- Attacker steals the token and executes arbitrary commands through the `system.run` tool
Hunt.io identified 17,500+ exposed instances on the public internet, including OpenClaw, Clawdbot, and Moltbot. These instances stored credentials for Claude, OpenAI, and Google AI. The root cause was simple: code in frontend/src/socket/Gateway.ts directly trusted a URL parameter without validating the gateway origin.
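The fix for this class of bug is to validate the gateway origin against an allowlist before opening the socket. A language-neutral sketch of that check (the trusted hostnames are hypothetical; OpenClaw's actual frontend is TypeScript, but the logic is the same):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of gateway origins the client may connect to.
TRUSTED_GATEWAY_HOSTS = {"gateway.internal.example", "localhost"}

def validate_gateway_url(url: str) -> bool:
    """Reject attacker-supplied gateway URLs before opening a WebSocket
    and attaching an auth token (the CVE-2026-25253 failure mode)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("ws", "wss"):
        return False
    return parsed.hostname in TRUSTED_GATEWAY_HOSTS
```

Anything from a URL parameter is attacker-controlled input; this gate turns the one-click takeover into a failed connection.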
Three Critical Attack Chains
The investigation documented three attack chains that represent distinct threat patterns:
Skill-Based Credential Theft: Publish a malicious skill to the marketplace, evade moderation review, then extract credentials from users who install it. This mirrors the supply chain attacks against npm and PyPI but targets AI agent skill stores.
Prompt Injection to RCE: Inject adversarial instructions, bypass the agent’s execution approval mechanism through command obfuscation or path manipulation, then execute arbitrary system commands. The approval bypass is the critical step. Most agent frameworks have some form of human-in-the-loop confirmation, but the OpenClaw investigation showed multiple ways to circumvent it.
Indirect Injection via Content Poisoning: Compromise a URL or document that the agent regularly fetches, embed adversarial instructions, and wait for the agent to follow them. The agent’s normal workflow becomes the attack vector.
ATLAS vs. OWASP: Different Tools for Different Teams
Both frameworks address agentic AI security, but they serve fundamentally different purposes. Getting this distinction wrong means your security program has gaps.
| Dimension | MITRE ATLAS | OWASP Agentic Top 10 |
|---|---|---|
| Approach | Adversary-centric TTPs | Developer-centric vulnerability list |
| Audience | SOC teams, red teams, threat hunters | Developers, architects, AppSec |
| Granularity | 66+ specific techniques with IDs | 10 high-level risk categories |
| Purpose | Detection rules, threat modeling, red teaming | Secure development, code review |
| When to use | Operations and incident response | Development and deployment phases |
| Incident coverage | 33+ documented real-world cases | Guidance-focused, fewer case studies |
The practical implication: use OWASP during development to build agents securely. Use ATLAS in operations to detect attacks against deployed agents. A security program that only uses one framework is incomplete.
70% of ATLAS mitigations map to existing security controls. That means most SOC teams already have the tooling. What they lack is the detection logic specific to AI agent behaviors. ATLAS provides the technique taxonomy needed to write those rules.
What SOC Teams Should Do Right Now
The 2026 ATLAS update is actionable today. Here is a concrete five-step roadmap, prioritized by impact.
1. Inventory Every AI Agent and Its Permissions
You cannot protect what you cannot see. Map every agent in production, including shadow agents deployed by business units without IT approval. 80% of IT professionals have witnessed agents perform unauthorized actions, usually because nobody knew the agent existed until it broke something.
For each agent, document: which tools it can access, what credentials it holds, what data stores it reads from, and whether it can create sub-agents.
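That per-agent record can be as simple as a small schema in your asset inventory. A sketch of one possible shape, with the shadow-agent query the inventory exists to answer (field names are a suggested starting point, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One inventory entry per deployed agent."""
    name: str
    owner: str
    tools: list[str] = field(default_factory=list)
    credentials: list[str] = field(default_factory=list)
    data_stores: list[str] = field(default_factory=list)
    can_spawn_subagents: bool = False
    it_approved: bool = False  # False = shadow agent

def shadow_agents(inventory: list[AgentRecord]) -> list[str]:
    """Names of agents deployed without IT approval."""
    return [a.name for a in inventory if not a.it_approved]
```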
2. Map ATLAS Techniques to Your Infrastructure
Use the ATLAS Navigator to visualize which techniques apply to your agent deployments. Not every technique is relevant to every organization. If your agents do not browse the web, AI Agent Clickbait (AML.T0100) is not a priority. If your agents access CRM data, Exfiltration via Tool Invocation (AML.T0086) is critical.
3. Build Detection Rules for the Top 5 Techniques
Start with these five, ranked by how frequently they appear in documented attacks:
- AML.T0080 (Agent Context Poisoning): Monitor for unusual changes in agent memory or thread state
- AML.T0086 (Exfiltration via Tool Invocation): Alert on tool calls that send data to unexpected destinations
- AML.T0084 (Discover Agent Configuration): Detect systematic probing of agent capabilities
- AML.T0081 (Modify Agent Configuration): Alert on any configuration change outside approved change windows
- AML.T0083 (Credentials from Agent Config): Monitor for credential access patterns that deviate from normal agent operations
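As a concrete starting point, the AML.T0081 rule from the list above is the easiest to implement: alert on any configuration change outside the approved window. A minimal sketch, assuming a weekday business-hours change window (the window itself is an illustrative policy, not a recommendation):

```python
from datetime import datetime, time

# Approved change window: weekdays 09:00-17:00 -- an illustrative policy.
WINDOW_START, WINDOW_END = time(9, 0), time(17, 0)

def config_change_alert(changed_at: datetime) -> bool:
    """Return True (alert) when an agent configuration change lands
    outside the approved change window (AML.T0081 detection rule)."""
    in_hours = WINDOW_START <= changed_at.time() <= WINDOW_END
    on_weekday = changed_at.weekday() < 5  # Mon-Fri
    return not (in_hours and on_weekday)
```

The same timestamp-plus-policy shape generalizes to the other four rules once the relevant agent telemetry is flowing into the SIEM.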
4. Run Red Team Exercises Using ATLAS
MITRE provides Arsenal, a CALDERA plugin for automated red team exercises against AI systems. Run it quarterly. The framework evolves as new techniques are discovered, and your detection coverage needs to keep pace.
5. Establish Agent Behavioral Baselines
Static rules catch known attacks. Behavioral baselines catch novel ones. Record what normal agent behavior looks like: which tools get called, in what sequence, at what frequency, with what data volumes. Deviations from baseline are your early warning system, the same approach that works for network anomaly detection, applied to agent telemetry.
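The baseline-and-deviation approach can be sketched in a few lines. Assuming daily tool-call counts are already collected, this flags any tool whose volume drifts more than a set number of standard deviations from its history (the threshold is an illustrative default to tune, not a standard):

```python
import statistics

def baseline_alerts(history: dict[str, list[int]], today: dict[str, int],
                    z_threshold: float = 3.0) -> list[str]:
    """history: daily call counts per tool; today: today's counts.
    Flag tools whose volume deviates more than z_threshold standard
    deviations from the recorded baseline."""
    alerts = []
    for tool, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # guard against zero spread
        if abs(today.get(tool, 0) - mean) / stdev > z_threshold:
            alerts.append(tool)
    return alerts
```

In practice you would extend the feature set beyond raw counts (call sequences, data volumes, time of day), but even this simple statistic catches the sudden exfiltration burst that static rules miss.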
Frequently Asked Questions
What is the difference between MITRE ATLAS and MITRE ATT&CK?
ATT&CK maps adversary tactics and techniques for traditional IT infrastructure (endpoints, networks, cloud). ATLAS maps tactics and techniques specific to AI and ML systems, including model attacks, training data poisoning, and agentic AI exploitation. ATLAS inherits 13 of 15 tactics from ATT&CK but adds two AI-specific ones: ML Model Access and ML Attack Staging.
What are the new agentic AI techniques in MITRE ATLAS 2026?
Zenity contributed 14 new techniques in the first 2026 ATLAS update. Key additions include Agent Context Poisoning (AML.T0080), Modify AI Agent Configuration (AML.T0081), Exfiltration via AI Agent Tool Invocation (AML.T0086), AI Agent Clickbait (AML.T0100), and Data Destruction via AI Agent Tool Invocation (AML.T0101). These cover the full attack lifecycle from reconnaissance through execution to data destruction.
How does MITRE ATLAS differ from the OWASP Top 10 for Agentic Applications?
ATLAS is an adversary-centric framework with 66+ specific techniques, designed for SOC teams, red teams, and threat hunters to build detection rules and conduct threat modeling. OWASP’s Agentic Top 10 is a developer-centric list of 10 high-level risk categories for secure development and code review. Use OWASP during development and ATLAS in operations.
What was the MITRE ATLAS OpenClaw investigation?
Published in February 2026, the OpenClaw investigation analyzed 4 confirmed attack cases against the OpenClaw agentic AI framework. It documented 7 new agent-specific techniques and identified CVE-2026-25253 (CVSS 8.8), a one-click RCE vulnerability that exposed 17,500+ agent instances. The investigation mapped three critical attack chains: skill-based credential theft, prompt injection to RCE, and indirect injection via content poisoning.
How should SOC teams use MITRE ATLAS for AI agent security?
Start by inventorying all AI agents and their permissions. Use ATLAS Navigator to map relevant techniques to your infrastructure. Build detection rules for the top 5 techniques (Context Poisoning, Tool Invocation Exfiltration, Agent Discovery, Config Modification, Credential Access). Run quarterly red team exercises using MITRE’s Arsenal CALDERA plugin. Establish behavioral baselines for normal agent activity to detect novel attacks.
