Photo by Tima Miroshnichenko on Pexels

Ninety-seven percent of security leaders say AI strengthens their defenses. Ninety-two percent are worried about AI agents in their workforce. Both numbers come from the same survey: Darktrace’s State of AI Cybersecurity 2026, which polled 1,540 cybersecurity leaders and practitioners across 14 countries between October and November 2025. That contradiction is not a sign of confusion. It is the most honest reading of where enterprise security stands: AI is simultaneously the best tool in the SOC and the newest threat walking through the front door.

This is not another general-purpose “AI in cybersecurity” overview. We have covered the offensive/defensive arms race in detail. The Darktrace survey adds something different: practitioner sentiment data from 1,540 people who actually run security operations, revealing where confidence ends and anxiety begins.

Related: AI Agents in Cybersecurity: Offense, Defense, and the Arms Race

The Survey at a Glance: Five Numbers That Tell the Story

Darktrace partnered with AimPoint Group to field this survey across the U.S., UK, Germany, Australia, Singapore, Japan, and eight other countries. Respondents included CISOs, security directors, SOC managers, and frontline practitioners. Five findings stand out.

73% say AI-powered threats already have a significant impact. Not “might have” or “will eventually.” Already. Nearly three-quarters of security professionals report that AI-driven attacks are materially affecting their organizations today.

87% say AI is increasing the volume of threats requiring attention. AI does not just make individual attacks more sophisticated. It multiplies the number of incidents that need investigation, creating a throughput problem that human-only teams cannot sustain.

92% are concerned about AI agents in the workforce. This goes beyond security tool concerns. When marketing deploys an AI agent that accesses the CRM, or finance sets up an autonomous invoice processor, security teams see risk they did not authorize and cannot fully monitor.

97% believe AI strengthens their own defensive capabilities. The near-unanimity here is remarkable. Almost every security leader surveyed sees AI as a force multiplier for detection and response.

Only 14% allow AI to take autonomous remediation actions. Despite trusting AI for detection and analysis, the vast majority keep a human in the loop for any actual response. The gap between what security leaders believe AI can do and what they let it do is 83 percentage points wide.

Why These Numbers Contradict Each Other

The contradiction is structural, not irrational. Security teams trust AI to watch. They do not trust AI to act. The 97% confidence in AI defense refers primarily to threat detection, anomaly identification, and alert triage. The 14% autonomous remediation figure reveals that trust evaporates the moment AI moves from observation to intervention.

This mirrors what the Cloud Security Alliance’s Agentic Trust Framework addresses: the gap between granting an AI agent read access (low risk, high value) and granting it write access (high risk, high consequence). Most organizations have settled on a pragmatic middle ground: let AI flag problems at machine speed, but require a human to pull the trigger.
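To make that middle ground concrete, here is a minimal Python sketch of the pattern. The names (Action, approval_queue, the sample actions) are invented for illustration and are not any vendor's API: analysis steps that only read run automatically, while anything that changes the environment waits for a human decision.

```python
# Minimal sketch of a detection-vs-remediation trust boundary.
# All names here are hypothetical illustrations, not a real product's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    execute: Callable[[], None]
    mutates_state: bool  # "write" actions change the environment

approval_queue: list[Action] = []  # reviewed by a human analyst before execution

def handle(action: Action) -> None:
    """Let AI observe and flag at machine speed; gate anything that acts."""
    if action.mutates_state:
        # High consequence: isolating an endpoint, revoking a credential, blocking an IP.
        approval_queue.append(action)
        print(f"[pending approval] {action.name}")
    else:
        # Low risk, high value: enrich, correlate, annotate the alert.
        action.execute()
        print(f"[auto-executed] {action.name}")

handle(Action("enrich alert with threat intel", lambda: None, mutates_state=False))
handle(Action("isolate endpoint WS-1042", lambda: None, mutates_state=True))
```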

Agentic AI as the Expanding Attack Surface

The most forward-looking section of the Darktrace report focuses on AI agents, and the numbers reveal genuine apprehension. 76% of security professionals are worried about integrating AI agents into their organizations. Among security executives specifically, 47% say they are very or extremely concerned about AI agents operating with direct access to sensitive data and critical business processes.

This is not abstract. As Dark Reading reports, nearly half (48%) of respondents believe agentic AI will represent the top attack vector for cybercriminals and nation-state threats by the end of 2026. Three specific properties of AI agents make them uniquely dangerous from a security perspective:

Agents Act with Employee-Level Access

An AI agent processing invoices typically needs access to the same systems a human accounts payable clerk uses: ERP, banking portals, vendor databases. But unlike a human clerk, the agent never logs out, never questions unusual requests (unless explicitly programmed to), and processes transactions at machine speed. If compromised, an agent can exfiltrate data or authorize fraudulent transactions faster than any insider threat in history.

Agents Inherit and Amplify Privileges

The Gravitee State of AI Agent Security report (which we analyzed separately) found that 25.5% of deployed agents can spawn other agents. Each child agent inherits its parent’s permissions, and in many deployments, credentials are shared via static API keys. One compromised agent becomes a chain of compromised agents.
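One common mitigation, sketched below under assumed names (AgentCredential, spawn_child), is to issue every child agent its own revocable token whose scopes can only be a subset of the parent's, instead of passing along a shared static API key:

```python
# Illustrative sketch of scoping a child agent's credentials rather than
# handing it the parent's static key. All names and scopes are invented.
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str
    token: str               # unique per agent, so it can be revoked individually
    scopes: frozenset[str]

def spawn_child(parent: AgentCredential, requested: set[str]) -> AgentCredential:
    """Issue the child its own token with at most the parent's scopes."""
    granted = frozenset(requested) & parent.scopes   # never widen privileges
    return AgentCredential(
        agent_id=f"{parent.agent_id}.child-{secrets.token_hex(4)}",
        token=secrets.token_urlsafe(32),             # not the parent's key
        scopes=granted,
    )

parent = AgentCredential("invoice-agent", secrets.token_urlsafe(32),
                         frozenset({"erp:read", "erp:write", "vendor-db:read"}))
child = spawn_child(parent, {"erp:read", "banking:write"})  # banking scope denied
print(child.scopes)  # frozenset({'erp:read'})
```

Because each token is unique, revoking one compromised child does not mean rotating a key shared by every agent in the chain.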

Agent Behavior Is Hard to Distinguish from Legitimate Activity

When an AI agent accesses a database, the audit log looks identical to a human user performing the same action. Security tools built around behavioral baselines struggle because agent behavior is inherently “normal” by design. The agent was given access to do exactly what it is doing. Detecting when it starts doing something it should not requires understanding intent, not just action.
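A toy illustration of the difference: the check below compares an agent's database query not only against what it is permitted to touch but against what its declared task actually needs. The table names and purpose list are invented for illustration.

```python
# Sketch of checking an agent's action against its declared purpose,
# not just its technical permissions. Names are hypothetical.
ALLOWED_TABLES = {"invoices", "vendors", "purchase_orders"}   # what it may access
DECLARED_PURPOSE = {"invoices", "purchase_orders"}            # what its task needs

def review_query(agent_id: str, table: str) -> str:
    if table not in ALLOWED_TABLES:
        return f"DENY: {agent_id} has no permission on {table}"
    if table not in DECLARED_PURPOSE:
        # Technically authorized, but outside the agent's stated job:
        # exactly the activity a permission check alone would call "normal".
        return f"FLAG: {agent_id} touched {table}, outside declared purpose"
    return f"OK: {agent_id} read {table}"

print(review_query("invoice-agent", "vendors"))  # FLAG: ...
```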

Related: Zero Trust for AI Agents: Why 'Never Trust, Always Verify' Needs a Rewrite

The SOC Reality: Generative AI Everywhere, Autonomy Nowhere

Generative AI is now embedded in 77% of security stacks, according to the Darktrace survey. That adoption rate is extraordinary for a technology category that barely existed three years ago. But how that AI is being used reveals a deliberate constraint.

What AI Actually Does in the SOC Today

The vast majority of AI deployments in security operations fall into three categories:

Alert triage and prioritization. AI reviews thousands of alerts, filters noise, and surfaces the 2-3% that require human investigation. This is the highest-ROI use case because it directly addresses the throughput problem: 87% of respondents say AI is increasing threat volume, and human analysts cannot scale to match.

Threat intelligence synthesis. AI agents correlate indicators of compromise across multiple feeds, map attack patterns to MITRE ATT&CK frameworks, and generate briefings that would take a human analyst hours. This frees senior analysts to focus on response strategy rather than data aggregation.

Investigation assistance. AI tools like CrowdStrike’s Charlotte AI or Microsoft Security Copilot can answer natural language queries about an organization’s security posture, generate detection rules, and walk junior analysts through incident timelines.
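To make the alert-triage pattern above concrete, here is a deliberately simplified scoring pass. The weights and fields are invented, and production systems use trained models rather than hand-set rules, but the shape is the same: rank everything, surface the few percent a human should actually see.

```python
# A toy triage pass over raw alerts: score, rank, keep the small fraction
# that warrants human investigation. Weights and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    source: str
    severity: int           # 1-10 from the originating tool
    asset_criticality: int  # 1-10 from the asset inventory
    seen_before: bool       # matches a known-benign pattern

def triage_score(a: Alert) -> float:
    score = a.severity * 0.5 + a.asset_criticality * 0.5
    if a.seen_before:
        score *= 0.2        # heavily discount known noise
    return score

def surface(alerts: list[Alert], top_fraction: float = 0.03) -> list[Alert]:
    ranked = sorted(alerts, key=triage_score, reverse=True)
    keep = max(1, int(len(ranked) * top_fraction))
    return ranked[:keep]    # the ~3% a human actually investigates

alerts = [
    Alert("a1", "edr", severity=9, asset_criticality=8, seen_before=False),
    Alert("a2", "email", severity=4, asset_criticality=3, seen_before=True),
    Alert("a3", "siem", severity=6, asset_criticality=2, seen_before=True),
]
print([a.id for a in surface(alerts)])  # ['a1']
```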

What AI Is Not Doing

The flip side of the 14% autonomous remediation figure: 86% of organizations require human approval before AI can isolate a compromised endpoint, block a suspicious IP, revoke a credential, or quarantine a file. Every one of those actions introduces latency. As we covered in our cybersecurity arms race analysis, attackers using AI agents compressed a full ransomware campaign from 9 days to 25 minutes. Defenders who need a human to approve every response action cannot match that speed.

The finding that 85% of respondents prefer MSSPs for SOC services reflects this tension. Organizations want the benefits of 24/7 AI-powered monitoring without the risk of giving AI autonomous authority. Outsourcing to an MSSP is, in effect, buying human oversight as a service while still benefiting from AI-driven detection.

What Security Leaders Get Wrong About the Threat

The Darktrace report surfaces a knowledge gap that compounds the strategy gap. Some security professionals lack a nuanced understanding of the types of AI used in their security stack. They know they use “AI” but cannot distinguish between rule-based detection, machine learning anomaly models, and large language model integrations. Without that understanding, they cannot evaluate vendor claims, architect appropriate guardrails, or make informed decisions about where autonomous action is safe.

This matters because the AI threat landscape is not monolithic. A phishing email generated by an LLM is qualitatively different from an autonomous agent conducting multi-step lateral movement. The defenses required are different, the risk calculus is different, and the monitoring approach is different. Lumping all “AI threats” into a single category leads to either over-investment in detection (catching AI-generated phishing with AI-powered email filters) or under-investment in the harder problems (monitoring AI agent behavior at the identity and authorization layer).

The Governance Gap, Quantified

The Darktrace data aligns with what Deloitte’s State of AI report found: only 21% of enterprises have AI governance in place, even as 74% plan to deploy agentic AI within two years. The Gravitee survey puts it more starkly: 47% of deployed AI agents operate outside any governance framework, and 45.6% of organizations still use shared API keys for agent-to-agent authentication.

Three surveys, three different methodologies, one consistent finding: organizations are deploying AI agents faster than they are governing them. The Darktrace report’s 92% concern figure is high, but concern without corresponding action is just anxiety.

Related: AI Agent Sprawl: Why Half Your Agents Have No Oversight

What to Do With This Data

The Darktrace survey is valuable precisely because it captures practitioner reality rather than vendor aspiration. Here is what the data actually supports:

Start with identity. 92% of respondents worry about AI agents, but the underlying problem is identity governance. Every AI agent needs its own identity, its own scoped credentials, and its own audit trail. Shared API keys and inherited human credentials are the root cause of most agent-related security failures.

Automate detection, gate remediation. The 97%/14% split is defensible for most organizations at this maturity level. Let AI handle alert triage and investigation, but require human approval for high-impact remediation actions. As confidence builds and agent behavior becomes more predictable, gradually extend autonomous authority using a progressive autonomy model.

Know your AI stack. If your security team cannot explain the difference between the ML models in your SIEM and the LLM in your SOC assistant, they cannot evaluate whether a new agentic AI threat requires a new defense. Invest in AI literacy for security teams, not just AI tools.

Monitor your own agents, not just attackers’. The 76% who worry about AI agent integration often focus on external threats. The bigger near-term risk is your own agents: the marketing bot with CRM access, the coding assistant with repo write permissions, the HR agent processing personal data. Start an agent inventory before the next audit.
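A practical starting point for that inventory is a structured record per agent. The fields below are suggestions rather than a standard schema, but they capture what an auditor will ask for: who owns the agent, what identity it runs as, what it can reach, and when its access was last reviewed.

```python
# Sketch of a minimal agent inventory record; field names and example
# values are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentInventoryEntry:
    name: str                  # "marketing-crm-assistant"
    business_owner: str        # the team accountable for its behavior
    identity: str              # its own service account, never a shared key
    scopes: list[str]          # e.g. ["crm:read", "email:send"]
    data_touched: list[str]    # e.g. ["customer PII"]
    can_spawn_agents: bool
    last_access_review: date

inventory = [
    AgentInventoryEntry("marketing-crm-assistant", "Marketing Ops",
                        "svc-mkt-crm-01", ["crm:read"], ["customer PII"],
                        can_spawn_agents=False,
                        last_access_review=date(2025, 11, 1)),
]
```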

Related: CSA Survey: 82% of Enterprises Can't Trust Their IAM for AI Agents

Frequently Asked Questions

What is the Darktrace State of AI Cybersecurity 2026 report?

The Darktrace State of AI Cybersecurity 2026 is a survey-based report conducted in partnership with AimPoint Group. It polled 1,540 cybersecurity leaders and practitioners across 14 countries between October and November 2025, covering topics including AI-powered threats, agentic AI risk, SOC automation, and security tool adoption.

What percentage of security leaders are concerned about AI agents?

According to the Darktrace survey, 92% of security leaders are concerned about the security implications of AI agents in the workforce. More specifically, 76% are worried about integrating AI agents into their organization, and 47% of executives say they are very or extremely concerned about agents accessing sensitive data and business processes.

Why do only 14% of organizations allow AI autonomous remediation in the SOC?

Despite 97% of security leaders believing AI strengthens their defenses, only 14% allow AI to take independent remediation actions without human oversight. This gap reflects a trust boundary: organizations trust AI for detection and analysis but not for high-impact response actions like isolating endpoints or revoking credentials, where a false positive could disrupt operations.

How are AI-powered threats impacting organizations in 2026?

73% of security professionals report that AI-powered threats are already having a significant impact on their organization, and 87% say AI is increasing the number of threats requiring attention. The primary concerns include AI-generated phishing at scale, autonomous multi-step attacks, and the expanding attack surface created by enterprise AI agent deployments.

What is the biggest agentic AI security risk for enterprises?

The biggest risk is the combination of employee-level access with machine-speed execution and minimal human oversight. AI agents operate with direct access to sensitive data and business processes, but unlike human employees, they do not question unusual requests, never log out, and can be compromised without triggering behavioral anomaly detection. 48% of respondents believe agentic AI will be the top attack vector by the end of 2026.