Fifty-nine percent of digital trust professionals expect AI-driven cyber threats to keep them up at night in 2026. Only 13% say their organizations are very prepared to manage the risk. That 46-point gap between awareness and readiness, drawn from ISACA’s survey of nearly 3,000 global professionals, is the single most important number in enterprise security right now.
The problem is not that organizations lack awareness. Everyone knows agentic AI creates new attack surfaces. The problem is that most security teams are still trying to govern autonomous AI systems with controls designed for humans clicking through web apps. That approach has a shelf life measured in months, not years.
What Nearly 3,000 Security Professionals Actually Said
ISACA’s 2026 Tech Trends and Priorities Pulse Poll, fielded between August and September 2025, surveyed professionals across IT audit, risk, governance, and cybersecurity. The responses paint a picture of an industry that can see the threat clearly but cannot move fast enough to counter it.
The top-line findings:
- 63% identify AI-driven social engineering as the most significant cyber threat their organization faces, ahead of ransomware at 54%
- 82% feel only “somewhat,” “not very,” or “not at all” prepared for AI-related risks
- Less than 44% are extremely or very confident their organization could survive a ransomware attack
- 66% view regulatory compliance as very important for 2026
- 79% of IT workers report experiencing burnout
Chris Dimitriadis, ISACA’s Chief Global Strategy Officer, summarized it bluntly: “AI represents both the greatest opportunity and the greatest threat of our time.”
What makes this survey different from the usual vendor-sponsored fear-mongering is the respondent pool. These are not CIOs at a conference checking boxes on a survey card. ISACA’s membership consists of working professionals who implement governance and audit controls daily. When 82% of them say they are not very prepared, that is an operational signal, not a marketing stat.
The Burnout Factor Nobody Discusses
The 79% burnout number deserves its own line item. Splunk’s 2026 CISO Report, which surveyed 650 global CISOs, confirms the trend: 98% cite high alert volumes as a stressor, 94% report false alerts as problematic, and two-thirds of security teams experience moderate to significant burnout. You cannot govern agentic AI with a workforce that is already running on empty.
Agentic AI Creates Threats That Traditional Security Cannot Catch
Traditional cybersecurity assumes a human is at the keyboard. Access controls gate human decisions. Audit logs track human actions. Incident response assumes human-speed attack chains.
Agentic AI breaks all three assumptions simultaneously.
An AI agent can hold valid credentials, operate within approved tools, and still cause catastrophic damage through autonomous decisions that no human reviewed. Gartner’s prediction that 25% of enterprise breaches will trace back to AI agent abuse by 2028 describes a category of breach that current security stacks are not built to detect.
The OWASP Top 10 for Agentic Applications, released December 2025, draws the critical distinction: “The LLM Top 10 focuses on risks from content generation. The Agentic Top 10 addresses far greater risks from autonomous action.” The top risk, Agent Behavior Hijacking (ASI01), describes scenarios where an attacker manipulates an agent into taking actions that are technically within its permissions but serve the attacker’s goals.
Real Incidents, Not Hypotheticals
This is not theoretical. In early 2026, a critical vulnerability (CVE-2026-25253, CVSS 8.8) was discovered in an open-source agent framework with over 135,000 GitHub stars. Security researchers found 820 of 10,700 marketplace skills were malicious, and over 30,000 instances were exposed to the internet. In a separate incident, Arup lost $25 million to a deepfake-powered fraud where attackers populated an entire video call with AI-generated participants.
The economics cut both ways. Organizations using AI and automation extensively in their security operations cut breach costs by $2.2 million on average. Those without it are paying full price: the average global breach now costs $4.44 million, and shadow AI breaches add an extra $670,000 on top.
The Five-Layer Governance Stack for Agentic AI
Governance is not a single document or policy. For agentic AI, effective governance requires controls at five distinct layers. Gustavo Frega, Senior Manager at ISACA, outlined the core principle in a CSO Online piece: governance by design, not governance as a bureaucratic constraint.
Layer 1: Use Case Approval. Define which use cases are approved, what data agents can access, and which actions require human confirmation. Not every task should be autonomous. The 86% of CISOs in Splunk’s survey who fear agentic AI will increase social engineering sophistication are right to insist on human-in-the-loop for high-stakes decisions.
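As a thumbnail of what this looks like in practice, here is a minimal Python sketch of a use case gate. The use case names, action sets, and `ActionRequest` type are hypothetical illustrations, not any particular framework’s API:

```python
from dataclasses import dataclass

# Hypothetical policy table: which use cases are approved, and which
# actions within them are allowed to run without a human.
APPROVED_USE_CASES = {
    "invoice_triage": {"read_invoice", "flag_anomaly"},   # fully autonomous
    "vendor_payment": {"draft_payment"},                  # autonomous draft only
}
REQUIRES_HUMAN = {"approve_payment", "change_bank_details"}  # never autonomous

@dataclass
class ActionRequest:
    use_case: str
    action: str

def gate(request: ActionRequest) -> str:
    """Route a proposed agent action: allow, escalate to a human, or deny."""
    if request.use_case not in APPROVED_USE_CASES:
        return "deny"       # unapproved use case: hard stop
    if request.action in REQUIRES_HUMAN:
        return "escalate"   # queue for human confirmation
    if request.action in APPROVED_USE_CASES[request.use_case]:
        return "allow"
    return "deny"           # default-deny anything unlisted
```

The default-deny at the end is the important design choice: an agent that invents a new action should hit a wall, not a gap in the policy.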
Layer 2: Identity and Access. AI agents need their own identity layer, separate from human IAM. The Cloud Security Alliance found that only 18% of security leaders trust their current IAM systems to handle AI agents. Agents need scoped credentials with automatic expiry, not shared service accounts.
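A minimal sketch of the scoped-credential pattern in Python; the scope strings and the 15-minute default lifetime are illustrative assumptions, not a standard:

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    """Short-lived credential minted per agent task, never shared."""
    agent_id: str
    scopes: frozenset       # e.g. {"crm:read", "tickets:write"}
    expires_at: float
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def mint(agent_id: str, scopes: set[str], ttl_seconds: int = 900) -> AgentCredential:
    # Default 15-minute lifetime; expiry is automatic, not a cleanup job.
    return AgentCredential(agent_id, frozenset(scopes), time.time() + ttl_seconds)

def authorize(cred: AgentCredential, required_scope: str) -> bool:
    """Reject expired tokens and out-of-scope requests alike."""
    if time.time() >= cred.expires_at:
        return False
    return required_scope in cred.scopes
```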
Layer 3: Runtime Monitoring. Traditional logging tracks what happened. Agentic AI governance requires tracking why something happened. Cisco’s new AI Defense platform, announced at Cisco Live EMEA, includes real-time agentic guardrails and intent-aware inspection of agentic messages. Closing the gap between “somewhat prepared” and “very prepared” requires exactly this kind of tooling.
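To make the what-versus-why distinction concrete, here is a sketch of an intent-aware log record in Python. The field names are assumptions for illustration, not Cisco’s schema or any standard format:

```python
import json
import time

def log_agent_decision(agent_id: str, goal: str, trigger: str,
                       tool: str, action: str, outcome: str) -> None:
    """Record why the agent acted, not just what it did."""
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "goal": goal,          # the objective the agent was pursuing
        "trigger": trigger,    # the input or event that prompted this step
        "tool": tool,
        "action": action,
        "outcome": outcome,
    }
    print(json.dumps(record))  # stand-in for a real log pipeline
```

The `goal` and `trigger` fields are what separate this from a conventional access log: they are what an analyst needs to spot an agent whose actions are individually permitted but collectively serve someone else’s objective.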
Layer 4: Kill Switches and Boundaries. OWASP’s guidance is direct: establish “rigid operational boundaries, guardrails, and kill switches” before deploying agentic systems. Every agent needs a hard boundary on what it can spend, what systems it can access, and what data it can exfiltrate. Gartner expects 40% of CIOs will demand Guardian Agents to supervise other AI agents by 2028.
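A minimal sketch of a hard boundary with a kill switch, enforced outside the agent’s reasoning loop; the class, limits, and exception are hypothetical illustrations:

```python
class BoundaryViolation(Exception):
    """Raised when an agent attempts to cross a hard operational limit."""

class AgentBoundary:
    def __init__(self, spend_cap: float, allowed_systems: set[str]):
        self.spend_cap = spend_cap
        self.allowed_systems = allowed_systems
        self.spent = 0.0
        self.killed = False

    def kill(self) -> None:
        self.killed = True  # operator-triggered; halts the agent unconditionally

    def check(self, system: str, cost: float = 0.0) -> None:
        """Call before every agent action; raises instead of trusting the model."""
        if self.killed:
            raise BoundaryViolation("kill switch engaged")
        if system not in self.allowed_systems:
            raise BoundaryViolation(f"system {system!r} outside boundary")
        if self.spent + cost > self.spend_cap:
            raise BoundaryViolation("spend cap exceeded")
        self.spent += cost
```

The enforcement lives outside the model on purpose: a hijacked agent can be talked into anything, but it cannot talk its way past an exception raised by code it does not control.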
Layer 5: Continuous Assessment. Static, upfront risk assessments are insufficient for agentic systems because agent behavior evolves during deployment. Organizations with formal governance policies reduce data leakage incidents by 46%, but only if those policies are continuously validated against actual agent behavior.
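One way to operationalize this is to diff what an agent actually did against what its policy approved, on a schedule rather than once at deployment. A minimal Python sketch, with hypothetical names and a deliberately strict zero-tolerance default:

```python
from collections import Counter

def assess_drift(observed_actions: list[str],
                 approved_actions: set[str],
                 alert_threshold: float = 0.0) -> dict:
    """Flag actions the agent took that its governance policy never approved."""
    counts = Counter(observed_actions)
    total = sum(counts.values()) or 1  # avoid division by zero on empty logs
    unapproved = {a: n for a, n in counts.items() if a not in approved_actions}
    rate = sum(unapproved.values()) / total
    return {
        "unapproved_actions": unapproved,  # what drifted, and how often
        "unapproved_rate": rate,
        "alert": rate > alert_threshold,   # default: any drift raises an alert
    }
```

Every unapproved action surfaced this way forces a decision: update the policy because the behavior is legitimate, or tighten the agent because it is not.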
Frameworks That Give You a Head Start
Three frameworks provide concrete, actionable guidance for agentic AI governance right now.
NIST AI Agent Standards Initiative (announced February 2026): NIST is building standards across three pillars: industry-led agent standards, open-source protocol development, and research in AI agent security and identity. Its Request for Information on AI Agent Security closed March 9, 2026, with an AI Agent Identity and Authorization concept paper following on April 2. If your organization is not tracking NIST’s output here, you are already behind.
OWASP Top 10 for Agentic Applications: Developed by over 100 security researchers, this framework covers agent behavior hijacking, prompt injection, tool misuse, and seven other critical risks. Use it as a checklist for every agent deployment. If you covered LLM risks but not agentic risks, you have addressed content generation but not autonomous action.
ISO 42001: The world’s first AI management system standard, published in 2023, provides a certifiable framework for AI governance. Colorado’s AI Act already recognizes adherence as a potential safe harbor. For European organizations, ISO 42001 alignment maps well to the EU AI Act’s requirements for risk management systems, human oversight, and technical documentation.
The August 2026 Deadline That Changes Everything
The EU AI Act’s high-risk obligations take full effect in August 2026. For organizations deploying AI agents in employment, credit scoring, essential services, or any other high-risk category, the clock is already running.
Only 11% of European respondents in ISACA’s survey feel fully ready for the EU AI Act. Readiness for NIS2 and DORA is marginally better at 18%, which is still dismal. Each EU member state must establish at least one AI regulatory sandbox by August 2, 2026, but organizations cannot wait for sandboxes to figure out their governance posture.
The practical implication: if you are deploying agentic AI in Europe and you have not started a formal governance program, you have roughly five months to build one from scratch. For organizations already managing AI risks, the gap between “somewhat prepared” and “very prepared” is the difference between a structured governance stack and a collection of ad-hoc policies that will not survive an audit.
Gartner’s prediction that over 40% of agentic AI projects will be canceled by end of 2027 due to costs, unclear value, and inadequate risk controls should sharpen the urgency. The projects that survive will be the ones with governance built into the architecture, not bolted on after the first incident.
Frequently Asked Questions
What did the ISACA 2026 survey find about AI cyber threats?
ISACA surveyed nearly 3,000 global digital trust professionals and found that 59% expect AI-driven cyber threats to dominate 2026. However, only 13% say their organizations are very prepared for AI-related risks, creating a 46-point readiness gap.
How is agentic AI security different from LLM security?
LLM security focuses on content generation risks such as hallucinations and data leakage. Agentic AI security addresses the far greater risks from autonomous action, including agent behavior hijacking, tool misuse, and multi-agent cascading failures where agents act within their permissions but cause unintended harm.
What frameworks exist for governing agentic AI?
Three key frameworks are available: NIST’s AI Agent Standards Initiative (covering security, identity, and interoperability), the OWASP Top 10 for Agentic Applications (covering agent behavior hijacking, prompt injection, and tool misuse), and ISO 42001 (the first certifiable AI management system standard). The EU AI Act’s high-risk obligations also take effect August 2026.
What are the biggest AI-driven cyber threats in 2026?
According to ISACA’s survey, 63% of professionals cite AI-driven social engineering as the top threat, followed by ransomware at 54%. Deepfakes, agent supply chain attacks, and autonomous exploitation chains are also emerging as major concerns, with Gartner predicting 25% of enterprise breaches will trace to AI agent abuse by 2028.
When does the EU AI Act affect agentic AI deployments?
The EU AI Act’s high-risk obligations take full effect in August 2026. Organizations deploying AI agents in employment, credit scoring, essential services, or other high-risk categories must comply with requirements for risk management systems, human oversight, technical documentation, automatic logging, and cybersecurity measures. Only 11% of European IT professionals feel fully ready.
