
87% of cybersecurity leaders say AI-related vulnerabilities are the fastest-growing risk they face. That finding comes from the World Economic Forum’s Global Cybersecurity Outlook 2026, a survey of 804 leaders across industries, governments, and academia released in January 2026. The number itself is striking, but the more useful data point is the disconnect buried deeper in the report: CEOs rank AI vulnerabilities as their second-highest concern. CISOs do not list it in their top three at all. That gap between boardroom anxiety and security-team priorities is where organizations get blindsided.

The report landed at a specific inflection point. AI adoption has moved past the pilot stage into production deployments that touch core business processes. The security implications are no longer theoretical. They are measurable, and the WEF data provides the most comprehensive global benchmark so far.

Related: Gartner's Top Cybersecurity Trends 2026: Why Agentic AI Oversight Is Trend Number One

AI: The Top Driver of Change and the Top Source of Risk

The WEF report frames AI as a paradox: the single most important cybersecurity enabler and the single most important cybersecurity threat, simultaneously. 94% of respondents identify AI as the most significant driver of change in cybersecurity for 2026. That same technology is the source of the 87% vulnerability concern.

The specific risks have shifted since the 2025 edition. Last year, fears centered on adversarial AI: attackers using LLMs to craft more convincing phishing campaigns, generate malware, or automate reconnaissance. In 2026, the concern has pivoted. Data leaks linked to generative AI (34%) now outweigh fears about adversarial AI capabilities (29%). Organizations are more worried about their own AI tools leaking sensitive data than about attackers wielding AI against them.

That pivot reflects what actually happened during 2025. The headline AI-powered cyberattacks that analysts predicted did materialize, but the more common damage came from inside: employees pasting proprietary code into ChatGPT, agentic AI systems accessing databases without proper permission boundaries, and GenAI tools retaining conversation data that included customer PII. The threat model shifted from “AI weapons in enemy hands” to “AI tools in our own hands doing things we did not authorize.”

The Security Assessment Gap Is Closing (Slowly)

One genuinely positive signal: the share of organizations assessing the security of their AI tools nearly doubled, from 37% in 2025 to 64% in 2026. That 64% splits into periodic reviews (40% of all organizations) and one-time assessments (24%). That is progress. But 36% of organizations still deploy AI tools with no security evaluation at all. For context, that means more than one in three enterprises is running AI in production without ever asking whether it is secure.

Related: AI Agent Security: The Governance Gap That 88% of Organizations Already Feel

The CEO-CISO Blind Spot

The most actionable finding in the report is not a percentage. It is a misalignment. CEOs and CISOs are reading the same threat landscape and reaching different conclusions about what matters.

CEOs rank cyber-enabled fraud as their top concern and AI vulnerabilities second. They are responding to what they see in the news and what their boards ask about. CISOs, by contrast, prioritize operational risks: supply chain compromise, ransomware, and workforce shortages. They are responding to what they deal with daily.

Neither perspective is wrong. Both are incomplete. When the CEO is worried about AI risk and the CISO is not prioritizing it, budget allocation decisions become misaligned. The CEO funds an “AI governance initiative” that has no teeth because the security team did not design it. The CISO keeps investing in perimeter defense while GenAI tools create new internal attack surfaces that nobody monitors. The result is governance theater: visible spending that does not address the actual risk.

Organizations that closed this gap in the WEF dataset share one pattern: they brought CISOs into AI procurement decisions before deployment, not after incidents. Security evaluation became a gate in the AI adoption pipeline rather than a remediation exercise.

Cyber-Enabled Fraud Overtakes Ransomware

The report’s second major finding will surprise anyone who has spent the last five years treating ransomware as the defining cyber threat. CEOs now rank cyber-enabled fraud as their number one concern, pushing ransomware into a secondary position.

73% of Global Cybersecurity Outlook survey respondents reported that they or someone in their network had been personally affected by cyber-enabled fraud during 2025. That number is not about enterprise attacks. It is about the personal experience of executives, which shapes their risk perception more than any report.

The fraud landscape has changed structurally. Deepfake-enabled impersonation, synthetic voice cloning, and AI-generated business email compromise campaigns have made fraud attacks harder to detect and cheaper to execute. An incident in Hong Kong, reported in early 2024, in which a finance worker transferred $25 million after a deepfake video call with what appeared to be the CFO, became a reference case. The technology to execute that attack cost under $10,000.

For security teams, this means fraud prevention is no longer exclusively a financial controls problem. It requires security architecture: identity verification systems that can detect synthetic media, transaction approval workflows that do not rely solely on visual or voice confirmation, and monitoring for AI-generated content in business communications.
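One way to operationalize "do not rely solely on visual or voice confirmation" is to make the approval logic itself refuse any high-value transfer whose only verification came over a spoofable channel. A minimal sketch in Python; the channel names and the threshold are illustrative assumptions, not from the report:

```python
from dataclasses import dataclass, field

# Channels that synthetic media can now convincingly fake (assumption for this sketch).
SPOOFABLE = {"video_call", "voice_call", "email"}
# Channels assumed harder to fake in real time.
OUT_OF_BAND = {"hardware_token", "callback_to_registered_number", "in_person"}

@dataclass
class TransferRequest:
    amount_usd: float
    verifications: set = field(default_factory=set)

def approve(req: TransferRequest, threshold_usd: float = 100_000) -> bool:
    """Reject high-value transfers verified only via spoofable channels."""
    if req.amount_usd < threshold_usd:
        return bool(req.verifications)
    # Above the threshold, at least one out-of-band verification is mandatory.
    return bool(req.verifications & OUT_OF_BAND)

# A deepfake video call alone should not clear a $25M transfer.
assert not approve(TransferRequest(25_000_000, {"video_call"}))
assert approve(TransferRequest(25_000_000, {"video_call", "hardware_token"}))
```

The design point is that the control lives in the workflow, not in human judgment about whether a face or voice looks real.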

Geopolitical Fragmentation and the Confidence Gap

64% of organizations now account for geopolitically motivated cyberattacks in their risk strategies. Among the largest enterprises, that number hits 91%. Among SMBs, it drops to 59%.

The more concerning number: 31% of survey respondents report low confidence in their nation’s ability to respond to a major cyber incident, up from 26% the previous year. Confidence in national cyber resilience is declining at the exact moment when state-sponsored threats are increasing.

The WEF attributes this partly to regulatory fragmentation. 74% of respondents view cybersecurity regulation as generally effective, but cross-border compliance remains a resource drain. Organizations operating across the EU, US, and Asia-Pacific face overlapping and sometimes contradictory requirements. The EU AI Act adds another compliance layer specifically for AI systems, and its interaction with existing cybersecurity frameworks (NIS2, DORA) is still being interpreted by regulators.

For DACH-region companies specifically, the combination of the EU AI Act, NIS2, and the DSGVO (Germany's implementation of the GDPR) creates a compliance surface that requires coordinated security and privacy governance. The WEF data suggests most organizations are handling these requirements in silos rather than as an integrated framework.

Related: ISACA Poll: 59% of IT Pros Expect AI-Driven Cyber Threats, but Only 13% Are Ready

Supply Chain Complexity: The Risk Nobody Owns

54% of large organizations identify supply chain complexity as the single biggest barrier to cyber resilience. That number has been climbing steadily since 2022, and AI is accelerating it.

The mechanism is straightforward: every AI tool, model API, and agent framework in an organization’s stack adds dependencies that the security team may not even know about. When a developer integrates an LLM via API, they create a data pipeline that sends potentially sensitive information to a third party. When an AI agent accesses internal tools, it inherits the permissions of whatever service account was configured, often with broader access than any human user would receive.
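The permission-inheritance problem described above can be countered by giving each agent an explicit, deny-by-default allowlist instead of the service account's full scope. A hedged sketch, with all tool names hypothetical:

```python
class ScopedToolProxy:
    """Deny-by-default wrapper: an agent may call only tools on its allowlist,
    regardless of what the underlying service account can reach."""

    def __init__(self, tools: dict, allowlist: set):
        self._tools = tools
        self._allowlist = allowlist

    def call(self, tool_name: str, *args, **kwargs):
        if tool_name not in self._allowlist:
            raise PermissionError(f"agent not authorized for tool: {tool_name}")
        return self._tools[tool_name](*args, **kwargs)

# The service account behind these stubs can do both; this agent cannot.
tools = {
    "read_customer_record": lambda cid: {"id": cid},
    "delete_customer_record": lambda cid: True,
}
agent_tools = ScopedToolProxy(tools, allowlist={"read_customer_record"})

agent_tools.call("read_customer_record", 42)        # allowed
# agent_tools.call("delete_customer_record", 42)    # raises PermissionError
```

The same pattern applies to data flows: a proxy in front of the LLM API is also the natural place to log or redact what leaves the organization.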

The WEF report found that 48% of organizations do not have adequate visibility into the security posture of their AI vendors. That tracks with what ISACA found separately: 59% of IT professionals expect AI-driven cyber threats, but only 13% feel their organization is prepared to handle them.

Supply chain risk in the AI era is not about software components in a dependency tree. It is about data flows, model access, and permission inheritance across systems that were not designed to work together.

What the Data Means for Security Strategy

The WEF report is a survey, not a prescription. But the data points toward three strategic shifts that organizations are making in response.

Integrate AI risk into existing cybersecurity frameworks rather than creating parallel governance. The organizations in the WEF dataset with the strongest resilience scores did not build separate AI security programs. They extended their existing risk management, incident response, and vendor assessment processes to cover AI-specific threats. Separate AI governance creates gaps. Integrated governance closes them.

Close the CEO-CISO alignment gap on AI risk. This means regular briefings that translate AI vulnerability data into business impact language for the board, and translating board-level AI concerns into specific security controls for the CISO. Darktrace’s 2026 survey found a similar disconnect: 74% of security leaders see AI-powered threats as significant, but their organizations’ defensive capabilities lag behind.

Treat AI vendor security as a supply chain problem, not a procurement checkbox. Periodic security reviews of AI tools (the 40% who do this are ahead) should include data flow mapping, permission auditing, and incident response testing for AI-specific scenarios like model poisoning, prompt injection, or unauthorized data exposure.
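Those review dimensions can be encoded as a deployment gate so that an AI tool cannot ship while checks remain open. A sketch, with the check names invented for illustration:

```python
REQUIRED_CHECKS = [
    "data_flow_mapped",         # where does prompt/response data travel?
    "permissions_audited",      # least-privilege access for the tool's service account?
    "prompt_injection_tested",  # incident-response drill for AI-specific abuse
]

def deployment_gate(review: dict) -> list:
    """Return the checks still blocking deployment (empty list = go)."""
    return [c for c in REQUIRED_CHECKS if not review.get(c, False)]

blockers = deployment_gate({"data_flow_mapped": True, "permissions_audited": True})
assert blockers == ["prompt_injection_tested"]
```

Making the gate code rather than a checkbox is what turns a periodic review into something the 36% deploying with no evaluation cannot quietly skip.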

The 87% headline number from the WEF report will get cited in boardroom presentations for the rest of 2026. The more useful number is 36%: the share of organizations still deploying AI with zero security evaluation. That is the gap between knowing AI is a risk and doing something about it.

Frequently Asked Questions

What did the WEF Global Cybersecurity Outlook 2026 find about AI risks?

The WEF surveyed 804 leaders and found that 87% identify AI-related vulnerabilities as the fastest-growing cyber risk. 94% said AI is the most significant driver of cybersecurity change in 2026. The report also found that data leaks from GenAI (34%) now concern organizations more than adversarial AI capabilities (29%).

Why do CEOs and CISOs disagree on AI cybersecurity risk?

CEOs rank AI vulnerabilities as their second-highest cyber risk, while CISOs do not list it in their top three concerns. CEOs respond to board-level discussions and headline risks, while CISOs focus on operational threats like supply chain compromise and ransomware. This misalignment leads to budget and governance decisions that fail to address AI risks effectively.

How many organizations assess the security of their AI tools?

According to the WEF report, 64% of organizations now assess AI tool security, nearly double the 37% in 2025. However, 36% still deploy AI with no security evaluation. Those assessments split into periodic reviews (40% of all organizations) and one-time assessments (24%).

What is cyber-enabled fraud and why does it concern CEOs more than ransomware?

Cyber-enabled fraud includes deepfake impersonation, synthetic voice cloning, and AI-generated business email compromise. 73% of WEF respondents reported being personally affected by cyber-enabled fraud in 2025. CEOs now rank it above ransomware as their top concern because AI has made fraud attacks cheaper to execute and harder to detect.

How does the WEF cybersecurity outlook affect EU and DACH-region companies?

DACH-region companies face a unique compliance convergence: the EU AI Act, NIS2, DSGVO, and DORA all impose overlapping requirements for AI security and data protection. The WEF report found 74% view regulation as effective but cross-border compliance drains resources. Most organizations handle these in silos rather than as an integrated framework.