Only 11% of enterprises run agentic AI in production. That is the headline number from Deloitte’s 17th annual Tech Trends report, published in January 2026. Another 14% are deployment-ready. The remaining 68% are stuck in exploration (30%) or piloting (38%). Deloitte CTO Bill Briggs puts it bluntly: “If you just take your existing workflow and try to apply advanced AI to it, you’re going to weaponize inefficiency.”
This is not the same report as Deloitte’s State of AI survey, which tracked 3,235 leaders on overall AI adoption. Tech Trends focuses specifically on strategy: what separates the 11% who ship agents from the 89% who demo them.
The Adoption Funnel Everyone Needs to See
Deloitte’s adoption funnel for agentic AI is the most useful framework in the report because it shows exactly where enterprises get stuck. Here is how 2,000+ organizations distribute across the pipeline:
| Stage | Share | What It Means |
|---|---|---|
| Exploring | 30% | Evaluating vendor options, running internal assessments |
| Piloting | 38% | Running proof-of-concept with limited scope |
| Deployment-ready | 14% | Infrastructure and governance in place, scaling planned |
| In production | 11% | Agents operating in live business processes |
The bottleneck is not between exploring and piloting. Most organizations get a pilot running without much trouble. The bottleneck is between piloting and deployment-ready: that 38% to 14% drop represents a 63% attrition rate. This is where the strategy gap hits hardest.
The Strategy Deficit
The funnel reflects a deeper problem. 42% of organizations are still developing their agentic strategy roadmap, and another 35% have no formal strategy at all. That is 77% of enterprises pursuing agentic AI without a finalized plan for how to do it.
Compare that with the investment numbers. 75% of companies plan to invest in agentic AI by end of 2026. Token costs have dropped 280-fold over two years, making pilots cheap to spin up. But cheap pilots without strategy produce what Briggs calls weaponized inefficiency: faster execution of broken processes.
The companies stuck in the 38% pilot zone share a pattern. They chose their use case based on what was easy to demo, not what was valuable to automate. They optimized for speed to pilot rather than clarity on what production readiness actually requires.
Five Strategic Questions Before Your First Agent Ships
The most actionable part of the Tech Trends report is a set of five questions Deloitte recommends every enterprise answer before scaling agentic AI. These are not abstract strategic exercises. They are the questions that separate pilot-zone companies from the 11% in production.
1. Which agents to deploy and what functions they perform?
This sounds obvious, but most pilot programs skip it. HPE built an agent called “Alfred” for performance reviews. Alfred is not a single monolithic agent. It uses four specialized sub-agents, each analyzing SQL data across different performance dimensions, then combines their outputs into a structured report. The key decision was not “should we use AI for HR?” It was “which four specific sub-tasks benefit from agent architecture, and where do humans still make the final call?”
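The sub-agent pattern described above can be sketched in a few lines. This is a hypothetical illustration of the fan-out-and-combine structure, not HPE's actual implementation; the dimension names, scoring, and orchestration are all assumptions.

```python
# Toy sketch of a multi-sub-agent report: several narrow "agents" each
# score one dimension, and an orchestrator combines their outputs.
# Dimensions and scoring are illustrative placeholders.

def make_sub_agent(dimension):
    """Return a toy sub-agent that scores one dimension of a record."""
    def agent(record):
        # A real sub-agent would query SQL data and call a model here.
        return {"dimension": dimension, "score": record.get(dimension, 0)}
    return agent

SUB_AGENTS = [make_sub_agent(d) for d in
              ("delivery", "collaboration", "quality", "growth")]

def review_report(record):
    """Fan out to each sub-agent, then combine results. A human
    reviewer still makes the final call on the combined report."""
    findings = [agent(record) for agent in SUB_AGENTS]
    avg = sum(f["score"] for f in findings) / len(findings)
    return {"findings": findings, "overall": round(avg, 2),
            "requires_human_signoff": True}

report = review_report({"delivery": 4, "collaboration": 5,
                        "quality": 3, "growth": 4})
print(report["overall"])                # 4.0
print(report["requires_human_signoff"])  # True
```

The point of the structure is the last field: the agent architecture decides *which* sub-tasks are automated, while the final judgment stays explicitly flagged for a human.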
2. What are cost profiles versus human employees?
Dell Technologies ran 12 proofs of concept targeting composite processes and required material ROI sign-off before any of them moved to production. That sounds like bureaucracy, but it is the opposite: it forces teams to quantify agent value before scaling costs hit. Gartner projects that 40% of agentic AI projects will be cancelled by 2027 due to unclear ROI. Dell’s approach avoids that.
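The kind of material ROI check such a gate might require is simple arithmetic, but writing it down forces the quantification. A back-of-envelope sketch, with entirely made-up figures (these are not Dell's numbers):

```python
# Illustrative ROI gate for a pilot: does shifting part of a process
# to an agent produce net savings worth scaling? All figures are
# placeholders for illustration.

def annual_roi(human_cost, agent_cost, tasks_automated_pct):
    """Return net annual savings and ROI ratio for automating a
    share of a process currently done by humans."""
    savings = human_cost * tasks_automated_pct
    net = savings - agent_cost
    return net, net / agent_cost

net, ratio = annual_roi(human_cost=400_000,   # fully loaded team cost
                        agent_cost=90_000,    # tokens + infra + oversight
                        tasks_automated_pct=0.25)
print(net)              # 10000.0
print(round(ratio, 2))  # 0.11
```

A gate like this fails marginal pilots on paper, before scaling costs make the failure expensive.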
3. What process automation levels and efficiency targets?
Toyota replaced 50 to 100 mainframe screens with agentic tools for supply chain visibility. The target was not “automate everything.” It was “give operators real-time issue resolution instead of making them navigate a hundred terminal windows.” Specific, measurable, tied to an operational pain point.
4. What is the optimal human-digital workforce mix over four years?
This is where most strategies fail. Moderna created a combined Chief People and Digital Technology Officer role to integrate workforce planning with AI deployment planning. The role exists because optimizing for human headcount and agent capacity separately produces contradictory decisions.
5. Will agents assume entire operational domains beyond five years?
This is the uncomfortable question nobody wants to answer on the record. But Deloitte’s data shows that companies that answer it, even tentatively, make better short-term architectural decisions. If an agent will own the entire accounts payable process in five years, the integration architecture you build today should account for that trajectory rather than treating agents as point solutions.
Governance as Enabler, Not Checkpoint
The conventional take on AI governance is that it slows you down. Deloitte’s data suggests the opposite: the 11% in production treat governance as the thing that enables speed. Without clear decision boundaries, agents cannot operate autonomously. Without audit trails, you cannot trust agent outputs enough to remove human reviewers. Governance is not the brake. It is the road.
The 21% Problem
Here is the disconnect: 73% of companies plan agentic AI deployment within two years, but only 21% have a mature governance model. Singapore addressed this gap in January 2026 by releasing the world’s first governance framework specifically for AI agents, covering four dimensions: risk bounding, human accountability, technical controls, and end-user responsibility.
The California Management Review published a complementary framework in March 2026, proposing a four-layer Agentic Operating Model: cognitive (deploy specialized models, not general-purpose ones), coordination (shift from hub-and-spoke to decentralized collaboration), control (dynamic guardrails with confidence thresholds and escalation triggers), and governance (clear ownership, documented decision boundaries, full traceability).
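The control layer's "dynamic guardrails with confidence thresholds and escalation triggers" can be sketched as a routing function. This is a minimal illustration under assumed thresholds and action categories, not the framework's prescribed implementation:

```python
# Minimal sketch of a control-layer guardrail: low-confidence or
# high-stakes decisions escalate to a human; the rest run autonomously.
# The threshold and the high-stakes set are illustrative assumptions.

ESCALATION_THRESHOLD = 0.85   # below this confidence, a human decides
HIGH_STAKES = {"payment", "contract", "customer_refund"}

def route_decision(action, confidence):
    """Return 'auto' when the agent may act alone, else 'escalate'."""
    if action in HIGH_STAKES or confidence < ESCALATION_THRESHOLD:
        return "escalate"
    return "auto"

print(route_decision("status_update", 0.93))  # auto
print(route_decision("payment", 0.99))        # escalate (always high-stakes)
print(route_decision("status_update", 0.60))  # escalate (low confidence)
```

Note that the two escalation paths are independent: high-stakes actions escalate regardless of how confident the agent is.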
Three Failure Patterns to Watch
The California Management Review framework identifies three governance anti-patterns that explain most production failures:
The Unbounded Agent: broad system access without clear decision boundaries. The agent can do too much, so when it makes a bad decision, the blast radius is enormous. Mapfre, the Spanish insurer, avoids this by mandating human oversight for all sensitive customer communications, even when the agent handles routine claims administration autonomously.
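The antidote to the unbounded agent is an explicit, enumerable boundary checked before every action. A sketch of one such check, with illustrative roles and actions (not Mapfre's actual policy):

```python
# Sketch of a bounded-agent authorization check: each role gets an
# explicit allowlist, and sensitive actions always route to a human.
# Role names and action sets are illustrative assumptions.

ALLOWED = {
    "claims_agent": {"read_claim", "update_status", "request_documents"},
}
SENSITIVE = {"send_customer_message", "approve_payout"}

def authorize(role, action):
    """Deny anything outside the role's boundary; sensitive actions
    always require human review, even for otherwise-trusted roles."""
    if action in SENSITIVE:
        return "human_review"
    if action in ALLOWED.get(role, set()):
        return "allow"
    return "deny"

print(authorize("claims_agent", "update_status"))          # allow
print(authorize("claims_agent", "send_customer_message"))  # human_review
print(authorize("claims_agent", "delete_policy"))          # deny
```

The default is deny: an action the policy has never heard of is blocked, which is what keeps the blast radius small when the agent does something unexpected.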
The Invisible Swarm: multiple agents collaborating without clear ownership or accountability. J.P. Morgan and Goldman Sachs solve this for trading with multi-agent consensus mechanisms where multiple agents must approve high-risk capital commitments before execution. No single agent acts alone on consequential decisions.
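A consensus gate of the kind described can be sketched as a quorum check over independent reviewer agents. This is a toy illustration of the mechanism, not either bank's system:

```python
# Toy multi-agent consensus gate: a high-risk commitment executes only
# when a quorum of independent reviewer agents approves. Agent names,
# votes, and the quorum size are illustrative.

def consensus_gate(votes, quorum=2):
    """Return True only when at least `quorum` agents approve."""
    approvals = sum(1 for v in votes.values() if v == "approve")
    return approvals >= quorum

votes = {"risk_agent": "approve", "pricing_agent": "approve",
         "compliance_agent": "reject"}
print(consensus_gate(votes))            # True  (2 of 3 approve)
print(consensus_gate(votes, quorum=3))  # False (not unanimous)
```

The quorum parameter is the accountability dial: raising it toward unanimity trades throughput for safety on the most consequential decisions.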
The Compliant Failure: governance focused on pre-deployment checklists rather than real-time monitoring. The EU AI Act’s enforcement of high-risk obligations starts August 2, 2026, and it was not designed with agentic AI in mind. Agents evolve in production, but most compliance frameworks assume static systems. Companies preparing for enforcement are building continuous monitoring rather than relying on one-time certifications.
The Infrastructure Layer Nobody Budgets For
Deloitte recommends a set of infrastructure controls that most pilot budgets never include. These are not optional for production:
Digital identity systems with cryptographic transaction receipts. Every agent action needs to be attributable to a specific agent instance. Without this, audit trails are meaningless and incident investigation becomes guesswork.
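A minimal sketch of what a cryptographic transaction receipt could look like, using an HMAC over the action payload with a per-agent key. Key management is simplified to a constant here; a real deployment would issue keys from a KMS or identity provider:

```python
# Sketch of a signed action receipt: each agent action is signed with
# a per-agent key so it can later be attributed to a specific agent
# instance and checked for tampering. Key handling is simplified.
import hashlib
import hmac
import json

AGENT_KEY = b"per-agent-secret-from-a-kms"  # placeholder, not a real KMS

def sign_action(agent_id, action):
    payload = json.dumps({"agent": agent_id, "action": action},
                         sort_keys=True).encode()
    sig = hmac.new(AGENT_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def verify_receipt(receipt):
    expected = hmac.new(AGENT_KEY, receipt["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

r = sign_action("invoice-agent-07", "approve_invoice:INV-1042")
print(verify_receipt(r))  # True
```

With receipts like this, "which agent instance did this?" becomes a verification step rather than a forensic guess.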
Immutable action logs. Not traditional application logs. Agent-specific logs that capture the full reasoning chain: what the agent observed, what it decided, what it did, and what happened as a result. These are critical for both debugging and regulatory compliance.
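One way to make such a log tamper-evident is to hash-chain the entries, each capturing the observe/decide/act/result sequence. An in-memory sketch of the idea (real storage would be append-only and external to the agent):

```python
# Sketch of a hash-chained agent action log: each entry records the
# full observe/decide/act/result chain and links to the previous
# entry's hash, so later tampering breaks verification.
import hashlib
import json

log = []

def append_entry(observed, decided, did, result):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"observed": observed, "decided": decided,
             "did": did, "result": result, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify_chain():
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != recomputed:
            return False
        prev = e["hash"]
    return True

append_entry("invoice INV-1042 received", "matches PO, auto-approve",
             "approved invoice", "payment scheduled")
append_entry("invoice INV-1043 received", "no PO match",
             "escalated to human", "pending review")
print(verify_chain())  # True
```

Editing any earlier entry changes its hash, which no longer matches the `prev` pointer of the entry after it, so the whole chain fails verification.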
Zero-trust architecture with ephemeral authentication. Agents should not hold persistent credentials. Each action gets its own short-lived authentication token. If an agent is compromised, the blast radius is limited to whatever that token authorizes.
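The ephemeral-credential idea can be sketched with per-action tokens that expire in seconds and are scoped to a single operation. This toy version keeps issued tokens in a dict; a real system would use signed tokens (e.g. JWTs) from an identity provider:

```python
# Sketch of ephemeral, scoped authentication: each agent action gets a
# short-lived token valid for exactly one scope, limiting the blast
# radius of a compromised agent. TTL and storage are illustrative.
import secrets
import time

TOKEN_TTL_SECONDS = 30
_issued = {}  # token -> (scope, expiry); stands in for a token service

def issue_token(agent_id, scope):
    token = secrets.token_urlsafe(16)
    _issued[token] = (scope, time.monotonic() + TOKEN_TTL_SECONDS)
    return token

def validate(token, scope):
    entry = _issued.get(token)
    if entry is None:
        return False
    granted_scope, expiry = entry
    return granted_scope == scope and time.monotonic() < expiry

t = issue_token("invoice-agent-07", "approve_invoice")
print(validate(t, "approve_invoice"))  # True
print(validate(t, "delete_records"))   # False: token not scoped for this
```

Because the token names its scope, a stolen credential authorizes one action for thirty seconds rather than everything forever.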
FinOps frameworks for token-based cost monitoring. Token costs dropped 280-fold, but volume scales with agent autonomy. A production agent making thousands of decisions per day can quietly accumulate costs that dwarf the pilot budget. Without real-time cost monitoring, teams only discover the problem when the invoice arrives.
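The volume problem is easy to demonstrate with a toy cost monitor. Prices and budgets below are illustrative assumptions, not real rates:

```python
# Sketch of real-time token cost monitoring: accumulate per-agent
# spend as calls happen and flag budget overruns immediately, rather
# than discovering them on the invoice. Figures are illustrative.

PRICE_PER_1K_TOKENS = 0.002   # assumed blended rate, not a real price
DAILY_BUDGET = 50.00          # dollars per agent per day

spend = {}

def record_usage(agent_id, tokens):
    """Add one call's token usage; return True if the agent is now
    over its daily budget."""
    cost = tokens / 1000 * PRICE_PER_1K_TOKENS
    spend[agent_id] = spend.get(agent_id, 0.0) + cost
    return spend[agent_id] > DAILY_BUDGET

over = False
for _ in range(10_000):                      # a busy production day
    over = record_usage("triage-agent", 3_000)
print(round(spend["triage-agent"], 2))  # 60.0
print(over)  # True: 10k small calls quietly blew past the $50 budget
```

Each individual call costs fractions of a cent; the overrun comes entirely from autonomy-driven volume, which is exactly why the monitoring has to be continuous.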
Three interoperability protocols are emerging to connect these systems: Anthropic’s Model Context Protocol (MCP), Google’s Agent-to-Agent Protocol (A2A), and the open-source Agent Communication Protocol (ACP). The convergence of these standards will determine how agents from different vendors interact in production.
What the 11% Actually Do Differently
The pattern across HPE, Toyota, Dell, and Mapfre is consistent. They do not start with “how do we add AI agents?” They start with “which specific process is broken, and does agent architecture solve it better than alternatives?”
They budget for production infrastructure from day one, not after the pilot impresses the board. They build governance alongside the agent, not after deployment. They define decision boundaries before the agent makes its first autonomous decision.
Gartner predicts 40% of enterprise applications will feature task-specific AI agents by end of 2026, up from less than 5% in 2025. The agentic AI market is projected to hit $10.86 billion this year and $93 billion by 2032. The money is flowing. The question is whether it flows into production systems or into more pilots that never ship.
Deloitte’s answer is unambiguous: strategy first, then pilots. Not the other way around.
Frequently Asked Questions
What does Deloitte’s Tech Trends 2026 say about agentic AI adoption?
Deloitte’s Tech Trends 2026 report maps an enterprise adoption funnel: 30% of organizations are exploring agentic AI, 38% are piloting, 14% are deployment-ready, and only 11% have agents running in production. The report identifies strategy gaps as the primary reason for pilot-to-production attrition.
Why do most agentic AI pilots fail to reach production?
According to Deloitte, 77% of enterprises pursuing agentic AI lack a finalized strategy. The bottleneck is between piloting and deployment-ready stages, where a 63% attrition rate occurs. Key blockers include unclear ROI quantification, missing governance frameworks, and organizations automating broken processes instead of redesigning them.
What are Deloitte’s five strategic questions for enterprise agentic AI?
Deloitte recommends enterprises answer five questions before scaling: (1) Which agents to deploy and what functions they perform, (2) What are cost profiles versus human employees, (3) What process automation levels and efficiency targets, (4) What is the optimal human-digital workforce mix over four years, and (5) Will agents assume entire operational domains beyond five years.
How does governance enable faster agentic AI deployment?
Deloitte’s data shows that the 11% of companies with agents in production treat governance as an enabler rather than a checkpoint. Clear decision boundaries let agents operate autonomously. Audit trails enable removing human reviewers from routine decisions. Without governance, teams cannot trust agents enough to give them real autonomy, which keeps projects in permanent pilot mode.
What infrastructure does enterprise agentic AI require beyond the pilot stage?
Production agentic AI requires digital identity systems with cryptographic receipts, immutable action logs capturing full reasoning chains, zero-trust architecture with ephemeral authentication tokens, and FinOps frameworks for real-time token cost monitoring. Three interoperability protocols are emerging: Anthropic’s MCP, Google’s A2A, and the open-source ACP.
