Only 15% of AI decision-makers report that their AI investments produced an EBITDA lift in the past 12 months. That number comes from Forrester’s 2026 Predictions report, and it is the single most important data point in the enterprise AI conversation right now. Not the billion-dollar funding rounds. Not the agent framework launches. The fact that 85 out of 100 companies spending real money on AI cannot tie that spend to a single line on their income statement.
Forrester goes further: enterprises will defer 25% of their planned AI spend to 2027 as CFOs take over budget approval from CTOs. The hype period is officially over. What follows is the reckoning.
Every Major Analyst Agrees: Most AI Projects Fail
This is not one pessimistic report. Four independent research organizations, using different methodologies and samples, converge on the same conclusion.
Forrester (October 2025): Fewer than one-third of decision-makers can tie AI value to their organization’s financial growth. The gap between vendor promises and delivered value is widening, forcing a market correction. As Forrester Chief Research Officer Sharyn Leaver stated: “In 2026, the AI hype period ends as the pressure to deliver real, measurable results intensifies.”
Gartner (June 2025): Over 40% of agentic AI projects will be canceled by end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. This specifically targets the autonomous agent projects that vendors have been selling hardest.
RAND Corporation (2024): More than 80% of AI projects fail, at roughly twice the rate of non-AI IT projects. The causes are structural, not technical: poor data quality, misaligned incentives, and scope creep.
MIT (August 2025): 95% of generative AI pilots at companies are failing to deliver their intended value. The MIT researchers found the biggest ROI in back-office automation, not in the customer-facing chatbots that get all the press.
These numbers do not contradict each other. They measure slightly different things (all AI, agentic AI, GenAI pilots) but point in the same direction: the base rate for enterprise AI success is somewhere between 5% and 20%, depending on how you define success.
Why CFOs Are Taking Over AI Budgets
The shift Forrester describes is not about losing faith in AI as a technology. It is about accountability. For two years, CTO offices approved AI spending based on potential. In 2026, CFOs are demanding proof.
The Measurement Gap
Here is the core problem: most companies cannot measure what their AI projects actually produce. Fewer than one-third of AI decision-makers can connect AI value to financial growth, according to Forrester. That is not because AI produces no value. It is because organizations built AI projects without measurement frameworks.
A company deploys a customer service chatbot. Ticket volume drops 30%. But did revenue change? Did customer lifetime value shift? Did the support team that was “freed up” produce anything measurable? Without answers to those questions, the CFO sees a cost center with no clear return.
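The arithmetic behind that CFO skepticism is simple to sketch. The figures and the function below are hypothetical, purely to illustrate why ticket deflection alone does not prove a P&L impact:

```python
# Hypothetical illustration of the measurement gap: a deflection rate
# only becomes a financial result once it is netted against the cost
# of running the AI. All figures below are invented.

def chatbot_pnl_impact(
    tickets_before: int,     # monthly ticket volume pre-deployment
    deflection_rate: float,  # share of tickets the bot resolves
    cost_per_ticket: float,  # fully loaded human cost per ticket
    ai_monthly_cost: float,  # licensing + hosting + maintenance
) -> float:
    """Net monthly P&L impact of a support chatbot."""
    gross_savings = tickets_before * deflection_rate * cost_per_ticket
    return gross_savings - ai_monthly_cost

# A 30% deflection on 10,000 monthly tickets at $8 per ticket,
# against $20,000/month in AI costs:
impact = chatbot_pnl_impact(10_000, 0.30, 8.0, 20_000.0)
print(f"Net monthly impact: ${impact:,.0f}")  # prints $4,000
```

Even this toy version only holds if the "freed up" capacity is actually removed from cost or redeployed to measurable work; otherwise the gross savings never reach the income statement.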
Vendor Promises vs. Delivered Value
Forrester explicitly calls out a widening gap between what vendors promise and what enterprises experience. The analyst firm notes this is “forcing a market correction to align expectations with reality.” In practical terms, that means:
- Proof-of-concept budgets that produced impressive demos but no production deployments
- Licensing costs that looked manageable for a pilot but ballooned at scale
- Integration timelines that doubled or tripled because enterprise data was messier than expected
The 25% deferral is not companies giving up on AI. It is CFOs saying “show me the receipts before I sign the next check.” That distinction matters.
What the 15% Do Differently
The 15% of decision-makers who report real EBITDA lift share specific patterns that the 85% do not. Based on data from Forrester, BCG, and MIT research, these patterns are consistent enough to be prescriptive.
They Pick Boring Use Cases
MIT found the biggest GenAI ROI in back-office automation: eliminating business process outsourcing, cutting external agency costs, streamlining document processing. Not customer-facing chatbots. Not flashy demos. The companies seeing EBITDA lift automated invoice processing, contract review, internal knowledge retrieval, and compliance reporting.
Klarna’s widely cited $39 million in savings from its AI customer service agent is real, but it worked because customer service at Klarna’s scale is a massive, measurable cost center. The lesson is not “build a chatbot.” The lesson is “automate something expensive that you can measure.”
They Measure Before They Build
Companies with strong data integration achieve 10.3x ROI versus 3.7x for those with poor data connectivity. The successful 15% set up measurement frameworks before the first line of code. They define what “success” means in financial terms (cost reduction, revenue per customer, processing time), establish baselines, and track changes over time.
The 85% typically define success as “we deployed the model” or “accuracy improved on the test set.” Neither of those shows up on a P&L statement.
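A measurement framework does not need to be elaborate. A minimal sketch, with a hypothetical metric and invented numbers, is just a named financial baseline recorded before any AI work starts, plus a way to compare later readings against it:

```python
# Minimal sketch of a pre-build measurement framework: define the
# financial metric, record a baseline BEFORE building, then express
# post-deployment readings as lift against that baseline.
# The metric name and all figures are hypothetical.

from dataclasses import dataclass

@dataclass
class MetricBaseline:
    name: str        # e.g. "cost per invoice processed"
    baseline: float  # value measured before any AI work starts
    unit: str

    def lift(self, observed: float) -> float:
        """Fractional improvement vs. baseline. Positive means better,
        assuming lower values are better (cost, cycle time)."""
        return (self.baseline - observed) / self.baseline

invoice_cost = MetricBaseline("cost per invoice processed", 4.20, "USD")
print(f"{invoice_cost.lift(2.94):.0%} reduction")  # prints 30% reduction
```

The point is the sequencing: the baseline exists before the first line of model code, so any later number can be stated in the CFO's terms rather than the data science team's.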
They Use Vendors for Vertical, Not Horizontal
Specialized vendor-led AI projects succeed approximately 67% of the time, while internal builds succeed only 33%. That gap is enormous. The 15% are not building custom LLM pipelines from scratch. They are buying vertical AI solutions that solve specific, well-defined problems in their industry, then measuring the output against the cost.
This aligns with what LangChain’s State of Agent Engineering survey found: even among the 57% of teams with agents in production, quality remains the number one barrier at 32%. Building it yourself is hard. Building it yourself without dedicated AI engineering expertise is a recipe for the 80% failure bucket.
They Plan for 12-18 Months, Not 90 Days
Successful AI projects typically require 12 to 18 months to demonstrate measurable business value. Most organizations expect results within three to six months. That mismatch kills projects before they have a chance to work.
Forrester’s prediction of deferred spend partly reflects this reality: companies that started AI projects in 2025 expecting quick wins are now pushing timelines (and budgets) into 2027 because the work takes longer than the sales pitch suggested.
What 2026 Actually Looks Like
Forrester’s framing of 2026 as the year AI “moves from hype to hard hat work” is the right metaphor. The technology is not failing. The business model around deploying it is maturing. Here is what that means in practice:
CFO-gated spending. AI budgets will require the same financial scrutiny as any other capital expenditure. “Strategic AI investment” is no longer a blank check.
Consolidation of tools. Companies running 15 different AI experiments will cut to three or four that show measurable results. The agent sprawl problem Gartner warns about will trigger governance reviews.
Mandatory AI training. Forrester predicts 30% of large enterprises will mandate AI training to lift adoption and reduce risk. The bottleneck is shifting from technology to workforce readiness.
Back-office wins over front-office flash. The boring deployments (document processing, compliance automation, internal search) will get funded. The impressive-but-unmeasurable demos will not.
This is healthy. The 2024-2025 AI spending surge funded a lot of experimentation. The 2026-2027 correction funds what actually works.
Frequently Asked Questions
Why is Forrester predicting 25% of AI spend will be deferred to 2027?
Forrester found that fewer than one-third of AI decision-makers can tie AI value to financial growth, and only 15% report EBITDA improvement. CFOs are taking over budget approvals from CTOs and demanding measurable ROI before signing off on new AI investments, causing a quarter of planned spend to be pushed into 2027.
What percentage of enterprise AI projects fail?
Multiple research organizations converge on failure rates between 80% and 95%. RAND Corporation found over 80% of AI projects fail. MIT reports 95% of generative AI pilots fail to deliver intended value. Gartner predicts over 40% of agentic AI projects specifically will be canceled by end of 2027.
What do successful AI implementations have in common?
The 15% of companies seeing EBITDA lift share four patterns: they pick measurable back-office use cases rather than flashy demos, they set up measurement frameworks before building, they use specialized vertical AI vendors rather than building from scratch, and they plan for 12-to-18-month timelines rather than expecting results in 90 days.
Is the AI hype cycle ending in 2026?
According to Forrester, yes. The firm states that 2026 marks the end of the AI hype period as pressure to deliver measurable results intensifies. This does not mean AI is failing as technology. It means the business model around deploying AI is maturing, with CFOs demanding the same financial scrutiny for AI as for any other capital expenditure.
Should enterprises stop investing in AI given these failure rates?
No. The correction is about how companies invest, not whether they should. Companies with strong data integration achieve 10.3x ROI versus 3.7x for those with poor data connectivity. Specialized vendor-led AI projects succeed 67% of the time versus 33% for internal builds. The key is picking the right use cases, measuring rigorously, and planning realistic timelines.
