Photo by RDNE Stock Project on Pexels (free license)

Nearly four out of five CFOs now have AI agents handling at least a quarter of their finance work. That number comes from Maximor’s 2026 Finance AI Adoption Benchmarking Report, a survey of 100 CFOs at mid-market companies with $50M to $500M in revenue. But buried in the same report is a number that should alarm anyone selling automation to finance teams: only 14% of those CFOs completely trust AI to produce accurate accounting data on its own. And 86% have encountered inaccurate or hallucinated outputs from AI tools in finance tasks.

That is the gap this article is about. Not whether CFOs are adopting AI agents (they are), but whether the agents are ready for the one thing finance cannot compromise on: being right.

Related: AI Agent ROI: What Enterprise Deployments Actually Cost

The Adoption Numbers Are Impressive. The Trust Numbers Are Not.

The headline statistics paint a rosy picture. Deloitte’s Finance Trends 2026 survey found 63% of finance leaders have fully deployed and actively use AI solutions. Wolters Kluwer reports that 44% of finance teams will use agentic AI in 2026, a 600% increase from the prior year. And CFOs are spending real money: 78% are actively investing in AI, and 54% of finance chiefs say integrating AI agents will be a digital transformation priority this year.

But the trust data tells a different story. From the same Maximor report:

  • 14% of CFOs completely trust AI for accurate accounting data
  • 41% mostly trust it (with significant caveats)
  • 86% have encountered hallucinated or inaccurate data from AI tools
  • 97% view human oversight as critical; 66% call it “extremely or very critical”

The gap between “we use it” and “we trust it” is enormous. And it gets wider when you look at actual deployment depth. PYMNTS Intelligence research found that only 7% of enterprise CFOs have deployed agentic AI in live finance workflows. Another 5% are running pilots. The 79% number from Maximor likely includes basic analytics and ML-assisted tools, not truly autonomous agents making decisions. When you narrow it to agents that actually act independently, the adoption rate shrinks dramatically.

Why Finance Has a Lower Error Tolerance Than Any Other Function

In marketing, if an AI agent writes a mediocre email subject line, you lose some open rate. In customer service, if a chatbot gives a wrong answer, a human can follow up. In finance, a single misplaced decimal can trigger a restatement. A hallucinated number in a tax filing can mean regulatory action. An incorrect journal entry can cascade through financial statements, investor reports, and audit findings.

This is why 97% of finance leaders say human oversight is critical. The Maximor report’s framing is worth quoting directly: “Human oversight is not resistance. It is responsible adoption.” CFOs want automation that knows when to act and when to pause for judgment.

Related: Human-in-the-Loop AI Agents: When to Let Agents Act and When to Hit Pause

The 86% Hallucination Rate Is the Real Story

Forget the adoption numbers for a moment. The fact that 86% of finance teams have seen their AI tools produce inaccurate or hallucinated data is staggering. In a profession built on verifiable, auditable, traceable numbers, this is not a “trust issue” that better marketing or more demos can solve. It is a fundamental accuracy problem.

The CFA Institute’s analysis of agentic AI for finance identifies the core tension: finance demands “verifiable, traceable, and explainable outputs.” Current large language models provide none of those guarantees natively. They produce probabilistic outputs, not deterministic ones. When you ask an LLM to classify an expense, it gives you its best guess. When you ask it to reconcile accounts, it generates a plausible answer. Plausible is not the same as correct.
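The difference between "plausible" and "correct" is concrete. A deterministic check, unlike an LLM's generated answer, either passes or fails the same way every time. Here is a minimal sketch (not from any vendor's product) of what a deterministic reconciliation check looks like, using exact decimal arithmetic:

```python
from decimal import Decimal

def reconcile(bank_total: Decimal, ledger_entries: list[Decimal],
              tolerance: Decimal = Decimal("0.00")) -> bool:
    """Deterministically verify that ledger entries sum to the bank total.

    Unlike an LLM's plausible-sounding answer, this check is reproducible,
    auditable, and fails loudly when the numbers do not tie out.
    """
    difference = abs(sum(ledger_entries, Decimal("0")) - bank_total)
    return difference <= tolerance
```

A one-cent discrepancy fails this check every single run; an LLM asked the same question may confidently report that the accounts reconcile.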

Where Hallucinations Actually Hurt

The risk is not that an AI agent will fabricate an entire financial statement. The risk is subtle errors that humans might not catch:

  • A vendor invoice categorized under the wrong GL code, throwing off departmental budgets
  • A tax calculation that uses a rate from a different jurisdiction
  • A cash flow projection that interpolates between data points instead of using actual figures
  • A compliance check that misreads a regulatory threshold by one decimal point

Each of these is individually small. Collectively, they erode the reliability that makes financial data useful. And the more you automate, the more these micro-errors compound before a human reviews the output.
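Each micro-error in the list above is exactly the kind of thing a cheap deterministic validator can catch before it compounds. As a hedged illustration, the GL codes and tax rates below are hypothetical placeholders; a real deployment would pull them from the ERP's chart of accounts and a maintained tax table:

```python
from decimal import Decimal

# Hypothetical chart of accounts and jurisdiction tax rates; in practice
# these would be loaded from the ERP and an authoritative tax source.
VALID_GL_CODES = {"6100", "6200", "7300"}
TAX_RATES = {"CA": Decimal("0.0725"), "NY": Decimal("0.04")}

def validate_agent_entry(entry: dict) -> list[str]:
    """Run deterministic sanity checks on an agent-produced entry.

    Returns a list of issues; an empty list means the entry passed every
    rule. Anything flagged goes to a human instead of the ledger.
    """
    issues = []
    if entry["gl_code"] not in VALID_GL_CODES:
        issues.append(f"unknown GL code {entry['gl_code']}")
    expected = TAX_RATES.get(entry["jurisdiction"])
    if expected is not None and entry["tax_rate"] != expected:
        issues.append(
            f"tax rate {entry['tax_rate']} does not match "
            f"{expected} for {entry['jurisdiction']}"
        )
    return issues
```

Checks like these do not make the agent smarter; they make its mistakes visible before a human would otherwise have to hunt for them.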

KPMG’s latest survey found that cybersecurity is the single greatest barrier to AI strategy goals in finance, with half of leaders planning $10M to $50M in spending just to “secure agentic architectures, improve data lineage, and harden model governance.” The hidden cost of agentic AI is not the AI itself. It is the verification infrastructure you need to build around it.

The Tools CFOs Are Actually Deploying

Despite the trust gap, real money is flowing into finance-specific AI tools. The difference between tools that are gaining traction and those that are not comes down to one thing: auditability.

Intuit + Anthropic announced a multi-year partnership in February 2026, integrating Anthropic’s Claude Agent SDK directly into the Intuit platform. Mid-market businesses can build custom AI agents across TurboTax, Credit Karma, QuickBooks, and Mailchimp with no coding required. Intuit CTO Alex Balazs described it as “custom AI agents that truly understand their finances, workflows, and industry, and can take action on their behalf.” The integration rolls out in spring 2026.

Basis, an AI-native accounting platform, raised $100M at a $1.15B valuation with investors including Accel, GV (Google Ventures), and former Goldman Sachs CEO Lloyd Blankfein. Thirty percent of the top 25 U.S. accounting firms already use it. The platform automates work that requires 10 to 15 hours of manual labor per tax return, with CPA firms reporting 30% to 50% productivity improvements.

FloQast launched its AI Agent Builder for accounting close management, with ISO 42001 AI certification and Workday GL integration. The no-code agent builder lets accounting teams create custom agents for reconciliation workflows without engineering support.

SAP Joule now includes 15 AI agents for finance, HR, and supply chain. The Cash Management Agent alone saves up to 70% of time on manual cash positioning. Joule Studio reached general availability in Q1 2026, letting enterprises customize agent behavior within SAP’s governance framework.

The Pattern: Embedded Beats Standalone

The tools gaining CFO trust share a common trait: they are embedded in systems finance teams already use. FloQast agents work inside the close management workflow. SAP Joule agents operate within the ERP. Intuit agents run inside QuickBooks. Nobody is asking a CFO to adopt a new standalone AI platform and route sensitive financial data to it. The agents come to where the data already lives.

This matters because 86% of CFOs cited legacy tools as a significant or moderate barrier to AI adoption. Embedded agents sidestep the integration problem entirely.

Related: Goldman Sachs and Anthropic Build AI Agents for Wall Street's Back Office

The Strategic Time Lie

Here is the most underreported finding in the Maximor data. Ninety-six percent of CFOs agree that AI enables more time for strategic work. This is the promise every vendor makes: automate the routine, free up the human for higher-value thinking.

But only 27% of CFOs actually spend half or more of their time on strategy. Another 69% still spend at least half their time on day-to-day operations, even with AI agents handling chunks of their workload.

What happened? The automation freed up time, but operations expanded to fill it. New reporting requests. More granular dashboards. Faster close cycles that create expectations for even faster ones. The AI did not buy strategic time. It raised the baseline of what “operational” means.

This connects directly to the ROI problem Deloitte identified: only 21% of finance leaders report clear, measurable ROI from AI. If the freed-up time just gets consumed by more operational work, the ROI never materializes in the way the business case promised. The hours saved are real. The strategic value of those hours is not.

The Skills Gap Compounds the Problem

Even when CFOs carve out strategic time, only 47% believe their teams are equipped to use AI tools effectively, despite 78% actively investing. Deloitte found that 64% of finance leaders plan to prioritize AI, automation, and data analysis skills in their hiring. The gap between buying the tools and having people who can use them properly is where most of the trust deficit lives.

The role of “accountant” is shifting toward what the industry is calling an “AI supervisor”: someone who reviews agent output, handles exceptions, and makes judgment calls the agent cannot. This is not a future prediction. It is the operating model that the 14% who fully trust their AI tools have already built.

Governance: 74% Plan to Deploy, 21% Have Governance

The Deloitte State of AI in Enterprise report found that 74% of organizations plan to deploy agentic AI within two years. Only 21% have mature governance models for autonomous agents. AuditBoard reports that over 80% of organizations are concerned about AI risks, but only 25% have fully implemented governance programs.

That is a 53-percentage-point gap between intent and readiness. And it is getting wider, not narrower, because deployment timelines are accelerating faster than governance programs can mature.

What “Governance” Actually Means for Finance AI

For a finance team, AI governance is not an abstract compliance exercise. It means:

  • Audit trails: Every agent action must be traceable. When an AI agent reclassifies a transaction, you need to know why, what data it used, and what its confidence level was.
  • Approval workflows: Define which actions agents can take autonomously and which require human sign-off. A $500 invoice categorization might be autonomous. A $500,000 journal entry should not be.
  • Error budgets: Set explicit thresholds for acceptable error rates by task type. If your reconciliation agent exceeds a 2% error rate, it should automatically escalate to human review.
  • Model versioning: Track which model version produced which outputs. When you update an agent, you need to know if the new version’s outputs differ from the old one’s on the same inputs.
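The four requirements above are not abstract; they can be expressed as a single routing decision that every agent action passes through. The sketch below is illustrative, not any vendor's implementation, and the dollar threshold and 2% error budget are the hypothetical values used as examples in the list:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

AUTO_APPROVE_LIMIT = 10_000   # hypothetical: actions above this need sign-off
ERROR_BUDGET = 0.02           # hypothetical: escalate when error rate exceeds 2%

@dataclass
class AuditRecord:
    """Audit trail entry: who (model version), what, and with what confidence."""
    action: str
    amount: float
    model_version: str
    confidence: float
    decision: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[AuditRecord] = []

def route_action(action: str, amount: float, confidence: float,
                 model_version: str, recent_error_rate: float) -> str:
    """Decide whether an agent action runs autonomously or goes to a human."""
    if recent_error_rate > ERROR_BUDGET:
        decision = "escalate_all"    # error budget blown: everything to review
    elif amount >= AUTO_APPROVE_LIMIT:
        decision = "human_signoff"   # high-value actions always need sign-off
    else:
        decision = "autonomous"
    # Every decision is logged with the model version that produced it.
    audit_log.append(AuditRecord(action, amount, model_version, confidence, decision))
    return decision
```

Under this scheme a $500 invoice categorization runs autonomously, a $500,000 journal entry is routed for sign-off, and a breached error budget sends everything to human review, with each decision written to an audit trail tagged by model version.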

The companies getting governance right are not building it as a separate layer. FloQast’s ISO 42001 certification, SAP Joule’s embedded governance framework, and Intuit’s “trusted financial intelligence” positioning all reflect the same insight: governance has to be built into the agent platform, not bolted on after deployment.

One data point that should give every CFO pause: KPMG found that enterprise agentic AI deployment dropped from 42% to 26% in a single quarter. Companies are pulling back from rushed pilots to focus on getting governance right before scaling. This is not a retreat from AI. It is a recognition that moving fast without controls in finance is how you end up on the front page for the wrong reasons.

Frequently Asked Questions

What percentage of CFOs use AI agents in accounting?

79% of CFOs report that AI agents handle at least 25% of their finance workload, according to Maximor’s 2026 Finance AI Adoption Benchmarking Report. However, only 7% have deployed truly autonomous agentic AI in live finance workflows, per PYMNTS Intelligence research.

Can CFOs trust agentic AI with financial data?

Only 14% of CFOs completely trust AI for accurate accounting data on its own. 86% of finance teams have encountered inaccurate or hallucinated data from AI tools. The industry consensus is that human oversight remains critical, with 97% of finance leaders viewing it as essential for accuracy.

What are the biggest barriers to AI adoption in finance?

The top barriers are data trust and reliability (35% cite it as the top barrier to ROI), legacy systems that cannot support modern AI (86% of CFOs cite this), skills gaps (only 47% believe their teams are equipped), and governance immaturity (only 21% have mature AI governance models).

Will AI replace accountants and finance professionals?

AI agents are shifting the accountant role toward an “AI supervisor” model, not eliminating it. Accountants increasingly review agent output, handle exceptions, and make judgment calls that agents cannot. 96% of CFOs say AI enables more strategic work, but the human oversight requirement means finance professionals remain essential.

How much does AI governance cost for finance teams?

KPMG found that half of finance leaders plan to spend $10M to $50M to secure agentic architectures, improve data lineage, and harden model governance. The governance infrastructure around finance AI often costs more than the AI tools themselves.