Photo by Karolina Kaboompics on Pexels

The threat model for digital commerce is about to flip. For two decades, the primary attack vector was stolen credentials: credit card numbers, passwords, session tokens. By 2028, the primary attack vector will be stolen or manipulated AI agents. Gartner predicts that 25% of enterprise breaches will be tied to AI agent exploitation, and that agents will reduce the time to exploit account exposures by 50%. The digital commerce stack has a shopping layer, a payment layer, and now it desperately needs a trust layer.

That trust layer is being built right now, but it is fragmented across competing standards and missing a critical piece: legal clarity on liability when an agent acts beyond its mandate.

Related: Agentic Commerce: How AI Shopping Agents Replace Search

The Trust Gap That Nobody Planned For

AI agents can already browse product catalogs, compare prices, and pick the right item. Google’s Universal Commerce Protocol, Stripe’s Machine Payments Protocol, and Coinbase’s x402 handle the shopping and payment mechanics. The missing piece is identity: when an AI agent shows up at a merchant’s checkout, how does the merchant know that agent is authorized to act for you, is operating within the limits you set, and is actually the agent it claims to be?

A PYMNTS Intelligence and Trulioo survey of 350 global risk, compliance, and fraud leaders found that 56.3% of firms face threats from bots or agents, and 58.6% reported struggling with bot-driven fraud. Nearly 90% of enterprises said bot management is a major challenge. Companies lose an average of 3.1% of annual revenue to gaps in their digital identity systems, adding up to $94.9 billion in aggregate annual losses among surveyed firms.

The old digital identity stack, KYC (Know Your Customer) and KYB (Know Your Business), was designed for humans and companies. It verifies that a person is who they claim to be, or that a business is legitimately registered. Neither framework has a concept of verifying that an autonomous software agent is authorized to act on behalf of a specific human within specific parameters. That gap is where the Know Your Agent framework comes in.

Know Your Agent: The Third Identity Layer

The KYA framework, proposed by PYMNTS Intelligence and Trulioo in March 2026, adds a third layer to the identity verification stack. Where KYC verifies a person and KYB verifies a business, KYA verifies an agent. It answers four questions that traditional identity systems cannot:

  1. What is this agent? Establishing the agent’s identity, its creator, its version, and its operational history.
  2. What is it allowed to do? Confirming the specific permissions the agent holds and who granted them.
  3. Who is accountable? Maintaining a clear chain of responsibility from the agent back to a human or organization.
  4. Is it behaving within bounds? Continuously monitoring whether the agent’s actions stay within approved parameters.

The practical implementation is a concept called the Digital Agent Passport: a lightweight, tamper-proof identity credential that travels with the agent across interactions. Think of it as a machine-readable authorization document that merchants, payment networks, and platforms can verify in real time.
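No public schema for the Digital Agent Passport has been published, so the following is a minimal, hypothetical sketch of what such a credential might contain, with each field mapping to one of the four KYA questions. An HMAC tag stands in for whatever tamper-proofing mechanism a real implementation would use; every name here is illustrative.

```python
import hashlib
import hmac
import json

def issue_passport(issuer_key: bytes, claims: dict) -> dict:
    """Seal a set of agent claims with a tamper-evident tag."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify_passport(issuer_key: bytes, passport: dict) -> bool:
    """Recompute the tag; any edit to the claims invalidates it."""
    payload = json.dumps(passport["claims"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, passport["tag"])

passport = issue_passport(b"issuer-key", {
    "agent_id": "agent-7f3a",            # what is this agent?
    "creator": "ExampleAgentCo",
    "version": "2.1.0",
    "permissions": {"max_spend_usd": 120},  # what is it allowed to do?
    "accountable_party": "user:alice",      # who is accountable?
})
assert verify_passport(b"issuer-key", passport)

# An agent that quietly raises its own spending limit fails verification.
tampered = {"claims": {**passport["claims"], "permissions": {"max_spend_usd": 10000}},
            "tag": passport["tag"]}
assert not verify_passport(b"issuer-key", tampered)
```

The fourth KYA question, behavioral monitoring, happens at transaction time rather than inside the credential itself, which is why the passport carries limits for a verifier to enforce.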

Prove’s Verified Agent System

Identity verification company Prove launched its Verified Agent product in October 2025, targeting what it calls the “$1.7 trillion agentic commerce market opportunity.” The system combines credential issuance and verification, token provisioning, auditable transaction logs with co-signed records, and a shared trust registry. Instead of SMS one-time passwords (which an agent cannot receive), Prove uses passive multi-factor authentication, session-level authorization limits, and cryptographic proof. Prove is trusted by 19 of the top 20 U.S. banks and over 1,500 brands.
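Prove has not published its internals, but session-level authorization limits paired with an auditable log can be sketched in a few lines. Every class and field name below is an assumption for illustration, not Prove's actual API: the idea is simply that each session carries a spending cap and that every attempt, approved or not, is recorded.

```python
class AgentSession:
    """Hypothetical session with a spending cap and an auditable log."""

    def __init__(self, agent_id: str, limit_usd: float):
        self.agent_id = agent_id
        self.remaining = limit_usd
        self.log = []  # append-only record of every authorization attempt

    def authorize(self, merchant: str, amount: float) -> bool:
        approved = amount <= self.remaining
        if approved:
            self.remaining -= amount
        self.log.append({"merchant": merchant, "amount": amount, "approved": approved})
        return approved

session = AgentSession("agent-7f3a", limit_usd=200.0)
assert session.authorize("shoes.example", 120.0)      # within the session cap
assert not session.authorize("shoes.example", 150.0)  # only $80 remains; denied
assert len(session.log) == 2                          # denials are logged too
```

In a real deployment the log entries would be co-signed and the cap enforced server-side, but the shape of the check is the same: limits travel with the session, not with the card.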

The World ID Integration

Sam Altman’s World project took a different approach. In March 2026, World integrated with Coinbase’s AgentKit to let agents carry cryptographic proof that a verified human authorized them, using zero-knowledge proofs. The goal: prevent one person from deploying thousands of agents to game fee-based systems, without revealing the human’s actual identity.

How Tokenized Trust Actually Works

Tokenization is not new. Your phone has been tokenizing your credit card for Apple Pay since 2014. The innovation for agentic commerce is extending tokenization from “protecting a card number” to “authenticating an agent’s identity and intent.” Three competing approaches have emerged in the past six months.

Visa’s Trusted Agent Protocol

Visa launched its Trusted Agent Protocol in October 2025 with 10+ launch partners. Each agent receives an agent-specific cryptographic signature that bundles three things: information about the agent’s intent, consumer recognition data, and payment credentials. The merchant receives a single package that answers “what does this agent want to do?” and “is a real consumer behind it?” in one verification step.
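The single-package idea can be sketched as follows. This is not Visa's actual wire format; it only illustrates bundling the three described elements (intent, consumer recognition, payment credential) under one signature so the merchant performs one verification step. HMAC stands in for the agent-specific cryptographic signature.

```python
import hashlib
import hmac
import json

BUNDLE_KEYS = ("intent", "consumer", "credential")

def sign_bundle(agent_key: bytes, intent: dict, consumer: dict, credential: dict) -> dict:
    """Bundle the three elements and sign them as one package."""
    bundle = {"intent": intent, "consumer": consumer, "credential": credential}
    body = json.dumps(bundle, sort_keys=True).encode()
    sig = hmac.new(agent_key, body, hashlib.sha256).hexdigest()
    return {**bundle, "signature": sig}

def merchant_verify(agent_key: bytes, package: dict) -> bool:
    """One check answers both 'what does this agent want?' and 'is it genuine?'"""
    body = json.dumps({k: package[k] for k in BUNDLE_KEYS}, sort_keys=True).encode()
    expected = hmac.new(agent_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["signature"])

package = sign_bundle(b"agent-key",
                      intent={"action": "purchase", "item": "running shoes"},
                      consumer={"recognition": "user:alice"},
                      credential={"token": "tok_demo_123"})
assert merchant_verify(b"agent-key", package)
```

Because all three elements sit under one signature, swapping any one of them (say, altering the stated intent after signing) invalidates the whole package.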

By December 2025, hundreds of secure agent-initiated transactions had been completed. Visa now has 100+ partners across the commerce ecosystem, 30+ actively building in the VIC sandbox, and 20+ agent platforms integrating. Partners include Skyfire, Nekuda, PayOS, Ramp, Consumer Reports, Price.com, and Akamai. Visa predicts millions of consumers will use AI agents to complete purchases by the 2026 holiday season.

Mastercard’s Agentic Tokens and Verifiable Intent

Mastercard’s Agent Pay takes a different architectural approach. Instead of a new credential type, Mastercard issues Agentic Tokens: dynamic, cryptographically secure credentials that extend the same tokenization technology used in mobile contactless payments, secure card-on-file, and Payment Passkeys. The tokens use a Dynamic Token Verification Code formatted for standard card payment fields, meaning merchants do not need to change their payment infrastructure or write new code.

The more interesting piece is Verifiable Intent, built with Google. It creates a tamper-resistant record of what a user authorized versus what the agent actually did. If your agent was instructed to “buy running shoes under $120” and instead purchased a $300 pair, Verifiable Intent provides the cryptographic evidence to prove the agent exceeded its mandate. That evidence could be decisive in a dispute or chargeback.
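The mandate-versus-action comparison can be made concrete with a small sketch. This is not Mastercard's implementation; the field names are invented, and a SHA-256 digest stands in for whatever tamper-resistance the real system uses. The point is that both the instruction and the outcome are sealed, so neither side can rewrite history in a dispute.

```python
import hashlib
import json

def seal(record: dict) -> dict:
    """Attach a digest so any later edit to the record is detectable."""
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return {**record, "digest": digest}

def within_mandate(mandate: dict, action: dict) -> bool:
    return action["price_usd"] <= mandate["max_price_usd"]

# The article's example: "buy running shoes under $120" vs. a $300 purchase.
mandate = seal({"item": "running shoes", "max_price_usd": 120})
action = seal({"item": "running shoes", "price_usd": 300})

assert not within_mandate(mandate, action)  # cryptographic evidence of overreach
```

In a chargeback, the sealed mandate is what lets the cardholder prove the agent exceeded its instructions rather than arguing it after the fact.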

Related: Visa vs. Mastercard: The Race to Control How AI Agents Pay

Cloudflare’s Web Bot Auth

Cloudflare’s Web Bot Auth operates at the infrastructure layer rather than the payment layer. The protocol uses HTTP Message Signatures with public key cryptography to give each agent a stable, verifiable identifier. Each signature is time-bound and non-replayable, so a captured request cannot be reused. Cloudflare partnered with Visa, Mastercard, and American Express to ensure agents built with the Cloudflare Agents SDK can shop autonomously at millions of merchants globally.

This matters because authentication has to happen before payment. If a merchant cannot distinguish a legitimate AI agent from a malicious bot at the HTTP request level, no amount of payment-layer tokenization will help. Cloudflare’s position in front of roughly 20% of all web traffic makes this a natural choke point for agent verification.
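A simplified sketch of a time-bound, non-replayable request signature follows. The real protocol uses HTTP Message Signatures with public-key cryptography; HMAC is used here only to keep the example standard-library-only, and the function names and nonce scheme are illustrative assumptions.

```python
import hashlib
import hmac
import time

seen_nonces = set()  # a real verifier would expire these alongside the time window

def sign_request(key: bytes, method: str, path: str, nonce: str, ts: float) -> str:
    """Sign the request line plus a nonce and timestamp."""
    msg = f"{method} {path} {nonce} {int(ts)}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(key: bytes, method: str, path: str, nonce: str,
                   ts: float, sig: str, max_age: float = 300.0) -> bool:
    if time.time() - ts > max_age or nonce in seen_nonces:
        return False  # stale or replayed
    expected = sign_request(key, method, path, nonce, ts)
    if not hmac.compare_digest(expected, sig):
        return False
    seen_nonces.add(nonce)
    return True

now = time.time()
sig = sign_request(b"agent-key", "GET", "/catalog", "n1", now)
assert verify_request(b"agent-key", "GET", "/catalog", "n1", now, sig)
assert not verify_request(b"agent-key", "GET", "/catalog", "n1", now, sig)  # replay rejected
```

The timestamp bounds how long a signature is valid; the nonce ensures that even within that window, each signed request is accepted exactly once.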

The Liability Gap Nobody Wants to Talk About

The trust infrastructure is being built. The legal infrastructure is not. No jurisdiction has enacted regulation specifically addressing autonomous AI purchasing. When an AI agent makes an unauthorized or harmful purchase, who is liable? The consumer who deployed the agent? The company that built the agent? The payment network that processed the transaction? The merchant that accepted it?

Clifford Chance identified a “liability gap” that existing contracts may not cover. In the EU, the AI Act, PSD3, GDPR, and Consumer Rights Directive overlap without clearly answering this question. Zac Cohen, Trulioo’s Chief Product Officer, put it bluntly: “That’s the big sticking point for a lot of these transactions to really take off.”

Visa has extended its zero-liability guarantee to AI agent-initiated transactions, meaning cardholders will not be held responsible for unauthorized charges regardless of whether a human or an agent made the purchase. That protects consumers but pushes the liability question downstream to merchants and agent providers. If a compromised agent makes fraudulent purchases, someone other than the cardholder will absorb the cost. The industry has not agreed on who that is.

The Fiserv-Mastercard partnership points toward one model. Fiserv’s integration evaluates whether an AI agent has been approved to act on behalf of a user and whether the transaction falls within predefined parameters before processing it. If the agent exceeds its authorization, the transaction is blocked before it reaches the payment network. That shifts liability management from after-the-fact disputes to pre-transaction gatekeeping.
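The gatekeeping model described above reduces to two checks run before anything touches the payment network: is the agent approved at all, and does this transaction fall inside its predefined parameters? The sketch below is hypothetical, with invented names and a hard-coded registry, not Fiserv's implementation.

```python
# Hypothetical registry of approved agents and their predefined parameters.
APPROVED_AGENTS = {
    "agent-7f3a": {"max_amount_usd": 120.0, "merchants": {"shoes.example"}},
}

def gatekeep(agent_id: str, merchant: str, amount: float) -> str:
    """Block unauthorized transactions before they reach the payment network."""
    params = APPROVED_AGENTS.get(agent_id)
    if params is None:
        return "blocked: agent not approved"
    if merchant not in params["merchants"] or amount > params["max_amount_usd"]:
        return "blocked: outside authorized parameters"
    return "forwarded to payment network"

assert gatekeep("agent-7f3a", "shoes.example", 99.0) == "forwarded to payment network"
assert gatekeep("agent-7f3a", "shoes.example", 300.0).startswith("blocked")
```

The design trade-off is the one the article names: a denied transaction here never becomes a dispute, because it never becomes a transaction at all.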

Related: B2A (Business-to-Agent): Why Your API Documentation Is Now Your Landing Page

What This Means for Businesses

The companies that benefit most from the trust layer buildout are not the ones selling AI agents. They are the ones whose infrastructure sits between agents and transactions: payment processors, identity verification providers, and web infrastructure companies. Fiserv, Prove, Trulioo, and Cloudflare are positioned as the gatekeepers of agentic commerce, not because they build the agents but because they verify them.

For merchants, the actionable takeaway is straightforward: integrate with at least one agent authentication system before the 2026 holiday season. Visa’s SVP of Product Rubail Birwadker said it plainly: “This holiday season marks the end of an era.” Merchants who cannot accept verified agent transactions will lose sales to merchants who can, the same way merchants who could not accept mobile payments lost ground a decade ago.

For enterprises deploying AI agents internally, the KYA framework provides a useful checklist: can you identify every agent operating in your environment, define what each agent is permitted to do, trace every agent action back to a human authorizer, and monitor agent behavior in real time? If any answer is no, you have a trust gap.

Frequently Asked Questions

How do AI agents authenticate themselves for digital commerce transactions?

AI agents use tokenized authentication systems like Visa’s Trusted Agent Protocol or Mastercard’s Agentic Tokens. These systems issue cryptographic credentials that bundle the agent’s identity, its authorization scope, consumer recognition data, and payment information into a single verifiable package. Infrastructure providers like Cloudflare add HTTP-level authentication using message signatures and public key cryptography.

What is the Know Your Agent (KYA) framework?

KYA is a third identity verification layer proposed by PYMNTS Intelligence and Trulioo that sits alongside KYC (Know Your Customer) and KYB (Know Your Business). It verifies four things about an AI agent: what the agent is, what it is permitted to do, who is accountable for its actions, and whether it is operating within approved parameters. The practical implementation includes Digital Agent Passports and continuous behavioral monitoring.

Who is liable when an AI agent makes an unauthorized purchase?

This remains legally unresolved. Visa has extended its zero-liability guarantee to cover AI agent-initiated transactions, protecting cardholders. But liability for merchants, AI agent providers, and platforms is undefined. No jurisdiction has enacted regulation specifically addressing autonomous AI purchasing. Law firm Clifford Chance has identified a liability gap that existing contracts may not cover.

What is the difference between tokenized payments and tokenized trust for AI agents?

Tokenized payments replace sensitive card numbers with secure tokens to prevent data theft during transactions. Tokenized trust goes further: it authenticates the AI agent’s identity, verifies its authorization scope, and cryptographically records what the agent was instructed to do versus what it actually did. Mastercard’s Verifiable Intent system, for example, creates tamper-resistant records of agent mandates.

How can businesses prepare for AI agent commerce?

Businesses should integrate with at least one agent authentication system (Visa Trusted Agent Protocol, Mastercard Agent Pay, or Cloudflare Web Bot Auth) before the 2026 holiday season. Enterprises deploying agents internally should implement a KYA framework: identify every agent, define permissions, maintain human accountability chains, and monitor agent behavior continuously.