Know Your Agent (KYA) binds every AI agent action to a verified human identity. That single principle is the entire framework. Bot detection, liveness checks, behavioral scoring, and audit trails all serve one purpose: making that binding hold up under regulatory scrutiny and real-world fraud pressure.
Sumsub launched the first commercial KYA implementation in January 2026. The timing was deliberate: their own Identity Fraud Report 2025-2026 documented a 180% year-over-year increase in coordinated multi-step attacks. AI fraud agents are autonomous systems that generate fake documents, bypass verification interfaces, and learn from failed attempts. KYA exists because KYC and KYB were never designed for a world where the entity acting on your platform might not be human.
From KYC to KYA: Why Agent Identity Is a Fraud Problem
KYC (Know Your Customer) verifies that a human is who they claim to be before granting account access. KYB (Know Your Business) does the same for organizations. Both assume the entity being verified is either a person or a registered company.
AI agents are neither. An agent can open accounts, initiate payments, query databases, and interact with verification interfaces at machine speed. In 2025, 40% of companies and 52% of end users reported being fraud victims. AI-driven fraud increased 1,210% according to AU10TIX’s Global Fraud Report. The verification gap became impossible to ignore.
The core insight behind KYA: agent verification is not an extension of IAM. It is an extension of KYC. IAM answers “what can this agent access?” KYA answers “who is accountable when this agent does something wrong?” You need both. But without KYA, an agent can have perfect access controls and still perform actions that nobody is responsible for.
How KYA Works: The Four Components
Sumsub’s KYA framework breaks agent verification into four layers. Other vendors and industry bodies are converging on similar structures, but Sumsub’s remains the most concrete implementation available today.
Agent Detection and Classification
Before you can verify an agent, you need to know it exists. Sumsub’s Device Intelligence layer detects automated activity in real time and classifies it by risk level. Not all bots are threats. A price-comparison agent making API calls is fundamentally different from a credential-stuffing bot trying thousands of login combinations.
The classification determines the verification tier. Low-risk automation (price checks, data aggregation) gets lighter verification. High-risk automation (account creation, payment initiation, personal data access) triggers full KYA verification including human binding.
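In code, that mapping reduces to a small decision table. Here is a minimal Python sketch; the action labels, tier names, and function signature are illustrative assumptions, not taken from Sumsub's Device Intelligence API:

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "basic_registration"         # price checks, data aggregation
    MEDIUM = "organizational_binding"  # internal data processing
    HIGH = "full_kya"                  # payments, accounts, personal data

# Hypothetical action labels -- not Sumsub's actual signal names.
HIGH_RISK_ACTIONS = {"account_creation", "payment_initiation", "personal_data_access"}
MEDIUM_RISK_ACTIONS = {"internal_data_processing", "report_generation"}

def classify_agent(actions: set[str], is_automated: bool) -> RiskTier | None:
    """Map detected automation to the verification tier it must complete."""
    if not is_automated:
        return None  # human traffic stays on the normal KYC path
    if actions & HIGH_RISK_ACTIONS:
        return RiskTier.HIGH    # triggers full KYA, including human binding
    if actions & MEDIUM_RISK_ACTIONS:
        return RiskTier.MEDIUM
    return RiskTier.LOW         # read-only automation gets light verification
```

A price-comparison agent requesting only data aggregation lands in the low tier. The same agent requesting payment initiation jumps straight to full KYA.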
Agent-to-Human Binding
This is the core innovation. Every agent that crosses the risk threshold must be linked to a verified human identity. The human completes standard KYC verification (document check, liveness verification), and that verified identity becomes the agent’s accountability anchor.
When the agent acts, the action traces back to a specific person. If the agent opens a fraudulent account, the person behind it is identifiable. If the agent accesses restricted data, there is a named individual at the end of the audit trail.
This differs from traditional IAM’s concept of a “human sponsor” or “agent owner.” IAM links an agent to a user account. KYA links an agent to a person verified with government-issued ID. That verification depth is what separates the two approaches.
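Conceptually, the binding is a small record joining two identities. A minimal sketch of what such a record might persist; the field names are assumptions, not Sumsub's schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AgentHumanBinding:
    """Links an agent to a KYC-verified person. Field names are illustrative."""
    agent_id: str
    human_id: str                  # reference to the verified identity record
    document_type: str             # e.g. "passport" -- government-issued ID
    liveness_verified_at: datetime
    kyc_provider: str              # which vendor ran the verification

def accountable_person(binding: AgentHumanBinding) -> str:
    # Every agent action resolves to this identifier in the audit trail.
    return binding.human_id
```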
Continuous Risk Scoring
Static verification is not enough. An agent verified at 9 AM can be compromised by 9:15 AM. Sumsub applies continuous behavioral and contextual risk scoring throughout the agent’s lifecycle.
The system monitors for anomalies: sudden changes in transaction patterns, access to resources outside the agent’s normal scope, geographic impossibilities, and velocity spikes. When risk scores cross defined thresholds, the system can pause the agent’s activity, restrict its scope, or trigger a liveness re-verification of the bound human.
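A simplified sketch of how those signals might combine into a score and an escalating response; the weights and thresholds are invented for illustration, and real systems tune both per use case:

```python
# Illustrative anomaly weights and response thresholds -- not Sumsub's values.
ANOMALY_WEIGHTS = {
    "pattern_shift": 0.3,    # sudden change in transaction patterns
    "scope_violation": 0.4,  # access outside the agent's normal scope
    "geo_impossible": 0.5,   # activity from incompatible locations
    "velocity_spike": 0.3,   # burst of actions far above baseline
}

def risk_score(signals: set[str]) -> float:
    """Combine observed anomaly signals into a 0.0-1.0 score."""
    return min(1.0, sum(ANOMALY_WEIGHTS.get(s, 0.0) for s in signals))

def respond(score: float) -> str:
    """Escalating responses as the score crosses defined thresholds."""
    if score >= 0.8:
        return "pause_agent"     # halt all activity pending review
    if score >= 0.6:
        return "restrict_scope"  # drop to read-only permissions
    if score >= 0.4:
        return "reverify_human"  # liveness re-check of the bound person
    return "allow"
```

With these weights, a geographic impossibility combined with a velocity spike scores 0.8 and pauses the agent outright.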
Audit Trail and Evidence Generation
KYA produces what Sumsub calls “evidence-grade” audit trails. Every agent action is logged with the agent’s identity, the bound human’s verified identity, the active permission grant, the risk score, and the action outcome.
Without strong auditability, you cannot prove whether an action was authorized, compromised, or fabricated. This matters because the EU AI Act explicitly requires traceability of AI system actions, and financial regulators are applying similar expectations to agent-initiated transactions.
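Sumsub does not publish its log schema, but the fields above imply a structure. A minimal sketch that also hash-chains entries, a common technique (assumed here, not confirmed by Sumsub) for making after-the-fact tampering detectable in evidence-grade logs:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(agent_id: str, human_id: str, permission: str,
                risk_score: float, action: str, outcome: str,
                prev_hash: str) -> dict:
    """One audit record carrying the full identity chain for an agent action."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,      # the agent's identity
        "human_id": human_id,      # the bound, verified person
        "permission": permission,  # the grant active at action time
        "risk_score": risk_score,
        "action": action,
        "outcome": outcome,
        "prev_hash": prev_hash,    # chain link to the previous record
    }
    # Hashing the record, including the previous hash, makes silent edits detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry
```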
The Fraud Numbers Driving KYA
The urgency behind KYA is grounded in data. Sumsub's Identity Fraud Report 2025-2026 analyzed over 4 million fraud attempts across 2024-2025.
Multi-step fraud (coordinated schemes involving several stages) grew from 10% to 28% of all identity fraud. That is the 180% year-over-year increase: the share nearly tripled. Meanwhile, the overall fraud rate edged down from 2.6% to 2.2%. Fewer actors, more sophisticated operations. The amateurs are gone. The professionals run AI.
AI-assisted document forgery went from essentially zero to 2% of fraud cases in a single year. Tools like ChatGPT and Grok can now generate convincing fake identity documents on demand.
The real concern for 2026 is what the industry calls “AI fraud agents.” These are not simple bots or scripts. They combine generative AI, browser automation, and reinforcement learning into systems that run entire fraud operations with minimal human involvement. They generate fake IDs, interact with verification interfaces in real time, and refine their tactics based on what failed last time.
The World Economic Forum estimates the AI agents market could reach $236 billion by 2034. That valuation depends entirely on trust infrastructure keeping pace with deployment speed. KYA is a piece of that infrastructure.
Why Blocking All Automation No Longer Works
Most platforms still treat automation as inherently suspicious and block it by default. That worked when bots were simple scrapers. It fails when legitimate businesses deploy agents for procurement, customer service, compliance monitoring, and market analysis.
The average enterprise now runs 12 AI agents across departments, according to Salesforce’s 2026 Connectivity Report. B2B platforms receive a growing mix of human and agent traffic. KYA’s risk-based classification lets platforms allow legitimate agent activity while catching malicious agents, instead of blocking everything and creating friction for paying customers.
KYA and the EU AI Act
The EU AI Act's August 2, 2026 deadline for high-risk AI systems creates a direct regulatory driver for KYA. Article 14 requires human oversight of high-risk AI systems, and the Act's record-keeping and transparency requirements demand traceability back to responsible natural persons.
KYA maps to three specific requirements.
Human oversight (Article 14). KYA’s agent-to-human binding directly satisfies the requirement that a natural person can be identified as responsible for AI system actions. The binding is verified through identity documents and liveness checks, not merely documented in a policy.
Traceability (Article 12). KYA’s evidence-grade audit trails provide the technical logging the Act demands. Each action records the full identity chain from agent to human, the permission context, the risk score, and the outcome.
Transparency (Article 13). KYA requires that agents be identifiable as automated systems. The detection and classification layer labels agent activity before any verification occurs, preventing agents from passing as human users.
For DACH companies specifically, the overlap between EU AI Act requirements and existing DSGVO obligations around data processing accountability makes KYA doubly relevant. An agent processing personal data triggers both an AI Act compliance obligation and a DSGVO data controller accountability requirement. KYA addresses both by establishing who is responsible at the individual agent level, not just at the organization level.
Building a KYA Practice: What to Do This Quarter
KYA is not a product you buy and install. It is a compliance capability you build over time.
Inventory your agents. List every AI agent in your environment with what it accesses and whether it creates, modifies, or deletes data. Mark agents that interact with external systems or process personal data. Those are your high-risk agents and your first KYA candidates.
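A spreadsheet works, but a structured record makes the triage repeatable. A minimal sketch whose fields simply mirror the questions above:

```python
from dataclasses import dataclass

@dataclass
class AgentInventoryEntry:
    """One row in the agent inventory; fields mirror the triage questions."""
    name: str
    accesses: list[str]     # systems and data the agent touches
    writes_data: bool       # creates, modifies, or deletes records
    external_systems: bool  # interacts with systems outside your control
    personal_data: bool     # processes personal data

    @property
    def high_risk(self) -> bool:
        # These are your first KYA candidates.
        return self.external_systems or self.personal_data
```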
Map agents to humans. For each high-risk agent, identify one named person accountable for its actions. Not the team. Not the department. A person. If nobody is willing to own the agent, that agent should not be running.
Implement tiered verification. Not every agent needs full KYA. A read-only analytics agent in a sandbox needs less verification than a payment-initiating agent in production. Use risk-based tiers: low-risk agents get basic registration, medium-risk agents get organizational binding, high-risk agents get full human identity binding with liveness checks.
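The tiers translate into a plain policy table. A sketch with hypothetical step names; map them to whatever your verification vendors actually provide:

```python
# Hypothetical step names -- not tied to any vendor's API.
VERIFICATION_POLICY = {
    "low": ["register_agent"],
    "medium": ["register_agent", "bind_to_organization"],
    "high": ["register_agent", "bind_to_organization",
             "bind_to_human_kyc", "liveness_check"],
}

def outstanding_steps(tier: str, completed: set[str]) -> list[str]:
    """Verification steps an agent must still complete before it may run."""
    return [step for step in VERIFICATION_POLICY[tier] if step not in completed]
```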
Start the audit trail now. Even before adopting a full KYA solution, begin logging agent actions with human ownership metadata. When regulators come asking after August 2026, you want months of historical data, not a promise to start logging next quarter.
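You do not need a vendor to begin. A minimal interim logger, assuming nothing beyond the standard library; the field names are placeholders for whatever ownership metadata your inventory already holds:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent_audit")

def log_agent_action(agent_id: str, owner: str, action: str, target: str) -> None:
    """Interim structured log: every agent action plus its accountable human."""
    logger.info(json.dumps({
        "agent_id": agent_id,
        "owner": owner,    # the named person from your agent inventory
        "action": action,
        "target": target,
    }))
```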
Choose your stack. Sumsub leads on KYA specifically. Microblink and AU10TIX are building competing capabilities. For the IAM layer, combine KYA with identity platforms like Microsoft Entra Agent ID or Auth0’s Auth for GenAI. KYA handles accountability. IAM handles access. Together they cover the full agent identity lifecycle.
Frequently Asked Questions
What is Know Your Agent (KYA)?
Know Your Agent (KYA) is a risk-based framework for establishing and maintaining trust in AI agents. It works by defining the agent’s identity, binding it to a responsible human through verified identity checks (similar to KYC), and enforcing continuous monitoring, oversight, and auditability across all autonomous actions. Sumsub launched the first commercial KYA implementation in January 2026.
How is KYA different from IAM for AI agents?
IAM (Identity and Access Management) controls what an agent can access through authentication and authorization. KYA controls who is accountable for what an agent does by binding the agent to a verified human identity using government-issued ID and liveness verification. IAM assigns permissions. KYA assigns accountability. You need both for comprehensive agent governance.
Does the EU AI Act require Know Your Agent?
The EU AI Act does not mention KYA by name, but its requirements for human oversight (Article 14), traceability (Article 12), and transparency (Article 13) map directly to KYA’s capabilities. KYA’s agent-to-human binding satisfies Article 14’s requirement for identifying a responsible natural person. The August 2, 2026 deadline for high-risk AI system compliance makes KYA adoption increasingly urgent.
What types of AI agents need KYA verification?
Not every agent needs full KYA verification. A risk-based approach works best: low-risk agents (read-only analytics, data aggregation) need basic registration. Medium-risk agents (internal data processing, report generation) need organizational binding. High-risk agents (payment initiation, account creation, personal data access, external system interactions) need full human identity binding with liveness checks.
How does agent-to-human binding work in practice?
Agent-to-human binding requires a human to complete standard KYC verification, including document checks and liveness verification, before their identity is linked to an AI agent. Every action the agent performs is then traceable to that verified person. If the agent behaves anomalously, the system can trigger a re-verification of the bound human. This creates an accountability chain that survives regulatory audits and fraud investigations.
