
The European Data Protection Supervisor just told every AI agent developer exactly what scares regulators most. In its TechSonar 2025-2026 report, published November 24, 2025, the EDPS identified agentic AI as one of six emerging technologies warranting immediate data protection scrutiny, and then listed 12 specific risks that conventional GDPR compliance programs do not adequately address. This is not theoretical guidance from a think tank. The EDPS supervises data protection compliance for every EU institution, body, office, and agency. When they flag a technology, enforcement priorities follow.

What makes the EDPS guidance distinct from the general GDPR framework for AI agents is specificity. The EDPS did not repeat generic principles about data minimization and purpose limitation. Instead, they mapped exactly where agentic systems break those principles: autonomous purpose expansion, unpredictable data gathering, cascading bias across multi-agent chains, and accountability fragmentation across provider layers. If you are building or deploying AI agents in Europe, this is the most concrete regulatory signal you will get before the EU AI Act’s high-risk provisions become enforceable on August 2, 2026.

Related: GDPR and AI Agents: Data Protection When Machines Make Decisions

The 12 Risks the EDPS Identified for AI Agents

The EDPS maintains a dedicated TechSonar page on agentic AI that goes beyond its published report. The analysis draws a distinction that matters: AI agents are single systems that autonomously perform tasks and use tools, while agentic AI refers to coordinated multi-agent systems handling larger objectives. Both carry data protection risks, but multi-agent architectures multiply them.

Here are the 12 risks the EDPS flagged, grouped by the GDPR principle they threaten.

Extensive Data Access and Unpredictable Gathering

Agents embedded in operating systems or enterprise platforms require broad access to data stores, APIs, and communication channels. The EDPS notes that it becomes “challenging to determine in advance what personal data is gathered, how it is used, and for what specific purposes.” This directly undermines GDPR Article 5(1)(b), the purpose limitation principle, because the processing chain emerges at runtime rather than being defined in advance.

The practical problem: a recruiting agent that starts by reading a job description might end up accessing calendar data, internal chat messages, and third-party salary benchmarks within a single task execution. No privacy notice covers that scope because nobody predicted it.

Autonomous Purpose Expansion and Comprehensive Profiling

The EDPS specifically warns that AI agents “might autonomously determine new uses for personal data.” Unlike a traditional system where a developer codes each processing step, an agent with tool access can reason its way into processing activities nobody designed. Combined with the risk of comprehensive profiling, where “personal data aggregated from diverse sources may be combined in unforeseen ways, potentially without user consent,” this creates a compliance gap that no static Data Protection Impact Assessment can close.

The Spanish AEPD’s 71-page guidance on agentic AI, published February 2026, reinforces this point: “When properly built, agentic systems can become privacy-enhancing technologies; poorly built, they become breach sources.”

Transparency Failure and Rights Implementation

Two of the most actionable EDPS findings relate to transparency and data subject rights. The EDPS states it is “difficult for users to understand how personal data would be used, what conclusions would be drawn from personal data and why certain actions would be taken on their behalf.” That is a direct flag on GDPR Articles 13 and 14, the information obligations that require clear disclosure of processing purposes.

Worse, implementing data subject rights like access and erasure becomes “very difficult to achieve” when data flows through multiple agents, each maintaining separate context windows and memory stores. If a user requests deletion under Article 17, you need to identify and purge their data from every agent’s memory, every cached inference, and every downstream system the agent interacted with.
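One way to make Article 17 tractable in a multi-agent system is to register every agent's memory store with a central coordinator that fans out deletion requests and returns a per-store report. The sketch below is a minimal illustration of that pattern; `AgentMemory` and `ErasureCoordinator` are hypothetical names, not part of any agent framework.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Hypothetical per-agent memory store, keyed by data-subject ID."""
    name: str
    records: dict = field(default_factory=dict)  # subject_id -> stored data

    def erase(self, subject_id: str) -> bool:
        """Remove all data held for this subject; True if anything was deleted."""
        return self.records.pop(subject_id, None) is not None

class ErasureCoordinator:
    """Fans an Article 17 erasure request out to every registered memory store."""
    def __init__(self):
        self.stores: list[AgentMemory] = []

    def register(self, store: AgentMemory) -> None:
        self.stores.append(store)

    def erase_subject(self, subject_id: str) -> dict[str, bool]:
        # The per-store report doubles as evidence that the request
        # was propagated to every agent, useful for accountability.
        return {store.name: store.erase(subject_id) for store in self.stores}
```

The deletion report is the important design choice: a regulator asking "was this user's data purged from every agent?" gets a concrete, store-by-store answer rather than a shrug.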

Bias Cascading, Accountability Fragmentation, and Third-Party Sharing

The remaining risks compound each other. Biased decisions can cascade through multiple autonomous actions before anyone detects them. Accountability fragments across AI developers, deploying organizations, and users. Personal data gets shared with third parties whose “separate personal data collection and processing practices” the original controller may not even know about.

The EDPS summarizes the human impact bluntly: these systems risk “undermining human dignity and autonomy by reducing individuals to data points” and exerting a “manipulative effect on the person concerned.”

What the EDPS Actually Requires: Three Guidance Documents

The EDPS did not drop a single document. It published three interconnected pieces of guidance between October and November 2025 that together form a compliance framework for AI systems, including agents.

Revised Generative AI Orientations (October 28, 2025)

The revised generative AI guidance updates the EDPS’s original June 2024 orientations. EDPS Wojciech Wiewiórowski stated: “Artificial intelligence is an extension of human ingenuity, and the rules governing it must evolve just as dynamically.”

The key updates include a new compliance checklist for lawfulness assessments and clarified distinctions between controller, joint controller, and processor roles. For agent developers, the controller/processor distinction is the critical piece. In a multi-agent architecture involving a model provider (like OpenAI or Anthropic), a platform layer (like LangChain or CrewAI), an enterprise integrator, and external API services, who is the controller? The answer determines who is liable for data breaches, who must respond to subject access requests, and who faces the fines.

Risk Management Guidance for AI Systems (November 11, 2025)

The 55-page risk management guidance adopts ISO 31000:2018 as its risk management methodology and covers five core technical areas: fairness, accuracy, data minimization, security, and data subject rights.

The strongest statement in the document: interpretability and explainability are called “sine qua non” conditions for GDPR compliance. Not recommended practices. Essential prerequisites. For agents making decisions about people, this means you need to be able to explain not just the final output but the reasoning chain that led to it, including which tools were called, what data was retrieved, and how intermediate results influenced the final decision.
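Rendering a reasoning chain for review can be as simple as recording each step in a structured form and converting the sequence into a readable explanation on demand. This is a sketch under assumed names (`ReasoningStep`, `explain`), not a prescribed format from the EDPS guidance.

```python
from dataclasses import dataclass

@dataclass
class ReasoningStep:
    """One step in an agent's decision process."""
    tool: str                 # which tool was called
    data_categories: list     # what personal data was touched
    summary: str              # how the result influenced the decision

def explain(steps: list, decision: str) -> str:
    """Render the recorded chain as a human-reviewable explanation."""
    lines = [f"Decision: {decision}", "Reasoning chain:"]
    for i, step in enumerate(steps, 1):
        cats = ", ".join(step.data_categories) or "none"
        lines.append(f"  {i}. {step.tool} (data: {cats}) -> {step.summary}")
    return "\n".join(lines)
```

The point is that explainability is a data-capture problem first: if each step's tool, data categories, and effect are recorded at runtime, producing the explanation later is trivial.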

The guidance includes three annexes with metrics for AI evaluation, visual risk overviews, and phase-specific checklists covering both development and procurement scenarios.

Related: AI Agent Privacy in 2026: Why Traditional Governance Breaks When Agents Act Autonomously

Four Runtime Controls Every Agent Developer Needs

The EDPS guidance is intentionally principle-based: it identifies what must be protected without prescribing specific implementations. But the IAPP’s analysis by Keivan Navaie of Lancaster University translated these principles into four concrete engineering controls that map directly to the EDPS requirements.

Purpose Locks and Goal-Change Gates

Treat agent objectives as inspectable first-class objects in your system. When an agent’s scope expands during execution, whether because a user prompt is ambiguous or because a sub-agent requests additional data access, the system must surface the proposed change, check compatibility with the original lawful basis, and either block, request consent, or escalate to a human operator.

This is not optional under the EDPS framework. Autonomous purpose expansion was explicitly flagged as a risk. A purpose lock is the engineering answer.
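A minimal purpose lock can be an allowlist of tool calls per task type, with an escalation hook for anything outside it. The sketch below assumes a hypothetical recruiting task (`screen_cv`) and tool names; the escalation callback is where a real system would block, request consent, or page a human operator.

```python
# Hypothetical mapping from task type to the tool calls its lawful basis covers.
APPROVED_TOOLS = {
    "screen_cv": {"read_job_description", "read_cv", "score_candidate"},
}

def goal_change_gate(task_type: str, requested_tool: str, escalate) -> bool:
    """Allow in-scope tool calls; surface anything that expands the purpose.

    `escalate(task_type, requested_tool)` decides the out-of-scope case:
    return False to block, or True after e.g. human approval.
    """
    if requested_tool in APPROVED_TOOLS.get(task_type, set()):
        return True
    # Out-of-scope request: do not execute, hand off to the gate's policy.
    return escalate(task_type, requested_tool)
```

With a deny-by-default escalation handler, an agent screening CVs that suddenly tries to read calendar data is stopped at the gate rather than discovered in an audit.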

End-to-End Execution Records

Every agent run needs a durable trace capturing the agent-generated plan, each tool or API call made, the data categories processed at each step, data destinations (including third-party services), and state updates. This is the technical prerequisite for both GDPR Article 30 records of processing activities and the EU AI Act’s automatic logging requirements.
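An execution record can be an append-only event log per run, exported in a durable machine-readable format. The following is one possible shape, assuming hypothetical field names; any real deployment would align the schema with its own Article 30 register.

```python
import json
import time

class ExecutionRecorder:
    """Append-only trace of one agent run: plan, tool calls, data, destinations."""

    def __init__(self, run_id: str, plan: str):
        self.events = [
            {"run_id": run_id, "event": "plan", "plan": plan, "ts": time.time()}
        ]

    def log_tool_call(self, tool: str, data_categories: list, destination: str):
        self.events.append({
            "event": "tool_call",
            "tool": tool,
            "data_categories": data_categories,  # e.g. ["employment_history"]
            "destination": destination,          # including third-party services
            "ts": time.time(),
        })

    def export(self) -> str:
        # JSON-lines export: one event per line, suitable for durable storage.
        return "\n".join(json.dumps(event) for event in self.events)
```

Capturing the destination of every call matters as much as the call itself: it is what makes third-party sharing, one of the EDPS's flagged risks, visible after the fact.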

Bitkom’s December 2025 whitepaper on AI agent security found that 86% of agents carried out critical or harmful actions in at least one attack scenario. Without execution records, you cannot even diagnose what went wrong, let alone demonstrate compliance to a regulator.

Tiered Memory Governance

Agent memory must be governed in tiers: ephemeral working context with strict time-to-live limits, and long-term knowledge (profiles, embeddings, learned preferences) that is purpose-scoped and supports deletion as a callable operation. The EDPS warning about “continuous learning from user behaviour and sharing information across multiple AI agents” amplifying data retention risks maps directly to this control.

The Spanish AEPD guidance goes further, recommending memory compartmentalization per processing activity and “golden testing” procedures to catch behavioral drift in agents that learn over time.
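The two tiers can be sketched as separate stores: an ephemeral context that silently expires entries past their TTL, and a long-term store that is bound to a single declared purpose and exposes deletion as a callable operation. Class and parameter names here are illustrative, not from the EDPS or AEPD guidance.

```python
import time

class EphemeralContext:
    """Working context with a strict time-to-live per entry."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._items = {}  # key -> (value, stored_at)

    def put(self, key, value):
        self._items[key] = (value, time.monotonic())

    def get(self, key):
        value, stored_at = self._items.get(key, (None, None))
        if stored_at is None or time.monotonic() - stored_at > self.ttl:
            self._items.pop(key, None)  # expired entries are purged on access
            return None
        return value

class LongTermMemory:
    """Purpose-scoped store where deletion is a first-class operation."""

    def __init__(self, purpose: str):
        self.purpose = purpose
        self._profiles = {}

    def remember(self, subject_id, data, purpose: str):
        # Compartmentalization: refuse data declared for a different purpose.
        if purpose != self.purpose:
            raise ValueError(f"store is scoped to {self.purpose!r}, got {purpose!r}")
        self._profiles[subject_id] = data

    def delete(self, subject_id) -> bool:
        return self._profiles.pop(subject_id, None) is not None
```

The purpose check on `remember` is the compartmentalization the AEPD recommends: data collected for recruiting physically cannot land in a marketing-scoped store.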

Live Controller/Processor Mapping

Build a runtime registry that resolves controller and processor roles per use case, maintains contractual hooks, and tracks cross-border data pathways. In a system where Agent A calls Agent B, which queries an external API hosted in the US, which returns results that Agent C uses to update an EU database, the controller/processor chain needs to be documented and auditable in real time.
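A runtime role registry can record each hop in the chain with its role, region, and contract status, then answer the two questions auditors ask first: which hops leave the EU, and which processors lack an Article 28 agreement. The sketch below uses invented party names; a production registry would also track the specific transfer mechanism per cross-border hop.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessingParty:
    name: str
    role: str           # "controller" | "joint_controller" | "processor"
    region: str         # e.g. "EU", "US"
    dpa_in_place: bool  # Article 28 processor agreement signed?

class RoleRegistry:
    """Runtime map of controller/processor roles per hop in an agent chain."""

    def __init__(self):
        self._chains: dict[str, list[ProcessingParty]] = {}

    def record_hop(self, use_case: str, party: ProcessingParty) -> None:
        self._chains.setdefault(use_case, []).append(party)

    def cross_border_hops(self, use_case: str) -> list[ProcessingParty]:
        # Hops leaving the EU need a transfer mechanism (e.g. SCCs).
        return [p for p in self._chains.get(use_case, []) if p.region != "EU"]

    def missing_agreements(self, use_case: str) -> list[str]:
        return [p.name for p in self._chains.get(use_case, [])
                if p.role == "processor" and not p.dpa_in_place]
```

Because hops are recorded as the chain executes rather than documented after the fact, the registry stays accurate even when Agent A's plan routes through services nobody listed at design time.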

How EDPS Guidance Connects to the EU AI Act Deadline

The EDPS explicitly published its guidance “without prejudice to the EU AI Act.” But the timing is not coincidental. The EU AI Act’s high-risk system obligations, including risk management, human oversight, technical documentation, automatic logging, and cybersecurity requirements, become fully enforceable on August 2, 2026.

EDPS Wojciech Wiewiórowski announced at the IAPP Europe Data Protection Congress 2025 that joint guidelines on the GDPR/AI Act interplay are being developed with the EDPB and European Commission. He stated: “Guidelines on the interactions between the EU Data Protection Regulation and the Artificial Intelligence Act should be ready by early next year.”

For agent developers, this means dual compliance is now the default. The GDPR cap of 20 million euros or 4% of global turnover stacks with AI Act fines of up to 35 million euros or 7% of global turnover. A single AI agent processing personal data without proper documentation, impact assessments, and logging faces a combined penalty ceiling of 55 million euros.

The Dutch DPA’s February 2026 warning about open-source AI agents like OpenClaw, where approximately 20% of available plug-ins contained malware, shows that enforcement agencies are already monitoring the agent ecosystem specifically.

Related: EU AI Act 2026: What Companies Need to Do Before August

What Developers Should Do Right Now

The EDPS guidance is published and the EU AI Act deadline of August 2, 2026 is only months away. Here is a concrete priority list:

Audit your agent’s data access scope. Map every data source, API, and system your agent can reach. If the access scope is broader than what is documented in your privacy notice and DPIA, either restrict the agent’s permissions or update your documentation.

Implement purpose locks. Add runtime checks that flag when an agent’s processing deviates from its stated purpose. This can be as simple as a whitelist of approved tool calls per task type, or as sophisticated as a classifier that evaluates each step against the original user intent.

Build execution logging from day one. Every tool call, every data access, every inference, logged with timestamps, data categories, and processing purposes. This is the minimum viable compliance artifact for both GDPR Article 30 and AI Act Article 12.

Run a controller/processor mapping exercise. For every external service your agent calls, determine whether you are a controller, joint controller, or processor. Document it. Get processor agreements in place under GDPR Article 28.

Budget for compliance costs. Industry estimates put the compliance overhead for production AI agents handling sensitive data at $8,000-$25,000 per agent, covering encryption, audit logging, PII protection, and data retention policies. This is not an optional line item.

The global enterprise agentic AI market is projected to grow from $3.6 billion in 2024 to nearly $171 billion by 2034, a 47.2% CAGR. The EDPS is betting that regulation now prevents a data protection crisis later. Whether you agree with the approach or not, the enforcement apparatus is real and the deadlines are fixed.

Frequently Asked Questions

What is the EDPS TechSonar report on agentic AI?

The EDPS TechSonar 2025-2026 report, published November 24, 2025, identifies agentic AI as one of six emerging technologies requiring data protection scrutiny. It lists 12 specific risks including autonomous purpose expansion, unpredictable data gathering, bias cascading, and accountability fragmentation across multi-agent systems.

How does the EDPS guidance differ from general GDPR requirements for AI?

The EDPS guidance specifically addresses challenges unique to autonomous AI agents: runtime decision-making about data processing, multi-agent accountability gaps, continuous learning from user behavior, and the inability of static compliance documents to cover emergent processing activities. General GDPR requirements assume predetermined processing chains, which agents break by design.

What fines can AI agent developers face under GDPR and the EU AI Act combined?

GDPR fines cap at 20 million euros or 4% of global annual turnover. The EU AI Act adds fines of up to 35 million euros or 7% of global annual turnover. These stack, creating a combined penalty ceiling of 55 million euros for a single AI agent that violates both frameworks.

What runtime controls does the EDPS guidance imply for AI agent developers?

Based on the EDPS principles, the four key controls are: (1) purpose locks and goal-change gates that flag scope expansion, (2) end-to-end execution records capturing every tool call and data access, (3) tiered memory governance separating ephemeral context from long-term data, and (4) live controller/processor mapping that tracks data responsibility in real time.

When do the EU AI Act high-risk provisions for AI agents take effect?

The EU AI Act’s high-risk system obligations become fully enforceable on August 2, 2026. This includes requirements for risk management systems, human oversight, technical documentation, automatic logging, and cybersecurity measures. AI agents processing personal data will need to comply with both GDPR and AI Act requirements simultaneously.