
The UK’s Information Commissioner’s Office became the first data protection authority in the world to publish a formal risk assessment specifically targeting agentic AI. Its Tech Futures: Agentic AI report, released in March 2025, identifies eight concrete risk areas where autonomous AI agents collide with data protection law. The most significant finding: when Agent A hallucinates an output and passes it to Agent B, which passes it to Agent C, the resulting cascade of errors is nearly impossible to trace, attribute, or correct under existing accountability frameworks.

This matters beyond the UK. The UK GDPR mirrors the EU GDPR almost word for word. The controller/processor framework is identical. If the ICO says agentic AI creates fundamental tensions with data protection principles, those same tensions exist under the EU’s framework, under Germany’s BDSG, and under every national implementation of the GDPR.

Related: GDPR and AI Agents: Data Protection When Machines Make Decisions

What the ICO Actually Found: Eight Risk Areas

The ICO does not treat agentic AI as a single risk category. The report breaks it into eight specific areas where autonomous agents create data protection problems that conventional AI systems do not.

Cascading Errors and Unpredictable Outcomes

This is the risk area the ICO spends the most time on, and for good reason. In a multi-agent system, one agent’s output becomes another agent’s input. If the first agent hallucinates a fact, fabricates a data point, or misinterprets a query, that error does not stay contained. It propagates. The second agent treats the hallucinated output as ground truth and builds on it. The third agent compounds the error further.

The ICO’s specific concern is traceability. Under the UK GDPR (and the EU GDPR), data controllers must be able to explain how personal data was processed and why a particular decision was reached. When errors cascade through three or four agents, each adding its own reasoning layer, reconstructing the chain of causation becomes a forensic exercise. Most organizations lack the logging infrastructure to even attempt it.

This is not a theoretical problem. Microsoft’s Copilot for Security chains multiple specialized agents: one gathers threat intelligence, another correlates logs, a third recommends actions. Suppose the intelligence agent misidentifies a benign IP address as malicious: the correlation agent then finds spurious patterns that confirm the false positive, and the recommendation agent suggests blocking legitimate traffic. Each agent operates correctly given its inputs. The system-level outcome is still wrong.
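The traceability requirement points toward one mitigation: provenance tagging, where every agent output carries references to the outputs it was derived from. Below is a minimal Python sketch of the idea; the agent names, lambdas, and record format are illustrative, not taken from the ICO report or Microsoft’s implementation.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentOutput:
    """An agent's output, tagged with the upstream outputs it was derived from."""
    agent: str
    content: str
    output_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    derived_from: list = field(default_factory=list)  # upstream output_ids

def run_agent(name, produce, upstream):
    """Run one agent; it treats upstream content as ground truth and records its sources."""
    content = produce([u.content for u in upstream])
    return AgentOutput(name, content, derived_from=[u.output_id for u in upstream])

# A false positive in the first agent's output is faithfully built upon downstream.
intel = run_agent("intel", lambda _: "203.0.113.7 is malicious", [])  # hallucinated
corr = run_agent("correlate", lambda xs: f"patterns confirm: {xs[0]}", [intel])
rec = run_agent("recommend", lambda xs: f"block traffic: {xs[0]}", [corr])

# Without derived_from, nothing connects the final recommendation to the
# original error; with it, reconstruction is a graph walk over output_ids.
assert rec.derived_from == [corr.output_id] and corr.derived_from == [intel.output_id]
```

The design choice matters more than the code: if provenance is attached at generation time, reconstructing the chain of causation is a graph walk; if it is not, it is a forensic exercise.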

Purpose Limitation and Purpose Creep

GDPR Article 5(1)(b) requires that personal data be collected for “specified, explicit and legitimate purposes” and not processed in ways incompatible with those purposes. The ICO flags agentic AI as a direct challenge to this principle.

Why? Because agents adapt. An AI agent deployed for customer support might autonomously decide to access purchase history, credit records, or social media activity to better resolve a complaint. Each access might seem reasonable in isolation. But the original purpose, handling a support ticket, did not include credit scoring or social media profiling. The agent drifted beyond its mandate without anyone noticing.

The ICO’s language is specific: agents with broad goals may “interpret instructions in ways that lead to data processing outside the original scope.” This is not a bug in the agent. It is the agent doing exactly what it was designed to do: pursue a goal adaptively. The conflict is structural.
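One structural countermeasure is to make the declared purpose machine-checkable, so an agent’s adaptive detours fail loudly instead of silently. A minimal sketch, with hypothetical purpose names and data categories:

```python
# Allowed data categories per declared processing purpose (hypothetical values).
PURPOSE_SCOPES = {
    "support_ticket": {"ticket_history", "product_registration", "contact_details"},
    "billing_inquiry": {"invoices", "payment_status", "contact_details"},
}

class PurposeCreepError(Exception):
    """Raised when an agent requests data outside its declared purpose."""

def authorize_access(purpose: str, requested_category: str) -> None:
    allowed = PURPOSE_SCOPES.get(purpose, set())
    if requested_category not in allowed:
        raise PurposeCreepError(
            f"'{requested_category}' is outside the declared purpose '{purpose}'"
        )

authorize_access("support_ticket", "ticket_history")   # fine: within scope
authorize_access("support_ticket", "credit_records")   # raises PurposeCreepError
```

The point is that the allowlist derives from the purpose communicated to the data subject, not from whatever the agent decides it needs; widening the scope then becomes a deliberate, reviewable change rather than silent drift.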

Related: AI Agent Security: The Governance Gap That 88% of Organizations Already Feel

Controller/Processor Confusion in Multi-Agent Systems

This section contains the ICO’s most novel analysis, and the one that will cause the most headaches for legal teams.

In a traditional software system, identifying the data controller (the entity that determines the purposes and means of processing) and the data processor (the entity that processes data on the controller’s behalf) is relatively straightforward. The company that deploys the system is the controller. The cloud provider hosting it is the processor.

Agentic AI systems break this model. Consider a multi-agent travel booking system: one agent handles flights (provided by Airline API Co.), another handles hotels (Hotel Platform Inc.), a third orchestrates the workflow (Enterprise Deployer Corp.). Each agent processes the traveler’s personal data. But who is the controller?

The ICO’s guidance offers five key positions:

1. The deploying organization is likely the controller, even if it does not understand the agent’s specific reasoning. Delegating decisions to an AI agent does not delegate controllership.

2. AI providers may become joint controllers when they determine the means of processing, such as what data the model was trained on or how it reasons. This is context-dependent, but the implication is clear: OpenAI, Anthropic, and Google could face joint controller obligations for enterprise agent deployments.

3. An agent acting as a processor could “go rogue.” Through autonomous behavior, it might start determining its own processing purposes, effectively becoming a controller without anyone intending it. This creates legal uncertainty that no existing data processing agreement anticipates.

4. Contractual arrangements must account for autonomy. Standard data processing agreements assume the processor follows the controller’s instructions. When the processor is an autonomous agent that can deviate from instructions, the contract needs provisions for what happens when it does.

5. Data flow mapping is essential before deployment. Organizations must map every data flow across every agent and tool in the system before going live, not after a breach.

How This Compares to the EU AI Act and GDPR

The ICO’s report is guidance, not legislation. It interprets existing UK GDPR principles for a new technology. The EU AI Act takes a fundamentally different approach: it classifies AI systems by risk tier and imposes specific obligations on providers and deployers.

Where the EU AI Act classifies entire domains as high-risk (employment, credit scoring, law enforcement, migration), the ICO asks organizations to assess each agent’s specific capabilities. An HR agent that drafts job descriptions has a different risk profile than one that screens and rejects candidates. The ICO’s approach is more granular but less prescriptive.

The EU AI Act also introduces specific provider/deployer obligations that go beyond data protection. Article 6 sets the classification rules for high-risk systems, Article 9 mandates risk management systems, Article 13 requires transparency documentation, and Article 43 requires conformity assessments. None of these has a direct equivalent in the ICO’s guidance, because the ICO only has jurisdiction over data protection, not AI safety broadly.

For DACH companies, this creates a layered compliance picture:

| Framework | Scope | Approach | Status |
| --- | --- | --- | --- |
| UK GDPR + ICO Guidance | Data protection only | Principles-based, voluntary guidance | Published March 2025 |
| EU GDPR | Data protection only | Legally binding, enforcement by DPAs | In force since 2018 |
| EU AI Act | AI safety + fundamental rights | Risk-tiered, legally binding | High-risk provisions enforceable August 2026 |
| BDSG (Germany) | National data protection supplement | Adds specific rules on automated decisions | In force |

The practical takeaway: if you deploy AI agents that process personal data in the EU, you need to comply with all applicable layers. The ICO’s analysis of how agentic AI challenges data protection principles applies equally under the EU GDPR. The EU AI Act adds obligations on top.

Related: Singapore's Agentic AI Governance Framework: What the First Global Playbook Gets Right

The ICO’s Seven Recommendations (And What They Actually Require)

The report closes with seven recommendations. Some are genuinely useful. Others are more aspirational than actionable.

1. Conduct DPIAs Before Deploying Agentic AI

The ICO considers agentic AI deployments likely to meet the threshold for mandatory Data Protection Impact Assessments under Article 35 of the UK GDPR. This is not optional guidance. If your agent processes personal data autonomously, you almost certainly need a DPIA. The equivalent obligation exists under EU GDPR Article 35, and Germany’s BfDI has published its own list of processing activities requiring DPIAs.

2. Map Data Flows Across All Agents and Tools

Before deployment, document every data flow: what personal data each agent accesses, where it sends data, what tools it calls, and what third-party APIs it uses. In a multi-agent system with ten agents and thirty tool integrations, this is a substantial mapping exercise. But without it, you cannot answer the basic compliance question: “Where does the personal data go?”
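One way to make this exercise tractable is to keep the map as code or configuration, so it can be versioned, diffed, and queried. A minimal sketch of a declarative flow registry, reusing the travel-booking example from earlier (agent, field, and tool names are invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    agent: str
    data_categories: tuple  # personal data the agent touches
    tools: tuple            # tools/APIs it may call
    third_parties: tuple    # external recipients of the data

FLOWS = (
    DataFlow("flight_agent", ("name", "passport_no"), ("search_flights",), ("Airline API Co.",)),
    DataFlow("hotel_agent", ("name", "payment_token"), ("book_room",), ("Hotel Platform Inc.",)),
    DataFlow("orchestrator", ("name",), ("route_request",), ()),
)

def recipients_of(category: str) -> set:
    """Answer the basic compliance question: where does this personal data go?"""
    return {
        (flow.agent, recipient)
        for flow in FLOWS if category in flow.data_categories
        for recipient in (flow.third_parties or ("internal only",))
    }

print(recipients_of("name"))
# e.g. {('flight_agent', 'Airline API Co.'), ('hotel_agent', 'Hotel Platform Inc.'),
#       ('orchestrator', 'internal only')}
```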

3. Define Controller/Processor Relationships Contractually

Standard data processing agreements were not written for autonomous systems. The ICO recommends updating them to account for agent autonomy, including provisions for what happens when an agent acts outside its expected parameters. Given that the ICO also acknowledges agents can inadvertently become controllers, these agreements need to be more than boilerplate.

4. Implement Human Oversight at Meaningful Decision Points

“Meaningful” is the key word. The ICO explicitly warns against rubber-stamp oversight, where a human technically reviews agent decisions but lacks the time, context, and authority to actually override them. If your “human in the loop” approves 500 agent decisions per hour, that is not oversight.
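In code, meaningful oversight tends to come down to two controls: a risk gate that routes only consequential decisions to a human, and a throughput check that treats implausible approval rates as a failure. A rough sketch; both thresholds are placeholders, not ICO figures:

```python
import time
from collections import deque

RISK_THRESHOLD = 0.7          # placeholder: decisions above this need human sign-off
MAX_APPROVALS_PER_HOUR = 30   # placeholder: beyond this, review cannot be meaningful

_approvals = deque()          # timestamps of recent human approvals

def needs_human_review(risk_score: float) -> bool:
    """Gate only consequential decisions, so reviewers keep time and context."""
    return risk_score >= RISK_THRESHOLD

def record_approval() -> None:
    """Track reviewer throughput; 500 approvals an hour is rubber-stamping, not oversight."""
    now = time.time()
    _approvals.append(now)
    while _approvals and _approvals[0] < now - 3600:
        _approvals.popleft()
    if len(_approvals) > MAX_APPROVALS_PER_HOUR:
        raise RuntimeError("Approval rate exceeds plausible human review capacity")

if needs_human_review(0.82):
    # ...surface the decision to a reviewer with full context, then:
    record_approval()
```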

5. Design for Transparency from the Start

Privacy notices must tell data subjects how their data is being processed. When the system itself cannot predict exactly how it will process data (because agents adapt), the privacy notice becomes a best-guess document. The ICO recommends building explainability into the system architecture, not bolting it on afterward.

6. Apply Data Minimization

Restrict each agent’s access to only the data it needs for its specific task. An agent handling billing inquiries does not need access to the customer’s support history, social media activity, or demographic profile. This is technically straightforward (API permissions, scoped tokens) but organizationally hard when agents share a common data layer.
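The scoped-token approach can be as simple as issuing each agent a short-lived credential that names exactly the fields it may read. A minimal sketch, not tied to any real identity provider; the field names are invented:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedToken:
    agent: str
    allowed_fields: frozenset  # the only fields this agent may read
    expires_at: float

def issue_token(agent: str, fields: set, ttl_seconds: int = 900) -> ScopedToken:
    return ScopedToken(agent, frozenset(fields), time.time() + ttl_seconds)

def read_customer_record(token: ScopedToken, record: dict) -> dict:
    """Return only the fields the token permits; everything else stays invisible."""
    if time.time() > token.expires_at:
        raise PermissionError("scoped token expired")
    return {k: v for k, v in record.items() if k in token.allowed_fields}

billing_token = issue_token("billing_agent", {"invoice_id", "amount_due"})
record = {"invoice_id": "INV-42", "amount_due": 99.0,
          "support_history": ["..."], "demographic_profile": {"age": 34}}
print(read_customer_record(billing_token, record))
# {'invoice_id': 'INV-42', 'amount_due': 99.0}
```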

7. Monitor and Audit Agent Behavior Continuously

Not just at deployment. Continuously. The ICO flags drift (agent behavior changing over time as the data it operates on shifts) as a specific risk that one-time assessments will miss.
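Drift detection does not require anything exotic: capture a baseline of the agent’s behavior at deployment and periodically compare the recent distribution against it. A rough sketch using tool-call frequencies; the distance threshold is an assumption to be tuned per deployment:

```python
from collections import Counter

def distribution_distance(baseline: Counter, recent: Counter) -> float:
    """L1 distance between two tool-call frequency distributions (0 = identical, 2 = disjoint)."""
    keys = set(baseline) | set(recent)
    b_total = sum(baseline.values()) or 1
    r_total = sum(recent.values()) or 1
    return sum(abs(baseline[k] / b_total - recent[k] / r_total) for k in keys)

DRIFT_THRESHOLD = 0.3  # placeholder: tune against your own deployment baseline

baseline = Counter({"lookup_invoice": 800, "send_email": 150, "query_crm": 50})
recent = Counter({"lookup_invoice": 400, "send_email": 100, "query_crm": 450})

if distribution_distance(baseline, recent) > DRIFT_THRESHOLD:
    print("Agent behavior has drifted from its baseline: trigger a reassessment")
```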

What This Means for Companies Deploying AI Agents Now

The ICO report is voluntary guidance. Nobody gets fined for ignoring it. But it signals where enforcement is heading. When the UK’s data protection regulator says multi-agent systems create “fundamental challenges” for controller/processor identification, expect supervisory authorities across Europe to adopt similar positions.

Three practical steps for any company running AI agents in the EU or UK:

Map your agents today. You cannot comply with data protection requirements you cannot see. Document every agent, every data flow, every tool integration. This is the foundation for everything else.

Update your DPAs. If your data processing agreements with AI providers do not address autonomous agent behavior, they have a gap. Work with legal to add provisions covering agent deviation from instructions, unintended data access, and controller role allocation.

Build audit trails. The cascading error problem is only solvable if you can trace the chain. Log every agent input, every agent output, every tool call, every data access. Store these logs in a way that allows reconstruction of decision chains across multiple agents.
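Concretely, such a trail can be an append-only table in which every event records the prior event it was derived from (the same parent-link idea as the provenance sketch earlier, persisted this time), so a decision chain can be walked backwards across agents. A minimal sketch using sqlite3 from Python’s standard library; the schema is illustrative:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE audit_log (
        event_id   INTEGER PRIMARY KEY,
        parent_id  INTEGER REFERENCES audit_log(event_id),
        agent      TEXT NOT NULL,
        event_type TEXT NOT NULL,  -- 'input', 'output', 'tool_call', 'data_access'
        payload    TEXT NOT NULL,
        ts         TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def log_event(agent, event_type, payload, parent_id=None):
    cur = db.execute(
        "INSERT INTO audit_log (parent_id, agent, event_type, payload) VALUES (?, ?, ?, ?)",
        (parent_id, agent, event_type, payload),
    )
    return cur.lastrowid

# Three agents, each building on the previous one's output.
e1 = log_event("intel", "output", "flagged 203.0.113.7 as malicious")
e2 = log_event("correlate", "output", "pattern match on flagged IP", parent_id=e1)
e3 = log_event("recommend", "output", "block 203.0.113.7", parent_id=e2)

def reconstruct_chain(event_id):
    """Follow parent_id pointers from a final decision back to its first cause."""
    return db.execute("""
        WITH RECURSIVE chain(event_id, parent_id, agent, payload) AS (
            SELECT event_id, parent_id, agent, payload FROM audit_log WHERE event_id = ?
            UNION ALL
            SELECT a.event_id, a.parent_id, a.agent, a.payload
            FROM audit_log a JOIN chain c ON a.event_id = c.parent_id
        )
        SELECT agent, payload FROM chain
    """, (event_id,)).fetchall()

for agent, payload in reconstruct_chain(e3):
    print(agent, "->", payload)  # recommend -> correlate -> intel
```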

The German BfDI, the French CNIL, and the European Data Protection Board have not yet published equivalent guidance on agentic AI. But the underlying law is the same. The ICO simply said out loud what every data protection officer running AI agents already suspected: existing frameworks were not designed for systems that make autonomous decisions, chain together multiple tools, and process data in ways their deployers cannot fully predict.

Frequently Asked Questions

What is the ICO Tech Futures report on agentic AI?

The ICO Tech Futures: Agentic AI report, published in March 2025, is the first formal assessment by a data protection regulator of how autonomous AI agents challenge existing data protection frameworks. It identifies eight specific risk areas including cascading errors, purpose creep, and controller/processor confusion in multi-agent systems.

Who is the data controller when multiple AI agents process personal data?

According to the ICO, the deploying organization is likely the data controller even if it does not understand the agent’s reasoning. AI providers may be joint controllers if they determine the means of processing. In some cases, an autonomous agent acting as a processor could inadvertently become a controller by determining its own processing purposes.

Does the ICO agentic AI guidance apply to companies in the EU?

The ICO’s guidance directly applies only in the UK. However, the UK GDPR mirrors the EU GDPR almost identically, so the ICO’s analysis of how agentic AI challenges data protection principles applies equally under EU law. The same controller/processor confusion and purpose limitation issues exist under both frameworks.

Do AI agent deployments require a Data Protection Impact Assessment?

The ICO considers agentic AI deployments likely to meet the threshold for mandatory DPIAs under Article 35. This applies under both UK GDPR and EU GDPR. Germany’s BfDI maintains its own list of processing activities requiring DPIAs, and autonomous AI agent processing would likely qualify under most national DPA guidance.

What are cascading hallucinations in multi-agent AI systems?

Cascading hallucinations occur when one AI agent produces a hallucinated or incorrect output that becomes the input for another agent. The second agent treats the error as ground truth and builds on it, amplifying the mistake. In multi-agent chains, these errors propagate through three or more agents, making them extremely difficult to trace or correct.