Your AI agent just rejected a loan application, flagged an employee for underperformance, and deprioritized a customer support ticket based on spending history. Three decisions, three people affected, zero human involvement. Under GDPR Article 22, every single one of those decisions is potentially illegal.

This is not a theoretical problem for 2028. It is happening right now, in production, across thousands of enterprise AI agent deployments. The EDPB’s 2026 coordinated enforcement action targets transparency obligations. Regulators are building the muscle to audit AI systems. And between the GDPR (in force since 2018) and the EU AI Act (high-risk provisions enforceable August 2, 2026), companies deploying AI agents face a dual compliance burden that most are not prepared for.

Related: EU AI Act 2026: What Companies Need to Do Before August

Article 22: The GDPR Rule That Most Agent Teams Ignore

Article 22(1) GDPR gives every person in the EU the right not to be subject to a decision “based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

That sentence was written in 2016, before AI agents existed as a product category. But it applies to them with surgical precision.

What Counts as a “Decision” Under Article 22

The threshold is lower than most engineering teams assume. A “decision with legal effects” includes obvious things like denying credit, rejecting a job application, or terminating an insurance policy. But “similarly significantly affects” is broader. The Article 29 Working Party guidelines (adopted by the EDPB) list examples:

  • Automatic refusal of an online credit application
  • Differential pricing or service availability based on profiling
  • E-recruiting practices without human intervention
  • Automated decisions about social benefits, visa applications, or tax assessments

If your AI agent triages support tickets and consistently deprioritizes certain customer segments, that may qualify. If it scores employee performance and those scores feed into promotion or termination decisions, that almost certainly qualifies.

The Three Exceptions (And Why They Are Narrow)

Article 22 is not an outright ban. It allows purely automated decisions in three cases:

  1. Contractual necessity: The decision is necessary for entering into or performing a contract. Example: an insurance agent that auto-approves low-risk policies based on declared data. But “necessary” is interpreted strictly. If a human could make the decision without significant delay, the exception may not apply.

  2. Legal authorization: EU or member state law explicitly permits it. Germany’s Section 37 BDSG allows automated decisions for certain insurance and credit scoring scenarios, but only with safeguards.

  3. Explicit consent: The data subject has given explicit consent to the automated decision. This means informed, specific, freely given consent, not a buried checkbox in terms of service.

Even when an exception applies, Article 22(3) requires you to implement “suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention, to express his or her point of view and to contest the decision.”

In practice: every AI agent that makes decisions about people needs a human escalation path. No exceptions.
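What that escalation path looks like in code is up to you; as a minimal sketch (all names here are illustrative, not from any framework), the agent's output can be modeled as a draft that only takes effect automatically when it has no legal or similarly significant impact:

```python
from dataclasses import dataclass


@dataclass
class AgentDecision:
    subject_id: str
    recommendation: str       # the agent's proposed outcome
    rationale: str
    significant_effect: bool  # legal or similarly significant (Art. 22)
    final: bool = False       # only True once the decision may take effect


def route(decision: AgentDecision, review_queue: list) -> AgentDecision:
    """Never let a significant decision take effect automatically:
    queue it for a human reviewer (Article 22(3) human intervention)."""
    if decision.significant_effect:
        review_queue.append(decision)  # a human must confirm or override
    else:
        decision.final = True          # low-impact: may auto-finalize
    return decision
```

The point of the pattern is that "human in the loop" is enforced structurally, not by convention: nothing downstream should act on a decision whose `final` flag a human never set.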

Related: What Are AI Agents? A Practical Guide for Business Leaders

The GDPR and AI Act Overlap: Dual Compliance

The EU AI Act does not replace the GDPR for AI systems. It adds to it. If your AI agent processes personal data and falls into a high-risk category under Annex III of the AI Act, you must comply with both frameworks simultaneously.

The EDPB-EDPS Joint Opinion 1/2026 on the Digital Omnibus AI proposal makes this explicit. The opinion warns against “administrative simplification” that would “lower the protection of fundamental rights.” Specifically, the EDPB flagged concerns about:

  • The proposed expansion of special category data processing (health data, ethnicity) for bias detection without sufficient safeguards
  • The deletion of mandatory registration for AI systems that providers self-assess as “non-high-risk”
  • Postponement of enforcement timelines for high-risk systems

Where the Two Frameworks Collide

  • Legal basis for processing. GDPR: Article 6 (consent, contract, legitimate interest, etc.). AI Act: not required separately, but the Act assumes lawful data processing.
  • Automated decisions about people. GDPR: Article 22 restricts them and requires human intervention rights. AI Act: Article 14 requires human oversight for high-risk systems.
  • Transparency. GDPR: Articles 13-14 (inform data subjects about processing). AI Act: Article 13 (technical documentation and user information).
  • Impact assessment. GDPR: Article 35 DPIA for high-risk processing. AI Act: Article 9 risk management system for high-risk AI.
  • Data quality. GDPR: Article 5(1)(d) (accuracy principle). AI Act: Article 10 (data governance for training/testing data).
  • Cross-border transfers. GDPR: Chapter V (adequacy decisions, SCCs, BCRs). AI Act: no separate transfer rules, defers to GDPR.

The practical problem: your DPIA under GDPR Article 35 and your risk management system under AI Act Article 9 cover overlapping but not identical ground. Most companies will need to run both processes, though the EDPB recommends integrating them where possible.

DPIAs for AI Agents: What Regulators Actually Expect

Article 35 GDPR mandates a Data Protection Impact Assessment when processing is “likely to result in a high risk to the rights and freedoms of natural persons.” AI agents that make decisions about people trigger this requirement almost by default.

The CNIL’s DPIA guidelines specify that you must conduct a DPIA when your processing involves at least two of the following criteria:

  • Evaluation or scoring (including profiling and predicting)
  • Automated decision-making with legal or similar significant effects
  • Systematic monitoring
  • Processing of sensitive data or data of a highly personal nature
  • Data processed on a large scale
  • Innovative use of new technological or organizational solutions

An AI agent doing customer segmentation hits at least three of those. One handling HR decisions hits four or five.

What a DPIA for an AI Agent Must Cover

A DPIA is not a checkbox exercise. Under Article 35(7), it must contain:

  1. A systematic description of the processing: What data does the agent access? What decisions does it make? What model powers it? Where is the data sent? If you use an external LLM provider (OpenAI, Anthropic, Google), the data flow to their servers is part of the processing description.

  2. Assessment of necessity and proportionality: Why does the agent need this data? Could the same outcome be achieved with less data or less automated processing? This is where “we want to automate everything” meets “you have to justify why.”

  3. Assessment of risks to data subjects: What happens if the agent makes a wrong decision? What if it leaks personal data? What if it develops bias against a protected group? Quantify the likelihood and severity.

  4. Measures to address those risks: Human-in-the-loop checkpoints, data minimization, access controls, bias testing, model monitoring, incident response procedures.

The Baden-Wuerttemberg DPA guidance specifically notes that DPIAs for AI systems must address the opacity of model decisions and the potential for discriminatory outcomes.

Related: AI in Recruiting: What Is Actually Legal Under the EU AI Act?

Cross-Border Data Transfers: The LLM Provider Problem

Most AI agents run on LLMs hosted by US companies. When your agent sends a customer complaint, an employee record, or a loan application to OpenAI’s API, that personal data crosses the Atlantic. Chapter V of the GDPR governs these transfers, and the rules are strict.

The Current Transfer Framework

The EU-US Data Privacy Framework (DPF), adopted in July 2023, provides adequacy for transfers to US companies that self-certify under the framework. OpenAI, Google, Microsoft, and Anthropic are all certified.

But the DPF is not a blank check. The EDPB’s adequacy opinion flagged remaining concerns about US surveillance practices, and legal challenges are expected. If the DPF is invalidated (as happened with Privacy Shield in the Schrems II ruling), every AI agent deployment relying on US-based LLMs would need alternative transfer mechanisms overnight.

Practical Steps for Agent Deployments

Inventory your data flows. For each AI agent, document: what personal data it sends to external services, which provider processes it, where the processing occurs, and under what transfer mechanism.
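That inventory can be as simple as one structured record per agent. A sketch, with hypothetical field names and a made-up provider label (this is not a regulatory schema):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DataFlow:
    agent_name: str
    personal_data_categories: tuple  # e.g. ("name", "email", "ticket text")
    external_provider: str           # which LLM/API vendor processes it
    processing_location: str         # e.g. "US", "EU (Frankfurt)"
    transfer_mechanism: str          # e.g. "DPF", "SCCs", "none (EU-only)"


flows = [
    DataFlow(
        agent_name="support-triage",
        personal_data_categories=("name", "email", "ticket text"),
        external_provider="us_llm_api",  # hypothetical provider label
        processing_location="US",
        transfer_mechanism="DPF + SCCs as fallback",
    ),
]

# Flag any non-EU flow that relies on the DPF alone, so it can be
# reviewed quickly if the framework is invalidated.
at_risk = [
    f for f in flows
    if f.processing_location != "EU" and "SCC" not in f.transfer_mechanism
]
```

Keeping this as data rather than a wiki page means the "what happens if the DPF falls" question becomes a one-line query instead of an audit project.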

Do not rely solely on the DPF. Use Standard Contractual Clauses (SCCs) as a backup. Most major LLM providers already offer them in their data processing agreements.

Evaluate European alternatives. Aleph Alpha (German), Mistral (French), and several European cloud providers offer LLM hosting within EU borders. For agents that process sensitive personal data (health, financial, HR), keeping the data in the EU removes the transfer risk entirely.

Anonymize before sending. Where possible, strip personal data before it reaches the LLM. If an agent needs to summarize a customer complaint, does the LLM need the customer’s name, address, and account number? In most cases, the answer is no.
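A minimal redaction pass might look like the following sketch. The regexes catch only obvious patterns (emails, IBAN-style account numbers); a production deployment should use a dedicated PII-detection tool and keep any reversible mapping inside your own infrastructure:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IBAN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b")


def redact(text: str, known_names=()) -> str:
    """Strip obvious identifiers before the text leaves your systems.
    known_names: identifiers you already hold (e.g. from the CRM record),
    which are far more reliable to remove than pattern matching."""
    text = EMAIL.sub("[EMAIL]", text)
    text = IBAN.sub("[ACCOUNT]", text)
    for name in known_names:
        text = text.replace(name, "[CUSTOMER]")
    return text
```

For a complaint summary, the LLM receives `"[CUSTOMER] ([EMAIL]) reports a double charge"` and produces an equally useful summary, while the identifying data never crosses the transfer boundary.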

Related: AI Agent Identity: Why Every Agent Needs IAM Before Touching Production

The Betriebsrat Factor: German Works Councils and AI Agents

For companies operating in Germany, there is an additional compliance layer that no other EU country requires: Betriebsrat (works council) co-determination.

Under Section 87(1)(6) of the Betriebsverfassungsgesetz (Works Constitution Act), the works council has a co-determination right on the “introduction and use of technical facilities designed to monitor the behavior or performance of employees.” AI agents that touch employee data, whether for performance reviews, shift scheduling, task allocation, or internal communication monitoring, fall squarely within this provision.

The Federal Labour Court has interpreted “monitoring” broadly. Even if the AI agent’s primary purpose is not surveillance, if it can technically be used to monitor employee behavior (and most agents processing employee data can), the works council must be consulted.

What This Means in Practice

  • You cannot deploy an AI agent that processes employee data without a Betriebsvereinbarung (works council agreement)
  • The works council can demand access to the agent’s logic, the data it processes, and the decisions it makes
  • Violations of co-determination rights can make the entire deployment unlawful, regardless of GDPR compliance
  • The agreement typically needs to cover: purpose limitation, data categories processed, retention periods, human oversight procedures, and employee rights to challenge decisions

German companies have learned this the hard way. Forrester predicts that specialized AI governance functions will become standard in German enterprises by late 2026, partly driven by works council demands for structured oversight.

A Practical Compliance Checklist for AI Agent Deployments

Here is what a legally defensible AI agent deployment looks like under the current GDPR framework:

Before deployment:

  • Identify the legal basis for each type of personal data processing (Article 6)
  • Conduct a DPIA (Article 35) for any agent that makes decisions about people
  • Implement human-in-the-loop for decisions with legal or significant effects (Article 22)
  • Map all cross-border data transfers and ensure adequate safeguards (Chapter V)
  • In Germany: obtain Betriebsvereinbarung for employee-facing agents
  • Document the agent’s decision-making logic for transparency obligations (Articles 13-14)

During operation:

  • Log all decisions that affect individuals, including the data inputs and the outcome
  • Provide data subjects with a clear way to request human review of automated decisions
  • Monitor for bias and discriminatory patterns in agent outputs
  • Review and update the DPIA whenever the agent’s model, data sources, or scope changes
  • Respond to data subject access requests (Article 15) that cover agent-processed data within one month (Article 12(3))

Incident response:

  • If the agent leaks personal data, you have 72 hours to notify your supervisory authority (Article 33)
  • If the agent makes a systematically biased decision affecting a group, that may constitute a data breach requiring notification
  • Document everything. Regulators will ask for records.

GDPR fines reached EUR 2.3 billion in 2025 alone, a 38% increase over the previous year. Over EUR 6.2 billion has been issued since 2018. The trend is clear, and AI systems are increasingly in the crosshairs.

Frequently Asked Questions

Does GDPR Article 22 apply to AI agents?

Yes. GDPR Article 22 applies to any automated processing that produces legal effects or similarly significantly affects a person. AI agents that make decisions about people, such as approving loans, screening job applicants, or scoring customer risk, fall directly under this provision. The agent must either have a valid legal basis for the automated decision or include meaningful human oversight.

Do I need a DPIA for every AI agent?

Not necessarily for every agent, but for any agent that processes personal data and makes decisions about individuals. Article 35 GDPR requires a DPIA when processing is likely to result in high risk to data subjects. AI agents that do profiling, automated decision-making, or process sensitive data almost always meet this threshold.

Can I use US-based LLMs like OpenAI or Anthropic under GDPR?

Currently yes, under the EU-US Data Privacy Framework adopted in July 2023. Both OpenAI and Anthropic are DPF-certified. However, the framework may face legal challenges similar to Schrems II. Companies should use Standard Contractual Clauses as a backup and consider European LLM alternatives for sensitive data processing.

How do GDPR and the EU AI Act overlap for AI agents?

The EU AI Act does not replace the GDPR. If your AI agent processes personal data and qualifies as a high-risk system under the AI Act, you must comply with both frameworks. This means running both a GDPR DPIA and an AI Act risk management process, meeting transparency requirements from both regulations, and ensuring human oversight under both Article 22 GDPR and Article 14 of the AI Act.

Does the German Betriebsrat have a say in AI agent deployments?

Yes. Under Section 87(1)(6) of the German Works Constitution Act, the works council has co-determination rights over technical systems that can monitor employee behavior or performance. AI agents processing employee data require a Betriebsvereinbarung (works council agreement) before deployment. The works council can demand access to the agent’s logic and decision criteria.
