Twenty-five data protection authorities across Europe are right now contacting companies to ask one question: do your users actually understand what you do with their data? The EDPB launched its 2026 Coordinated Enforcement Framework (CEF) targeting transparency and information obligations under GDPR Articles 12, 13, and 14. This is not a guideline or a recommendation. It is a live, multi-country enforcement action with investigators already reaching out to controllers across sectors.
For AI agent operators, the timing is brutal. Most privacy notices were written for web forms and cookie banners, not for autonomous systems that pull data from APIs, make decisions in milliseconds, and chain together three LLM calls before a human even sees the output. The gap between what your privacy notice says and what your AI agent actually does is exactly what 25 DPAs are about to audit.
What the EDPB Is Actually Investigating
The CEF follows a pattern the EDPB established in previous years. In 2023, DPAs coordinated on the role of Data Protection Officers. In 2024, they targeted the right of access. In 2025, the right to erasure. Each year, the EDPB picks a GDPR obligation, agrees on a shared methodology, and then 20+ national authorities simultaneously investigate controllers in their jurisdiction.
For 2026, the topic is transparency: specifically Articles 12, 13, and 14. These are the provisions that require you to tell people what data you collect, why, who you share it with, how long you keep it, and what rights they have. Article 12 sets the standard for how that information must be presented: concise, transparent, intelligible, easily accessible, and in clear, plain language.
How the Investigation Works
DPAs participating in the CEF use two approaches. Some send formal questionnaires as fact-finding exercises. Others launch full enforcement actions, requesting documentation, auditing systems, and potentially issuing orders or fines. The methodology is coordinated centrally by the EDPB, but each national DPA decides independently how aggressively to pursue it.
The outcomes from national investigations get analyzed both at the national level and across the entire European Economic Area. That means a transparency gap identified by the Irish DPC, the French CNIL, or Germany’s state-level authorities could become the basis for EDPB-wide guidance, effectively setting the enforcement standard for all of Europe.
Why AI Agent Operators Are in the Crosshairs
The CEF does not specifically target AI systems, but it does not need to. Articles 12-14 apply to all controllers processing personal data, and AI agent operators have the widest transparency gaps because their systems are the hardest to describe in plain language.
Consider what Articles 13 and 14 require you to disclose:
The purposes of processing and the legal basis (Article 13(1)(c)). For an AI agent that dynamically determines its own task steps, the purpose can shift mid-execution. Your privacy notice says “customer support optimization.” Your agent just accessed the user’s purchase history, sentiment analysis from a previous call, and a third-party credit score to decide whether to offer a discount.
The recipients of personal data (Article 13(1)(e)). If your agent calls an external LLM API, a search API, and a CRM enrichment service during a single task, each of those is a recipient. Most privacy notices list categories (“service providers”) rather than specifics, which DPAs have already flagged as insufficient.
The existence of automated decision-making, including profiling (Article 13(2)(f)). This requires “meaningful information about the logic involved, as well as the significance and the envisaged consequences.” For an LLM-powered agent, explaining “the logic involved” in a way that is both accurate and “intelligible” is a genuine technical challenge.
The Five Transparency Gaps AI Agent Operators Must Close
Most companies running AI agents have privacy notices that were adequate for their pre-agent architecture. The problem is not that those notices were wrong; it is that AI agents introduced processing activities that the notices do not cover.
Gap 1: Dynamic Data Collection
Traditional applications collect a defined set of data fields. A contact form takes name, email, and message. An AI agent collects whatever it determines is relevant to completing its task. A recruiting agent might start with a resume and end up processing LinkedIn activity, GitHub contribution patterns, and Glassdoor review sentiment. None of that was in the original privacy notice because none of it was predictable.
The fix: Implement runtime logging that records every data source an agent accesses during task execution. Use those logs to maintain a living inventory of processing activities. Update your privacy notice to describe the categories of data your agents can access, not just what they access today. The EDPS TechSonar report specifically flagged this as a risk for agentic systems.
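The runtime log can be as simple as a decorator on every tool an agent can call. The sketch below assumes a Python agent codebase; the `ACCESS_LOG` registry, the tool name, and the data categories are all hypothetical, and a production version would persist to durable storage rather than an in-memory dict.

```python
from collections import defaultdict
from datetime import datetime, timezone
from functools import wraps

# Hypothetical runtime inventory: maps each data source an agent
# touches to the task runs that accessed it.
ACCESS_LOG = defaultdict(list)

def logs_data_access(source, data_categories):
    """Decorator recording which source a tool reads and which
    categories of personal data it may expose."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, task_id=None, **kwargs):
            ACCESS_LOG[source].append({
                "task_id": task_id,
                "categories": data_categories,
                "accessed_at": datetime.now(timezone.utc).isoformat(),
            })
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@logs_data_access("crm.purchase_history", ["purchase records"])
def fetch_purchase_history(customer_id):
    return {"customer_id": customer_id, "orders": []}  # stub tool

def processing_inventory():
    """Collapse the raw log into what a privacy notice needs:
    every source ever touched and the categories it exposes."""
    return {
        source: sorted({c for e in entries for c in e["categories"]})
        for source, entries in ACCESS_LOG.items()
    }

fetch_purchase_history("cust-42", task_id="task-001")
print(processing_inventory())
# {'crm.purchase_history': ['purchase records']}
```

Because the inventory is built from what tools actually touch at runtime, it catches sources that an architecture diagram or a one-time data-mapping exercise would miss.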
Gap 2: Third-Party Data Sharing via API Calls
Every API call your agent makes to an external service is a data transfer. When your agent sends a customer query to OpenAI’s API, to a search engine, or to an enrichment service, you are sharing personal data with a third party. Article 13(1)(e) requires you to name the recipients or categories of recipients.
The fix: Audit every external API your agents can call. Map which ones receive personal data. Add them to your privacy notice. For LLM providers specifically, check whether the data is used for model training, because that changes the legal basis and the disclosure requirements.
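One way to keep that audit current is a machine-readable registry of every API the agent can call, from which the Article 13(1)(e) recipients section is generated. The names and flag values below are illustrative, not claims about any real vendor.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExternalRecipient:
    name: str
    receives_personal_data: bool
    data_categories: tuple
    used_for_model_training: bool  # changes legal basis and disclosures

# Hypothetical registry of every external API the agent can call.
API_REGISTRY = [
    ExternalRecipient("llm-provider", True, ("query text", "chat history"), False),
    ExternalRecipient("search-api", True, ("query text",), False),
    ExternalRecipient("weather-api", False, (), False),
]

def notice_recipients(registry):
    """Only APIs that actually receive personal data belong in the
    recipients section of the privacy notice."""
    return [r.name for r in registry if r.receives_personal_data]

def training_flags(registry):
    """APIs that reuse data for model training need extra disclosure."""
    return [r.name for r in registry if r.used_for_model_training]

print(notice_recipients(API_REGISTRY))  # ['llm-provider', 'search-api']
```

Regenerating the recipients list from the registry on every release keeps the notice from silently drifting when engineers add a new integration.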
Gap 3: Explaining Automated Decisions
Article 13(2)(f) requires “meaningful information about the logic involved” in automated decision-making. For a rules-based system, that is straightforward. For an LLM-powered agent that generates its reasoning at inference time, “meaningful information” becomes a real problem.
The CJEU has clarified that controllers cannot hide behind trade secrets to avoid explaining automated decisions. You need to describe at least the type of data used, its weighting, and the general logic, even if you cannot reveal the exact model architecture.
The fix: Document each agent workflow that produces decisions affecting individuals. For each, write a plain-language explanation of what data it uses, what factors influence the outcome, and what consequences the decision has. This is not a model card or a technical spec; it is a paragraph a non-technical person can read and understand.
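A lightweight way to keep those explanations from drifting is to store them as structured records with a completeness check that can run in CI. The field names and example text here are hypothetical.

```python
# Hypothetical record format for Article 13(2)(f) documentation:
# one plain-language entry per decision-making agent workflow.
REQUIRED_FIELDS = {"workflow", "data_used", "key_factors",
                   "consequences", "explanation"}

DECISION_DOCS = [
    {
        "workflow": "discount-eligibility",
        "data_used": ["purchase history", "support-call sentiment"],
        "key_factors": ["total spend", "recent complaints"],
        "consequences": "Customer may or may not be offered a discount.",
        "explanation": (
            "The agent looks at what you have bought from us and how your "
            "recent support calls went, then decides whether to offer you "
            "a discount. A negative decision only means no discount is shown."
        ),
    },
]

def missing_fields(doc):
    """Fields a decision doc must have before it can back a notice."""
    return REQUIRED_FIELDS - doc.keys()

incomplete = [d["workflow"] for d in DECISION_DOCS if missing_fields(d)]
print(incomplete)  # []
```

The `explanation` field is the paragraph a non-technical reader should understand; the other fields are there so the check can flag a workflow that ships before its explanation is written.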
Gap 4: Layered Information for Complex Processing
Article 12(1) says information must be “concise” and “easily accessible.” A 15-page privacy notice that buries AI agent processing in paragraph 47 fails this test even if the content is technically accurate. The Article 29 Working Party guidelines on transparency explicitly recommend layered notices: a short first layer with the essentials, and progressive disclosure for details.
The fix: Create a dedicated “AI Processing” section in your privacy notice, ideally accessible from the interface where users interact with your agent. Include a first layer that explains in 2-3 sentences what the agent does with personal data. Link to a detailed layer that covers legal basis, data sources, recipients, retention periods, and decision logic.
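The layered notice can be represented as structured data that the agent UI renders: a short first layer plus a detailed layer one click away. All keys, text, and the URL below are illustrative, and the word-count check is only a rough proxy for the Article 12 "concise" standard, not a legal test.

```python
# A minimal sketch of a two-layer AI processing notice.
AI_PROCESSING_NOTICE = {
    "first_layer": (
        "Our support assistant is an AI agent. It reads your account and "
        "order data to answer questions, and may send parts of your query "
        "to external AI providers. See details for sources and your rights."
    ),
    "detail_layer": {
        "legal_basis": "Contract performance (Art. 6(1)(b) GDPR)",
        "data_sources": ["account profile", "order history"],
        "recipients": ["llm-provider", "search-api"],
        "retention": "Chat transcripts deleted after 90 days",
        "decision_logic_url": "/privacy/ai-decisions",
    },
}

def first_layer_is_concise(notice, max_words=80):
    """Rough heuristic for 'concise': the first layer should be a
    couple of sentences, not a page."""
    return len(notice["first_layer"].split()) <= max_words

print(first_layer_is_concise(AI_PROCESSING_NOTICE))  # True
```

Keeping the notice as data rather than hand-edited HTML means the detail layer can be populated from the same inventories the audit produces.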
Gap 5: Article 14 Obligations for Indirect Data Collection
If your agent collects personal data from sources other than the data subject themselves (scraping public profiles, pulling data from third-party APIs, enriching records from external databases), Article 14 applies instead of Article 13. Article 14 has the same disclosure requirements, plus you must tell the data subject the source of their data and the categories of data collected.
The fix: Identify every scenario where your agent processes data about individuals who did not directly provide it. For each, ensure your privacy notice covers the data source, the categories of data, and the purpose. Article 14(3) gives you a maximum of one month from obtaining the data to inform the data subject, or at the time of first communication if you use the data to contact them.
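The one-month clock lends itself to automation. A minimal sketch, approximating "one month" as 30 days (the GDPR text says one month, so a calendar-month computation may be more faithful in production):

```python
from datetime import date, timedelta

def article14_deadline(obtained_on, first_contact_on=None):
    """Latest date to inform the data subject under Article 14(3):
    one month after obtaining the data, or the first communication
    with them, whichever comes first."""
    one_month = obtained_on + timedelta(days=30)  # 30-day approximation
    if first_contact_on is not None and first_contact_on < one_month:
        return first_contact_on
    return one_month

print(article14_deadline(date(2026, 3, 1)))                     # 2026-03-31
print(article14_deadline(date(2026, 3, 1), date(2026, 3, 10)))  # 2026-03-10
```

Wiring this into the agent's data-acquisition path means every scraped or enriched record carries its own disclosure deadline instead of relying on a periodic manual review.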
The EU AI Act Overlap: Double Compliance
The EU AI Act’s transparency provisions take effect on August 2, 2026, just months after the EDPB launched its transparency enforcement action. This is not a coincidence. Article 50 of the AI Act adds transparency obligations on top of GDPR for certain AI systems:
- AI systems that interact with people must disclose that the user is interacting with an AI (unless it is obvious from the circumstances).
- Emotion recognition and biometric categorization systems must inform individuals that the system is operating.
- AI-generated content that could be mistaken for real (deepfakes) must be labeled.
For high-risk AI systems under the AI Act, Article 13 requires instructions for use that enable deployers to understand and use the system appropriately, including information about its performance, limitations, and foreseeable risks. That is a different kind of transparency than GDPR’s, but the two stack. You need both.
What a Dual Audit Looks Like
A DPA conducting a GDPR transparency audit in the first half of 2026 will likely also reference the AI Act’s transparency standards, even before the Act’s August 2 enforcement date. The EDPB’s 2026-2027 work programme explicitly mentions cooperation between data protection and AI regulatory frameworks. Expect auditors who know both sets of rules.
Your Transparency Audit Checklist
Before a DPA contacts you (or after, but with more urgency), run through this:
Map every AI agent’s data flows. Not what you designed, but what actually happens in production. Use runtime logs, not architecture diagrams.
Check your privacy notice against reality. Does it cover every data source your agents access? Every external API they call? Every type of decision they make?
Test the “plain language” standard. Hand your privacy notice to someone outside your engineering team. If they cannot explain what your AI agent does with personal data after reading it, your notice fails Article 12.
Implement layered disclosure. Short summary accessible from the agent interface, detailed notice one click away.
Document automated decision logic. For each agent that makes decisions affecting individuals, maintain a current plain-language explanation.
Check Article 14 scenarios. Any agent that processes data about people who did not provide it directly triggers separate disclosure requirements.
Prepare for the AI Act overlap. Ensure your AI system discloses its artificial nature where required, and that high-risk systems have adequate documentation for deployers.
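The first two checklist items reduce to a set comparison between what the notice discloses and what runtime logs show agents actually touching. All source names below are made up for illustration:

```python
# Sources the current privacy notice claims the agent uses.
NOTICE_SOURCES = {"account profile", "order history"}

# Sources runtime logs show agents actually accessed in production.
RUNTIME_SOURCES = {"account profile", "order history",
                   "linkedin activity", "credit score"}

# Accessed in production but never disclosed: the transparency gap.
undisclosed = RUNTIME_SOURCES - NOTICE_SOURCES

# Disclosed but never accessed: candidates for removal, since listing
# unused sources also undermines the 'concise' standard.
stale = NOTICE_SOURCES - RUNTIME_SOURCES

print(sorted(undisclosed))  # ['credit score', 'linkedin activity']
print(sorted(stale))        # []
```

A non-empty `undisclosed` set is exactly the kind of finding a CEF questionnaire is designed to surface, so it is worth computing before a DPA does.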
The EDPB’s coordinated enforcement actions produce public reports. The 2023 CEF report on DPOs led to binding recommendations. The 2025 report on the right to erasure changed how companies handle deletion requests. The 2026 transparency report will set the standard for how companies must explain AI-driven data processing. Better to be ahead of it than in it.
Frequently Asked Questions
What is the EDPB 2026 Coordinated Enforcement Framework about?
The EDPB’s 2026 CEF targets transparency and information obligations under GDPR Articles 12, 13, and 14. Twenty-five national Data Protection Authorities across Europe are conducting coordinated investigations into how organizations inform individuals about personal data processing. DPAs are contacting controllers through questionnaires, fact-finding exercises, and formal enforcement actions.
How does the EDPB transparency enforcement affect AI agent operators?
AI agent operators face specific transparency gaps because their systems process data dynamically, make API calls to third parties, and produce automated decisions in ways that most privacy notices do not describe. The enforcement action requires operators to explain data collection, third-party sharing, and automated decision logic in plain language that users can understand.
What GDPR articles are covered by the 2026 transparency enforcement?
The enforcement covers Articles 12, 13, and 14. Article 12 sets standards for how information must be presented (concise, transparent, plain language). Article 13 covers disclosures when data is collected directly from individuals. Article 14 covers disclosures when data is obtained from other sources, which is common for AI agents that pull data from third-party APIs or public sources.
How do the EDPB enforcement and EU AI Act overlap on transparency?
The EU AI Act’s transparency provisions take effect on August 2, 2026, shortly after the EDPB launched its GDPR transparency enforcement. The AI Act adds requirements like disclosing AI system interaction to users and providing technical documentation for high-risk systems. Companies need to comply with both sets of transparency rules, which stack rather than replace each other.
What should companies do to prepare for a GDPR transparency audit of their AI agents?
Map actual data flows from runtime logs, update privacy notices to cover all data sources and API calls, test notices against the plain language standard with non-technical readers, implement layered disclosure with a short summary accessible from agent interfaces, document automated decision logic for each agent, and check Article 14 compliance for any agent processing data not provided directly by the data subject.
