
On August 2, 2026, the penalty ceiling for deploying an AI agent without proper documentation jumps from €20 million to €55 million. That is not a typo. The EU AI Act’s high-risk provisions stack on top of existing GDPR fines, creating a dual enforcement regime where a single AI agent processing personal data can trigger violations under both frameworks simultaneously. The GDPR caps fines at €20 million or 4% of global turnover. The AI Act adds up to €35 million or 7% of global turnover. Both apply. Neither replaces the other.

The specific obligation that most companies are missing: any AI agent that processes personal data in a way that presents high risk to individuals now requires not one but two formal assessments before deployment. A Data Protection Impact Assessment (DPIA) under GDPR Article 35, and a Fundamental Rights Impact Assessment (FRIA) under AI Act Article 27. Five months from now, deploying without both is a compliance violation with real enforcement teeth.

Related: EU AI Act 2026: What Companies Need to Do Before August

The DPIA Is No Longer Optional for AI Agents

GDPR Article 35 has required DPIAs since 2018 for processing that is “likely to result in a high risk to the rights and freedoms of natural persons.” For years, many companies treated this as a checkbox exercise, or skipped it entirely for internal tools. AI agents make that approach untenable.

The CNIL (France’s data protection authority) stated explicitly that for high-risk AI systems involving personal data, conducting a DPIA is “in principle necessary.” Not recommended. Necessary. Germany’s BfDI maintains a blacklist of processing operations that always require a DPIA. Profiling, automated decision-making, and systematic monitoring of public spaces are all on it. An AI agent that scores job candidates, triages customer complaints by sentiment, or routes insurance claims based on risk profiles hits multiple triggers simultaneously.

Why AI Agents Hit Every Trigger

Article 35(3) lists three scenarios where a DPIA is always required:

  1. Systematic and extensive profiling with significant effects: An AI agent evaluating employee performance, scoring loan applications, or ranking sales leads based on behavioral data qualifies here.

  2. Large-scale processing of special category data: Any agent handling health records, biometric data, or data revealing racial or ethnic origin triggers this automatically.

  3. Systematic monitoring of publicly accessible areas: AI agents scraping public web data, monitoring social media, or analyzing publicly posted reviews fall under this provision.

The Article 29 Working Party guidelines (endorsed by the EDPB) add a practical test: if a processing operation meets two or more criteria from their nine-item list (which includes “innovative use of technology,” “evaluation or scoring,” and “data concerning vulnerable subjects”), a DPIA is required. AI agents typically meet four or five criteria on that list.
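
That two-criteria screen is mechanical enough to express in code. A minimal sketch in Python; the nine criterion labels paraphrase the WP248 guidelines and are our shorthand, not legal terms:

```python
# Minimal sketch of the WP29 "two or more criteria" screening test.
# Criterion labels paraphrase the WP248 guidelines; they are not legal terms.
WP29_CRITERIA = {
    "evaluation or scoring",
    "automated decision-making with legal or similar effect",
    "systematic monitoring",
    "sensitive or highly personal data",
    "large-scale processing",
    "matching or combining datasets",
    "data concerning vulnerable subjects",
    "innovative use of technology",
    "processing that blocks a right or a service",
}

def dpia_presumed_required(criteria_met: set[str]) -> bool:
    """Two or more WP29 criteria met => a DPIA is presumed required."""
    unknown = criteria_met - WP29_CRITERIA
    if unknown:
        raise ValueError(f"unrecognized criteria: {unknown}")
    return len(criteria_met) >= 2

# A typical AI recruiting agent hits at least four criteria at once:
print(dpia_presumed_required({
    "evaluation or scoring",
    "automated decision-making with legal or similar effect",
    "innovative use of technology",
    "large-scale processing",
}))  # True
```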

The bottom line: it is difficult to construct a scenario where an AI agent processes personal data and a DPIA is not required. The legal presumption has effectively flipped from “assess whether you need one” to “document why you think you don’t.”

Related: GDPR and AI Agents: Data Protection When Machines Make Decisions

The FRIA: A Second Assessment Most Companies Have Not Heard Of

The EU AI Act Article 27 introduces the Fundamental Rights Impact Assessment, a requirement with no precedent in European data protection law. Unlike the DPIA, which focuses on data processing risks, the FRIA evaluates broader impacts on fundamental rights: non-discrimination, freedom of expression, human dignity, and due process.

Who Must Conduct a FRIA

The obligation applies to two categories of deployers using Annex III high-risk AI systems:

  • Public bodies and entities providing public services: Government agencies, public hospitals, state universities, utilities, and any private company performing public functions (outsourced welfare processing, public transport, municipal services).

  • Deployers in credit and insurance: Any entity using AI systems to evaluate creditworthiness, establish credit scores, or perform risk assessment and pricing for life and health insurance.

That second category is broader than it appears. A fintech company using an AI agent to pre-screen loan applications, an insurer using one to calculate policy premiums, and a bank using one to set credit limits all fall within scope. The one carve-out worth knowing: Annex III explicitly excludes AI systems used to detect financial fraud from the creditworthiness category.

What the FRIA Requires

According to Freshfields’ analysis of Article 27, deployers must document the following (a schema sketch follows the list):

  • A description of the deployer’s processes in which the AI system will be used
  • The period of time and frequency of use
  • The categories of persons and groups likely to be affected
  • The specific risks of harm to the identified categories
  • A description of human oversight measures
  • The measures to be taken if risks materialize, including internal governance and complaint mechanisms
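
Those six items map naturally onto a structured record that can live next to the agent’s configuration. A minimal sketch in Python; the field names are illustrative shorthand for the Article 27 items, not official terminology:

```python
from dataclasses import dataclass

@dataclass
class FriaRecord:
    """One FRIA entry; fields mirror the six Article 27 items listed above.
    Field names are illustrative shorthand, not official terminology."""
    deployer_processes: str         # where and how the AI system is used
    period_and_frequency: str       # e.g. "continuous; re-scored nightly"
    affected_groups: list[str]      # categories of persons likely affected
    risks_of_harm: dict[str, str]   # affected group -> specific risk of harm
    human_oversight: str            # oversight measures in place
    governance_measures: list[str]  # steps if risks materialize, including
                                    # internal governance and complaints
    last_reviewed: str = ""         # ISO date; update when factors change
```

Keeping the record structured also turns the notification to the market surveillance authority into a serialization task rather than a drafting exercise.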

The FRIA must be completed before first use of the high-risk system. It must be updated whenever relevant factors change. Results must be notified to the competent market surveillance authority. The AI Office will provide a template, but waiting for that template is not a compliance strategy: the deadline does not shift because the template arrives late.

DPIA + FRIA: One Assessment or Two?

The AI Act explicitly allows combining both assessments into a single document. Recital 96 of the AI Act states that where a DPIA has already been conducted, the FRIA shall complement it. In practice, this means: start with your GDPR DPIA, then extend it to cover the fundamental rights dimensions the FRIA requires.

This is the only rational approach. Running two separate assessments for the same AI system doubles the documentation burden without improving protection. The combined assessment should address data processing risks (DPIA scope) and broader rights impacts (FRIA scope) in a single structured document.

Related: Shadow AI Agents: The Governance Crisis Hiding in Plain Sight

Conducting a Combined DPIA + FRIA for AI Agents: A Practical Framework

Most DPIA templates were designed for traditional data processing: a database, a defined set of fields, a predictable data flow. AI agents break that model. An agent might call an API it was not originally configured to use, process data in ways its developers did not anticipate, or chain decisions across multiple systems in a single execution. Your assessment needs to account for this dynamic behavior.

Step 1: Map the Agent’s Data Flows and Decision Paths

Document every data source the agent can access, every external API it can call, and every action it can take. For each decision the agent makes, identify: what personal data feeds into that decision, what categories of people are affected, and what the consequences of an incorrect decision would be.

This is harder than it sounds. Unlike traditional software where data flows are deterministic, an AI agent’s behavior can vary based on its prompt, the model’s reasoning, and the tools available to it. Your mapping must cover the envelope of possible behaviors, not just the intended ones. IAPP’s analysis of agentic AI compliance highlights this exact problem: “When an AI agent rewrites its plan mid-run and calls an API that never made it into your data protection impact assessment, static controls collapse.”
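
One way to keep an assessment honest about that envelope is to enforce it at runtime: every tool call the agent attempts is checked against the inventory documented in the assessment, and anything outside it is blocked and logged as a trigger for reassessment. A minimal sketch; the tool names and the wrapper are hypothetical, not any specific agent framework’s API:

```python
import logging
from datetime import datetime, timezone

# Tools documented in the combined DPIA + FRIA.
# Anything outside this envelope was never assessed.
ASSESSED_TOOLS = {"crm_lookup", "email_send", "ticket_update"}

log = logging.getLogger("agent.audit")

def guarded_tool_call(tool_name, tool_fn, *args, **kwargs):
    """Run a tool call only if it falls inside the assessed envelope."""
    if tool_name not in ASSESSED_TOOLS:
        # The failure mode from the IAPP quote: an API that never made it
        # into the assessment. Block it and flag the assessment for review.
        log.error("blocked unassessed tool %s at %s", tool_name,
                  datetime.now(timezone.utc).isoformat())
        raise PermissionError(f"'{tool_name}' is outside the assessed envelope")
    log.info("tool=%s args=%r", tool_name, args)
    return tool_fn(*args, **kwargs)
```

The block log then doubles as the changelog for the next assessment review.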

Step 2: Assess Risks Against Both Frameworks

For the DPIA component, evaluate risks to data subjects: unauthorized access, inaccurate profiling, discriminatory outcomes, lack of transparency, inability to exercise data rights (access, rectification, erasure).

For the FRIA component, evaluate broader fundamental rights impacts: does the agent’s operation risk discriminating against protected groups? Could it restrict access to essential services? Does it affect people’s ability to seek redress? Does it create power imbalances between the deployer and affected individuals?
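
In practice both steps can feed a single risk register, with each entry tagged by the framework it answers to. A minimal sketch; the severity scale and example entries are our own assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class Framework(Enum):
    DPIA = "GDPR Art. 35"    # data processing risks
    FRIA = "AI Act Art. 27"  # fundamental rights impacts

@dataclass
class Risk:
    framework: Framework
    description: str
    affected_group: str
    severity: str    # e.g. "low" / "medium" / "high" -- our own scale
    mitigation: str  # filled in during Step 3

register = [
    Risk(Framework.DPIA, "inaccurate profiling of applicants",
         "job candidates", "high", ""),
    Risk(Framework.FRIA, "indirect discrimination against a protected group",
         "job candidates", "high", ""),
]
```

Tagging by framework keeps the combined document auditable: a regulator can filter for the FRIA entries without untangling them from the DPIA.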

Step 3: Define Mitigation Measures and Human Oversight

For each identified risk, document specific mitigations. “We will monitor the system” is not a mitigation. “A human reviewer approves every agent decision that would result in denial of service, with a maximum response time of 48 hours and an escalation path for contested decisions” is a mitigation.

The EU AI Act requires meaningful human oversight for high-risk systems, not just a human somewhere in the loop who rubber-stamps outputs. The assessment must demonstrate that human oversight is designed to catch the specific risks you identified, not just check a box.
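
The 48-hour example above translates directly into a gating rule: decision types the assessment flagged as consequential never auto-execute. A minimal sketch with hypothetical decision types and an in-memory queue standing in for a real ticketing system:

```python
from datetime import datetime, timedelta, timezone

# Decision types the assessment flagged as requiring human approval.
REQUIRES_HUMAN_REVIEW = {"deny_service", "reject_application"}
REVIEW_SLA = timedelta(hours=48)  # maximum response time from the assessment

review_queue: list[dict] = []  # stand-in for a real ticketing system

def execute_decision(decision_type: str, payload: dict) -> str:
    """Auto-execute only decisions the assessment cleared for automation."""
    if decision_type in REQUIRES_HUMAN_REVIEW:
        review_queue.append({
            "decision": decision_type,
            "payload": payload,
            "review_due": datetime.now(timezone.utc) + REVIEW_SLA,
            "escalation": "contested decisions escalate to the DPO",
        })
        return "pending_human_review"
    return "executed"
```

The queue contents, response times, and escalation outcomes then become the evidence that oversight is meaningful rather than a rubber stamp.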

Step 4: Document, Notify, and Schedule Reviews

File the combined assessment with your Data Protection Officer. For the FRIA component, notify the market surveillance authority. Set a review schedule: at minimum annually, and whenever the agent’s capabilities, data sources, or deployment context change. If your agent gets a new tool, a new data source, or a new use case, the assessment needs an update.
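
One lightweight way to operationalize “update on change” is to fingerprint the assessed capability set and compare it on every deployment. A minimal sketch; the tools and sources shown are hypothetical:

```python
import hashlib
import json

def capability_fingerprint(tools: list[str], data_sources: list[str],
                           use_cases: list[str]) -> str:
    """Stable hash of the capabilities the assessment actually covered."""
    blob = json.dumps(
        {"tools": sorted(tools), "sources": sorted(data_sources),
         "use_cases": sorted(use_cases)},
        sort_keys=True,
    )
    return hashlib.sha256(blob.encode()).hexdigest()

# Recorded when the combined DPIA + FRIA was signed off.
assessed = capability_fingerprint(
    ["crm_lookup", "email_send"], ["crm_db"], ["complaint triage"])

# Run on every deployment: any drift means the assessment is stale.
current = capability_fingerprint(
    ["crm_lookup", "email_send", "payments_api"], ["crm_db"],
    ["complaint triage"])

print("assessment stale:", current != assessed)  # True: a new tool was added
```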

The Enforcement Reality: Why This Matters Now

Italy’s Garante imposed a €15 million fine on OpenAI for processing personal data without a legal basis or adequate transparency. Spain’s AEPD fined a university €750,000 for using AI-driven facial recognition on exam students. These are GDPR-only fines under the current regime. After August 2, 2026, similar violations involving high-risk AI systems face penalties under both frameworks.

The arithmetic is straightforward. A company deploying an AI recruiting agent without a DPIA faces up to €10 million or 2% of global turnover under GDPR Article 83(4); if the underlying processing also violates basic principles such as lawfulness or transparency, the GDPR ceiling rises to €20 million or 4% under Article 83(5). If that same agent qualifies as high-risk under Annex III (which recruiting tools do, under point 4), deploying without a FRIA adds up to €15 million or 3% under AI Act Article 99, the same tier that covers a missing conformity assessment. The Act’s €35 million or 7% ceiling is reserved for prohibited practices under Article 5. These fines compound. They do not merge.
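
Each tier reads “up to a fixed amount or a percentage of worldwide annual turnover, whichever is higher,” so for any large company the percentage dominates and the headline figures are floors on the ceilings, not worst cases. A quick sketch of that arithmetic with a hypothetical turnover, using the two frameworks’ top tiers:

```python
def fine_ceiling(fixed_eur: int, pct: float, turnover_eur: int) -> int:
    """EU-style cap: the fixed sum or the turnover share, whichever is higher."""
    return max(fixed_eur, int(pct * turnover_eur))

turnover = 2_000_000_000  # hypothetical €2bn global annual turnover

gdpr_top = fine_ceiling(20_000_000, 0.04, turnover)    # GDPR Art. 83(5): €80m
ai_act_top = fine_ceiling(35_000_000, 0.07, turnover)  # AI Act Art. 99(3): €140m

print(f"combined theoretical ceiling: €{gdpr_top + ai_act_top:,}")
# combined theoretical ceiling: €220,000,000
```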

In Germany, the situation carries an additional wrinkle: as of March 2026, the national supervisory authority for the AI Act has not been formally designated. The Bundesnetzagentur (Federal Network Agency) is the leading candidate, but until the designation is finalized, enforcement responsibilities remain ambiguous. That ambiguity is not protection. It means that when enforcement begins, there will be a backlog of violations accumulated during the gap period.

Companies that complete their assessments now, before August 2, have a documented good-faith compliance posture. Companies that wait are betting that their AI agents will not attract regulatory attention during the most scrutinized compliance deadline since GDPR enforcement began.

Related: AI Agent Privacy in 2026: Why Traditional Governance Breaks When Agents Act Autonomously

Frequently Asked Questions

Is a DPIA mandatory for all AI agents under GDPR?

Practically yes. GDPR Article 35 requires a DPIA whenever processing is likely to result in high risk to individuals. AI agents that process personal data typically trigger multiple high-risk criteria simultaneously: profiling, automated decision-making, innovative technology use, and large-scale processing. The CNIL has stated that DPIAs are “in principle necessary” for high-risk AI systems involving personal data.

What is the difference between a DPIA and a FRIA under the EU AI Act?

A DPIA (Data Protection Impact Assessment) under GDPR Article 35 evaluates risks to personal data and individual privacy. A FRIA (Fundamental Rights Impact Assessment) under EU AI Act Article 27 evaluates broader impacts on fundamental rights including non-discrimination, freedom of expression, human dignity, and due process. Both can be combined into a single assessment document.

Can GDPR and EU AI Act fines stack for the same AI agent?

Yes. The EU AI Act does not replace the GDPR. A single AI agent can violate both frameworks simultaneously. GDPR fines reach up to €20 million or 4% of global turnover. AI Act fines reach up to €35 million or 7% of global turnover. Both apply independently, creating a combined theoretical ceiling of €55 million for a single system.

Who must conduct a Fundamental Rights Impact Assessment?

Under AI Act Article 27, FRIAs are mandatory for deployers that are public bodies or provide public services, and for private deployers using AI for creditworthiness evaluation, credit scoring, or risk assessment and pricing in life and health insurance. The assessment must be completed before first use of the high-risk AI system.

When is the deadline for AI Act high-risk compliance?

August 2, 2026. By this date, deployers of Annex III high-risk AI systems must have completed conformity assessments, technical documentation, FRIA where applicable, and EU database registration. Given that proper assessments take 3 to 6 months to prepare, organizations should start immediately if they have not already.