On August 2, 2026, the EU AI Act stops being theoretical. That is the date when obligations for high-risk AI systems, transparency rules, and the full enforcement framework become enforceable across all 27 member states. If your company builds, deploys, or even just uses AI systems in the EU, this deadline applies to you.
The fines are not symbolic. We are talking up to EUR 35 million or 7% of global annual turnover, whichever is higher. That makes the EU AI Act penalties steeper than GDPR's maximum of EUR 20 million or 4%. And unlike GDPR, which targets data controllers and processors, the AI Act covers providers, deployers, importers, and distributors.
Here is what you actually need to do.
How the EU AI Act Classifies Risk
The entire regulation is built on a four-tier risk framework. Your obligations depend on where your AI system falls.
Unacceptable risk AI systems are banned outright. These include social scoring by governments, real-time biometric identification in public spaces (with narrow exceptions for law enforcement), and AI that exploits vulnerabilities of specific groups. These prohibitions have been in effect since February 2, 2025.
High-risk AI systems face the heaviest compliance burden. This is where most of the August 2026 deadline action happens. Annex III of the AI Act lists eight domains:
- Biometrics: Remote identification, emotion recognition
- Critical infrastructure: AI managing energy grids, transport, water supply
- Education: Systems that decide admissions, grade exams, monitor test-takers
- Employment: CV screening, interview evaluation, performance monitoring, promotion decisions
- Essential services: Credit scoring, insurance pricing, social benefit eligibility
- Law enforcement: Crime prediction, evidence evaluation, risk assessment
- Migration and border control: Visa processing, asylum claim assessment
- Justice and democracy: AI used in judicial decision-making
If your AI system touches any of these areas, it is almost certainly high-risk.
Limited risk systems (like chatbots) face transparency obligations: users must know they are interacting with AI. Minimal risk systems (spam filters, recommendation engines) are mostly unregulated.
What High-Risk Compliance Actually Requires
This is where vague regulatory language meets operational reality. Article 16 of the AI Act lists the obligations for providers of high-risk AI systems. Here is what they mean in practice.
Risk Management System (Article 9)
You need a documented, continuously updated risk management system that runs throughout the entire lifecycle of your AI system. Not a one-time risk assessment. A living process.
This means:
- Identifying and analyzing risks to health, safety, and fundamental rights
- Estimating the likelihood and severity of each risk
- Implementing mitigation measures (technical redesign, human oversight, usage restrictions)
- Testing against defined metrics and probabilistic thresholds before deployment
- Re-evaluating after any significant system update
The risk management system must account for foreseeable misuse, not just intended use. If your hiring AI could theoretically be repurposed for surveillance, that risk needs documentation.
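The Act does not prescribe a format for the register itself. As a minimal sketch in Python, assuming invented field names and a simple 1-5 scoring scale, a living register can be as little as a versioned data structure that forces every risk through the steps above:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One entry in a living risk register (illustrative fields, not mandated by the Act)."""
    hazard: str        # e.g. "model systematically downranks one applicant group"
    affects: str       # health, safety, or fundamental rights
    likelihood: int    # 1 (rare) to 5 (near-certain), an assumed scale
    severity: int      # 1 (negligible) to 5 (critical), an assumed scale
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

# Foreseeable misuse belongs in the register too, not just the intended use case.
register = [
    RiskEntry(
        hazard="Hiring model repurposed for employee surveillance",
        affects="fundamental rights",
        likelihood=2,
        severity=4,
        mitigations=["contractual usage restrictions", "access logging"],
    ),
]

# Re-evaluate after any significant update: review entries in descending risk order.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.hazard}: score {entry.score}, last reviewed {entry.last_reviewed}")
```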
Data Governance (Article 10)
Training, validation, and testing datasets must meet specific quality standards. You need to demonstrate that your data is relevant, representative, and as free from bias as feasible. Dataset documentation must include information about data collection methods, preprocessing, assumptions, and known limitations.
For systems in employment or education, this is particularly rigorous. If your AI screens resumes, you need to prove the training data does not systematically disadvantage protected groups.
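The Act also does not prescribe a specific bias metric. One common screening heuristic, sketched below with an assumed 0.8 disparity cutoff (a convention borrowed from employment-testing practice, not a legal standard), is to compare selection rates across groups in your labels or outputs:

```python
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Share of positive outcomes per group; 'group' and 'selected' keys are illustrative."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["selected"])
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below `threshold` times the best group's rate.
    The 0.8 cutoff is a common screening heuristic, not an AI Act requirement."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]

data = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True}, {"group": "B", "selected": False},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]
rates = selection_rates(data)
print(rates)                   # A: ~0.67, B: 0.25
print(disparity_flags(rates))  # ['B'], so investigate and document before deployment
```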
Technical Documentation (Article 11)
Every high-risk AI system needs comprehensive technical documentation that allows authorities to assess compliance. This includes system architecture, development methodology, training procedures, performance metrics, and the risk management documentation from Article 9.
Think of it as building a complete audit trail from conception to deployment.
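One way to keep that trail auditable is to treat the documentation as a versioned artifact and check it mechanically. The layout below is an assumed convention that loosely mirrors the Annex IV topics; the annex itself is the authoritative list:

```python
from pathlib import Path

# Illustrative layout loosely mirroring Annex IV topics; adapt to your system.
REQUIRED_DOCS = {
    "general_description.md": "Intended purpose, versions, hardware, deployer instructions",
    "architecture.md":        "System design, development methodology, third-party components",
    "data_sheet.md":          "Training/validation/test data provenance and limitations (Art. 10)",
    "risk_management.md":     "Living risk register and mitigations (Art. 9)",
    "metrics.md":             "Accuracy, robustness, cybersecurity test results",
    "post_market_plan.md":    "Monitoring and incident-reporting procedures",
}

def audit_docs(root: str) -> list[str]:
    """Return required documents missing from the documentation folder."""
    return [name for name in REQUIRED_DOCS if not (Path(root) / name).is_file()]

missing = audit_docs("compliance/technical_documentation")
if missing:
    print("Documentation gaps:", ", ".join(missing))
```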
Human Oversight (Article 14)
High-risk AI systems must be designed so humans can effectively oversee them. This is not a checkbox. The regulation requires that the human overseer can understand the system’s capabilities and limitations, interpret its outputs correctly, and override or interrupt the system at any time.
For AI systems making employment decisions, this means a qualified human must review any automated recommendation before it becomes final. The “human-in-the-loop” is not optional; it is legally required.
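In system terms: the automated path must not be able to finalize a decision on its own. A minimal sketch, assuming a hiring-style flow with invented record and reviewer names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    candidate_id: str
    model_output: str    # e.g. "reject" (illustrative)
    confidence: float
    rationale: str       # must be interpretable by the human reviewer

@dataclass
class Decision:
    recommendation: Recommendation
    reviewer_id: str     # the qualified human who made the final call
    final_outcome: str
    notes: Optional[str] = None

def finalize(rec: Recommendation, reviewer_id: str,
             final_outcome: str, notes: Optional[str] = None) -> Decision:
    """The only path to a final decision runs through a named human reviewer,
    who can accept, override, or interrupt the automated recommendation."""
    if not reviewer_id:
        raise ValueError("no automated finalization: a human reviewer is required")
    return Decision(rec, reviewer_id, final_outcome, notes)

rec = Recommendation("c-1042", "reject", 0.71, "skills-match score below threshold")
decision = finalize(rec, reviewer_id="hr-reviewer-7", final_outcome="advance",
                    notes="override: relevant experience was not parsed by the model")
print(decision.final_outcome)  # "advance": the human overrode the model
```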
Conformity Assessment
Before placing a high-risk AI system on the EU market, providers must complete a conformity assessment. For most Annex III systems, this is an internal assessment (Annex VI): you verify your own compliance against all requirements. No third-party auditor needed, but you need to document everything.
The exception: biometric identification systems require third-party assessment by a notified body. Same for AI systems embedded in products already covered by EU harmonization legislation (medical devices, machinery, toys), where the existing conformity assessment under that product legislation applies.
After passing conformity assessment, you affix the CE marking and register the system in the EU database for high-risk AI systems.
The Enforcement Reality: Who Watches the Watchers
Each EU member state must designate national competent authorities for market surveillance by August 2, 2025. In Germany, the Bundesnetzagentur (Federal Network Agency) has taken the lead role and is building a dedicated AI market surveillance unit.
At the EU level, the European AI Office, established within the European Commission, coordinates enforcement and handles General-Purpose AI Model (GPAI) oversight directly.
Fine Structure
The penalty tiers are:
| Violation | Maximum fine (whichever is higher) |
|---|---|
| Prohibited AI practices | EUR 35M or 7% of global turnover |
| High-risk system obligations | EUR 15M or 3% of global turnover |
| Incorrect information to authorities | EUR 7.5M or 1% of global turnover |
For SMEs and startups, fines are capped at the lower of the two options (fixed amount or percentage). But a proportional cap is not a trivial one: a company with EUR 50 million in revenue still faces a potential EUR 1.5 million fine for non-compliance with high-risk requirements.
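The "higher of" and "lower of" rules are easy to get backwards, so here is the tier table as a quick sanity check in code (amounts in EUR, tier names invented):

```python
TIERS = {  # (fixed cap in EUR, share of global annual turnover)
    "prohibited_practice":   (35_000_000, 0.07),
    "high_risk_obligation":  (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, turnover: float, is_sme: bool = False) -> float:
    """Higher of the two caps for most companies; lower of the two for SMEs/startups."""
    fixed, pct = TIERS[violation]
    pick = min if is_sme else max
    return pick(fixed, pct * turnover)

# The example from the text: EUR 50M turnover, high-risk violation, SME cap.
print(max_fine("high_risk_obligation", 50_000_000, is_sme=True))  # 1500000.0
# A large provider with EUR 2B turnover faces the percentage, not the fixed cap:
print(max_fine("high_risk_obligation", 2_000_000_000))            # 60000000.0
```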
Your Six-Step Compliance Roadmap
Based on guidance from Orrick and the European Commission’s AI Act Service Desk, here is a practical action plan.
Step 1: Build Your AI Inventory (Do This Now)
Map every AI system your company builds, buys, imports, distributes, or deploys in the EU. Include third-party AI embedded in products you sell. For each system, document:
- What it does and who uses it
- What data it processes
- Where it is deployed (which EU markets)
- Who the provider is (you, a vendor, or both)
Many companies discover they have 3-5x more AI systems than they thought. That marketing tool with “smart recommendations”? Probably AI. The chatbot on your website? AI. The fraud detection in your payment system? Definitely AI.
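Even a flat file beats no inventory. Here is a sketch of what one record might look like, with assumed field names mirroring the checklist above:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI inventory; field names are illustrative, not mandated."""
    name: str
    purpose: str                       # what it does and who uses it
    data_processed: list[str]
    eu_markets: list[str]              # where it is deployed
    provider: str                      # you, a vendor, or both
    embedded_in_product: bool = False  # third-party AI in products you sell counts

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="ranks inbound applications for recruiters",
        data_processed=["CVs", "assessment scores"],
        eu_markets=["DE", "FR"],
        provider="vendor plus in-house fine-tuning",
    ),
    AISystemRecord(
        name="payment-fraud-detection",
        purpose="scores transactions for fraud review",
        data_processed=["transaction metadata"],
        eu_markets=["all EU"],
        provider="vendor",
    ),
]
print(f"{len(inventory)} systems inventoried")
```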
Step 2: Classify Your Role
For each AI system, determine whether you are a provider (you developed it or put your name on it), a deployer (you use it in your operations), an importer (you bring non-EU AI into the market), or a distributor (you make it available to others).
Your obligations differ significantly by role. Providers carry the heaviest burden. Deployers still have real obligations: ensuring human oversight, monitoring for risks, keeping logs, and reporting serious incidents.
Step 3: Risk-Classify Each System
Run each AI system through the Annex III categories. Be honest. A system that “assists” hiring decisions is still high-risk if it meaningfully influences outcomes. The Commission’s guidelines on Article 6, published in February 2026, provide practical examples.
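Classification ultimately needs legal review, but a coarse triage pass over your inventory helps you sequence the work. The sketch below hard-codes the Annex III domains as tags; matching a tag is an assumed shortcut, not the legal test:

```python
# Annex III domains as triage tags; the legal test is more nuanced than tag matching.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice_democracy",
}

def triage(domains: set[str], influences_outcomes: bool) -> str:
    """Coarse first pass. 'Assists' still counts if it meaningfully influences outcomes."""
    if domains & ANNEX_III_DOMAINS and influences_outcomes:
        return "likely high-risk: full requirements apply, get legal review"
    if domains & ANNEX_III_DOMAINS:
        return "borderline: check the Commission's Article 6 guidelines"
    return "likely limited/minimal risk: check transparency duties"

print(triage({"employment"}, influences_outcomes=True))
```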
Step 4: Gap Analysis Against Requirements
For every high-risk system, compare your current state against the requirements: risk management, data governance, documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. The gap between “what we have” and “what we need” is your compliance work plan.
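A gap analysis can be as simple as a per-system matrix of required controls against current state. The sketch below uses the requirement areas just listed, with status values as an assumed convention:

```python
REQUIREMENTS = [  # the requirement areas named above
    "risk_management", "data_governance", "technical_documentation",
    "transparency", "human_oversight", "accuracy", "robustness", "cybersecurity",
]

# Status per requirement: "done", "partial", or "missing" (assumed convention).
current_state = {
    "resume-screener": {
        "risk_management": "partial", "data_governance": "missing",
        "technical_documentation": "partial", "transparency": "done",
        "human_oversight": "done", "accuracy": "partial",
        "robustness": "missing", "cybersecurity": "done",
    },
}

for system, state in current_state.items():
    gaps = [req for req in REQUIREMENTS if state.get(req, "missing") != "done"]
    print(f"{system}: {len(gaps)} gaps -> {gaps}")
```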
Step 5: Implement Governance Framework
Establish an AI governance structure. This does not require a new department. It requires clear ownership: who is responsible for AI risk decisions, who signs off on conformity assessments, who handles incident reporting, and who keeps documentation current.
The IHK Munich recommends integrating AI governance into existing compliance structures rather than building parallel systems.
Step 6: Prepare for Ongoing Compliance
The AI Act is not a one-time certification. Post-market monitoring is mandatory. You must track system performance, report serious incidents to national authorities within specified timeframes, and re-run conformity assessments after substantial modifications.
Set up monitoring dashboards, incident response procedures, and a regular review cycle. Budget for it. This is not a project with an end date; it is a permanent operational cost.
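Tracking deadlines mechanically beats tracking them from memory. In the sketch below the reporting windows are placeholders; check the timeframes that actually apply to your incident type under Article 73 before relying on any constants:

```python
from datetime import date, timedelta
from typing import Optional

# Placeholder windows; verify the actual Article 73 timeframes for your incident type.
REPORTING_WINDOW_DAYS = {"serious_incident": 15, "widespread_infringement": 2}

def reporting_deadline(incident_type: str, became_aware: date) -> date:
    """Date by which the national authority must be notified (placeholder windows)."""
    return became_aware + timedelta(days=REPORTING_WINDOW_DAYS[incident_type])

def overdue(incident_type: str, became_aware: date,
            today: Optional[date] = None) -> bool:
    return (today or date.today()) > reporting_deadline(incident_type, became_aware)

print(reporting_deadline("serious_incident", date(2026, 9, 1)))  # 2026-09-16
```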
What About General-Purpose AI Models?
If you build or deploy general-purpose AI models (think: foundation models like GPT-4, Claude, or Gemini), a separate set of rules under Articles 51-56 applies. These are enforced by the EU AI Office directly, not national authorities.
GPAI obligations include technical documentation, copyright compliance policies, and transparency about training data. Models classified as having “systemic risk” (roughly: models trained with more than 10^25 FLOPs) face additional requirements including adversarial testing and incident monitoring.
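The systemic-risk presumption is, unusually for this regulation, a plain numeric threshold. The check is trivial once you can estimate cumulative training compute; the 6-FLOPs-per-parameter-per-token rule of thumb below is a common estimate from the scaling literature, not the Act's prescribed accounting:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough transformer-training estimate: ~6 FLOPs per parameter per token
    (a common rule of thumb, not the Act's prescribed accounting method)."""
    return 6 * params * tokens

compute = estimated_training_flops(params=70e9, tokens=15e12)  # 70B params, 15T tokens
print(f"{compute:.2e} FLOPs, systemic-risk presumption: {compute >= SYSTEMIC_RISK_FLOPS}")
```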
For most companies, this matters because you are deploying GPAI models through APIs, not building them. Your risk is indirect: ensure your AI vendor is compliant, and build your own system-level compliance (human oversight, transparency, documentation) on top.
The Standards Gap
Here is the uncomfortable truth: the European standardization bodies (CEN and CENELEC) were supposed to deliver harmonized technical standards by fall 2025. They missed the deadline. Current estimates put delivery at late 2026, months after enforcement begins.
This means companies must comply with the law before the detailed technical standards exist. The practical approach: use the Act’s requirements directly, supplement with existing frameworks (ISO 42001 for AI management systems, ISO 23894 for AI risk management), and document your reasoning.
When harmonized standards eventually arrive, you may need to adjust. But demonstrating good-faith compliance effort now is far better than waiting.
Frequently Asked Questions
When does the EU AI Act take full effect?
The EU AI Act rolls out in phases. Prohibitions on unacceptable-risk AI took effect February 2, 2025. The major enforcement date is August 2, 2026, when obligations for high-risk AI systems (Annex III), transparency requirements, and the market surveillance framework become enforceable. High-risk AI embedded in products covered by existing EU harmonization legislation has until August 2, 2027.
What are the fines for EU AI Act non-compliance?
Fines can reach EUR 35 million or 7% of global annual turnover for prohibited AI practices, EUR 15 million or 3% for high-risk system violations, and EUR 7.5 million or 1% for providing incorrect information to authorities. SMEs and startups face proportionally lower caps.
Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act applies to any company that places AI systems on the EU market or whose AI system outputs are used in the EU, regardless of where the company is headquartered. This extraterritorial scope is similar to GDPR.
What counts as a high-risk AI system under the EU AI Act?
High-risk AI systems are those used in eight specific domains listed in Annex III: biometrics, critical infrastructure, education, employment, essential services (credit scoring, insurance), law enforcement, migration and border control, and justice and democracy. AI systems embedded in products covered by existing EU safety legislation are also high-risk.
Do I need a third-party audit for EU AI Act compliance?
Most high-risk AI systems listed in Annex III can undergo internal conformity assessment (Annex VI), meaning you self-certify compliance. Third-party assessment by a notified body is required only for biometric identification systems and AI embedded in products already subject to third-party conformity assessment under existing EU legislation.
The EU AI Act is the most significant AI regulation globally. We track compliance developments and their impact on AI deployment. Subscribe to stay informed.
