Germany now has its own AI law. On February 11, 2026, the Federal Cabinet approved the KI-MIG (KI-Marktüberwachungs- und Innovationsförderungsgesetz, or AI Market Surveillance and Innovation Promotion Act), the national law that translates the EU AI Act into enforceable German regulation. The Bundesnetzagentur, the same agency that oversees telecoms and the Digital Services Act, becomes Germany’s central AI regulator. BaFin handles financial-sector AI. The Bundeskartellamt gets a seat at the table for competition-related concerns.

For companies operating in Germany, this is no longer abstract Brussels policy. The KI-MIG assigns real regulators with real enforcement powers, establishes a new KI-Service-Desk for compliance questions, and creates regulatory sandboxes where companies can test AI systems under supervised conditions. The law still needs Bundestag and Bundesrat approval, but the Cabinet draft sets the architecture.

Here is what actually changes and what you need to do about it.

Related: EU AI Act 2026: What Companies Need to Do Before August

Who Regulates What: The German AI Authority Map

The KI-MIG does not create a new mega-agency. Instead, it distributes AI oversight across existing regulators, with the Bundesnetzagentur (BNetzA) acting as the hub. Federal Digital Minister Dr. Karsten Wildberger put it bluntly: “We are not building an additional bureaucratic agency; we are leveraging existing structures.”

Bundesnetzagentur: The Central Coordinator

The BNetzA takes on four roles simultaneously:

  • Market surveillance authority for AI systems where no sector-specific regulator exists
  • Notifying body that accredits conformity assessment organizations
  • Central contact point for the European Commission and the EU AI Office
  • Complaints office where anyone can report non-compliant AI systems

In practice, this means the BNetzA covers the broadest territory: biometric systems, critical infrastructure AI, AI in education, workplace AI, public services, and law enforcement applications. If your AI system does not fall neatly into another regulator’s domain, the BNetzA is your primary authority.

The agency is not starting from scratch. It already supervises Germany’s Digital Services Act enforcement for platforms like Meta, YouTube, and TikTok. The AI team builds on that infrastructure.

BaFin: Financial Sector AI

The Federal Financial Supervisory Authority (BaFin) becomes the market surveillance authority for high-risk AI systems directly tied to regulated financial activities. Credit scoring algorithms, insurance pricing models, fraud detection systems, and algorithmic trading all fall under BaFin’s watch.

This matters because financial AI was already subject to existing BaFin oversight through MaRisk (Minimum Requirements for Risk Management) and BAIT (Banking Supervisory Requirements for IT). The KI-MIG adds the EU AI Act’s specific requirements on top, but companies in the financial sector deal with a regulator that already understands their technology stack.

KoKIVO: The Coordination Hub

The KI-MIG establishes the KoKIVO (Koordinierungs- und Kompetenzzentrum für die KI-Regulierung), a Coordination and Competence Center for AI Regulation housed within the Bundesnetzagentur. This center coordinates between all AI-related government activities and provides sector-specific guidance.

The KoKIVO also runs the KI-Service-Desk, a first point of contact where companies, particularly SMEs and startups, can ask compliance questions and get practical guidance. Think of it as a regulatory helpline, not an enforcement unit.

Existing Sector Regulators Keep Their Domains

For AI embedded in products already covered by EU harmonization legislation, the existing sector-specific authorities remain responsible. Medical device AI stays with the relevant health authority. Machinery AI stays with product safety regulators. Radio equipment AI stays with the telecommunications authority. The KI-MIG deliberately avoids creating jurisdictional conflicts by respecting established regulatory lanes.

Key Deadlines: What Happens When

The EU AI Act rolls out in phases, and the KI-MIG follows the same timeline. Here is the sequence that matters for companies in Germany:

| Date | What Happens |
| --- | --- |
| February 2, 2025 | Prohibitions on unacceptable-risk AI practices already in effect |
| July 2025 | KI-Service-Desk launched; AI literacy guidance published |
| August 2, 2026 | High-risk AI obligations enforceable; transparency rules for all AI systems kick in |
| August 2, 2027 | Extended deadline for high-risk AI in products covered by existing EU harmonization legislation |

The August 2, 2026 date is the one most companies need to circle. From that day, every high-risk AI system deployed in Germany must comply with the full set of EU AI Act requirements: risk management systems, data governance, technical documentation, human oversight, accuracy and robustness standards, and cybersecurity measures.

Germany missed the EU’s original deadline of August 2, 2025 for designating national authorities, which is why the KI-MIG is moving through parliament on an accelerated timeline. The law still requires Bundestag and Bundesrat approval, but opposition is minimal given the bipartisan consensus on innovation-friendly regulation.

Related: Singapore's Agentic AI Governance Framework: What the First Global Playbook Gets Right

The Innovation Side: Regulatory Sandboxes and Real-World Labs

The “I” in KI-MIG stands for Innovation (Innovationsförderung), and the law includes concrete measures that go beyond compliance mandates.

KI-Reallabore (AI Real-World Labs)

The Bundesnetzagentur will establish regulatory sandboxes where companies can test innovative AI applications under simplified conditions. The rules:

  • Low barriers to entry
  • Digitized application procedures
  • Binding feedback within 30 days on whether your AI application complies with the regulation

That last point is significant. Instead of spending months guessing whether your AI system qualifies as high-risk, you can get a definitive regulatory answer within a month. For startups iterating quickly, this removes a major source of uncertainty.

By August 2026, Germany must have at least one operational AI regulatory sandbox. The EU AI Act requires this across all member states, but Germany’s KI-MIG goes further by specifying the operational parameters.

SME and Startup Support

Beyond sandboxes, the KI-MIG mandates targeted training and networking for small and medium enterprises. The KI-Service-Desk specifically caters to companies that lack in-house regulatory expertise. This is not a minor detail: Germany’s Mittelstand accounts for 99% of all companies, and many of them are deploying AI without dedicated compliance teams.

Related: Skills Shortage and AI Agents: Why Germany's Missing Workers Is Not Just a Tech Problem

Penalties: The Numbers That Matter

The KI-MIG adopts the EU AI Act’s penalty framework directly. There are no German-specific discounts.

| Violation Type | Maximum Fine |
| --- | --- |
| Deploying prohibited AI practices (social scoring, manipulative systems) | EUR 35 million or 7% of global annual turnover |
| Non-compliance with high-risk AI obligations | EUR 15 million or 3% of global annual turnover |
| Providing incorrect information to regulators | EUR 7.5 million or 1% of global annual turnover |

For SMEs and startups, fines are capped at the lower of the two options (fixed amount vs. percentage). A company with EUR 10 million in revenue faces a maximum of EUR 300,000 for high-risk violations, not EUR 15 million. Still painful, but proportional.
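
The SME cap arithmetic above can be sketched in a few lines. This is an illustrative calculation based on the rule as described here, not legal advice; the function name and structure are assumptions:

```python
def max_fine_high_risk(annual_turnover_eur: float, is_sme: bool) -> float:
    """Maximum fine for non-compliance with high-risk AI obligations."""
    fixed_cap = 15_000_000                      # EUR 15 million
    turnover_cap = 0.03 * annual_turnover_eur   # 3% of global annual turnover
    if is_sme:
        # SMEs and startups: capped at the LOWER of the two options
        return min(fixed_cap, turnover_cap)
    # Larger companies: the higher of the two applies
    return max(fixed_cap, turnover_cap)
```

For the EUR 10 million revenue example, `max_fine_high_risk(10_000_000, is_sme=True)` yields EUR 300,000.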

The enforcement gap to watch: the KI-MIG passed through the Cabinet after processing over 1,000 amendment proposals. The final penalty structure survived intact, but the practical question is how aggressively the Bundesnetzagentur will enforce during the first year. German regulators historically favor warnings and remediation periods before levying fines. The GDPR rollout followed a similar pattern, with significant fines arriving 18-24 months after enforcement began.

What Companies Should Do Now

If you operate in Germany and deploy AI systems, here are concrete steps to take before August 2026.

1. Identify Your Regulator

Map each AI system to the correct authority. If it touches regulated financial activities, you report to BaFin. If it is embedded in a product covered by existing EU harmonization legislation (a medical device, machinery), the existing product regulator applies. Everything else goes to the Bundesnetzagentur. Knowing your regulator before enforcement starts means you can ask questions through the right channel.
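
The mapping above reduces to a short decision rule. A minimal sketch, assuming two yes/no facts per system (real classification needs legal review, and the parameter names are illustrative):

```python
def responsible_regulator(financial_activity: bool, harmonized_product: bool) -> str:
    """Rough mapping of an AI system to its German market surveillance
    authority, following the decision order described in the article."""
    if financial_activity:
        # Credit scoring, insurance pricing, fraud detection, algorithmic trading
        return "BaFin"
    if harmonized_product:
        # AI embedded in products under existing EU harmonization legislation,
        # e.g. medical devices or machinery
        return "existing sector regulator"
    # Default: the Bundesnetzagentur covers everything else
    return "Bundesnetzagentur"
```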

2. Use the KI-Service-Desk

The desk has been operational since July 2025. If you are unsure whether your AI system qualifies as high-risk, ask. Getting documented regulatory guidance before enforcement begins creates a defensible compliance position. The service is free, and answering exactly these questions is its whole point.

3. Apply for a Sandbox Slot

If you are building a novel AI application and regulatory classification is uncertain, apply to the KI-Reallabor program. The 30-day feedback guarantee means you get clarity faster than any external legal opinion can provide.

4. Audit Your AI Inventory

If you have not already mapped every AI system your company builds, buys, or deploys, start now. Include third-party AI embedded in SaaS tools, CRM systems, and analytics platforms. Many companies discover 3-5x more AI systems than they expected once they look beyond their own development teams.
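
A minimal inventory record for this audit might look like the following. The schema is an assumption for illustration, not a mandated format; the key point is tracking third-party systems and flagging anything not yet classified:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    name: str
    source: str                       # e.g. "in-house", "vendor", "embedded in SaaS"
    purpose: str
    risk_class: Optional[str] = None  # None until classified (e.g. via the KI-Service-Desk)
    regulator: Optional[str] = None   # BaFin, Bundesnetzagentur, or a sector regulator

inventory = [
    AISystemRecord("cv-screening", "vendor", "job application screening"),
    AISystemRecord("crm-lead-scoring", "embedded in SaaS", "sales prioritization"),
]

# Anything still unclassified is an open compliance question
unclassified = [s.name for s in inventory if s.risk_class is None]
```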

Related: AI Agent Sprawl: Why Half Your Agents Have No Oversight

5. Check Works Council Requirements

The KI-MIG does not change Germany’s co-determination laws, but AI deployment in the workplace triggers existing works council rights under the Betriebsverfassungsgesetz. If your AI system monitors employee performance, screens job applications, or automates decisions that affect working conditions, the works council has co-determination rights. Start that conversation before the compliance deadline adds pressure.

How Germany Compares to Other EU Member States

Germany is not the first to move. France designated CNIL (its data protection authority) and created a dedicated AI office within the Direction Générale des Entreprises. Italy assigned oversight to the Agenzia per l’Italia Digitale with support from data protection and market authorities. Spain established AESIA, a brand-new AI agency, in December 2023.

Germany’s approach stands out for two reasons. First, it explicitly avoids creating a new regulatory body. The “lean regulation” philosophy reuses the Bundesnetzagentur’s existing infrastructure, which should mean faster operational readiness. Second, the mandatory sandbox with a 30-day response commitment is more concrete than what most other member states have announced.

The risk is coordination. Distributing AI oversight across BNetzA, BaFin, the Bundeskartellamt, and state-level data protection authorities creates complexity. The KoKIVO coordination center is supposed to prevent regulatory fragmentation, but how well that works in practice remains to be seen.

Frequently Asked Questions

What is Germany’s KI-MIG?

The KI-MIG (KI-Marktüberwachungs- und Innovationsförderungsgesetz) is Germany’s national implementation of the EU AI Act. Approved by the Federal Cabinet on February 11, 2026, it designates the Bundesnetzagentur as Germany’s central AI regulator, assigns BaFin as the authority for financial-sector AI, and establishes regulatory sandboxes and a KI-Service-Desk for company support.

Who is Germany’s AI regulator under the KI-MIG?

The Bundesnetzagentur (Federal Network Agency) is Germany’s central AI regulator under the KI-MIG. It serves as market surveillance authority, notifying body, EU contact point, and complaints office. BaFin handles financial-sector AI, and existing sector regulators keep their domains for AI embedded in regulated products like medical devices or machinery.

What are the penalties under Germany’s KI-MIG?

The KI-MIG adopts the EU AI Act’s penalty tiers: up to EUR 35 million or 7% of global turnover for prohibited AI practices, EUR 15 million or 3% for high-risk system violations, and EUR 7.5 million or 1% for providing false information to regulators. SMEs and startups face proportionally lower caps.

When does the KI-MIG take effect?

The KI-MIG was approved by the Federal Cabinet on February 11, 2026 and still requires Bundestag and Bundesrat approval. The key compliance deadline is August 2, 2026, when high-risk AI obligations and transparency requirements become enforceable across Germany.

What are KI-Reallabore under the KI-MIG?

KI-Reallabore are regulatory sandboxes established by the Bundesnetzagentur where companies can test innovative AI applications under simplified conditions. They feature low barriers to entry, digitized procedures, and binding regulatory feedback within 30 days. Germany must have at least one operational sandbox by August 2026.


Germany’s KI-MIG is the most significant German AI regulation to date. We track compliance developments across Europe. Subscribe to stay informed.