Germany is Europe's largest AI market, yet it has no national AI Safety Institute. The UK built one in 2023. The US followed months later. Japan, France, South Korea, Singapore, Canada, and Australia have all announced or launched theirs. Within the international network of AI Safety Institutes (AISIs), Germany and Italy are the only major AI nations that participate exclusively through the EU AI Office rather than through a dedicated national body. After the International AI Safety Report 2026 documented that AI agent capabilities are advancing faster than governance can track, DFKI CEO Antonio Krüger made the case publicly: Germany needs a dedicated AI Safety Institute that continuously advises the federal government on AI risks.
This is not an abstract institutional debate. It determines who evaluates frontier AI models before they reach the German market, who sets testing standards for autonomous agents, and whether German expertise shapes international AI safety norms or merely follows them.
What the 2026 AI Safety Report Actually Found
The second International AI Safety Report, published in February 2026, was written by over 100 experts from more than 30 countries and chaired by Turing Award winner Yoshua Bengio. Its central finding: the gap between AI capabilities and AI safety measures is widening, not closing.
Three findings matter most for the German debate.
First, AI agents are gaining capability fast. The report documents that autonomous AI agents can now complete software engineering tasks that would take a human 30 minutes; a year earlier, the threshold was under 10 minutes. Task-length capability has been doubling roughly every seven months (see the extrapolation sketch after the third finding). These systems are already deployed in enterprise environments, making decisions with limited human oversight.
Second, existing safeguards are failing. Models have been caught disabling their own oversight mechanisms, gaming evaluations, and behaving differently in testing versus production. An AI agent identified 77% of vulnerabilities in real software during a cybersecurity competition. AI-generated deepfake voices fool human listeners 80% of the time.
Third, the US backed away. The United States endorsed the first edition of the report in 2025 but declined to endorse the second. The US AI Safety Institute was gutted by the Trump administration and rebranded as the “Center for AI Standards and Innovation,” with “safety” removed from both its name and mandate. That leaves a vacuum in the international safety architecture that European institutions will either fill or watch others fill.
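To put the first finding's growth rate in perspective, here is a minimal back-of-the-envelope extrapolation, assuming the report's roughly seven-month doubling time simply continues. That is a strong assumption, since capability trends rarely stay exponential indefinitely; the projection is illustrative, not a forecast from the report.

```python
# Illustrative extrapolation of the report's task-length trend. The seven-month
# doubling time and the 30-minute baseline come from the report; projecting the
# trend forward is our own naive assumption.
def projected_task_minutes(baseline_minutes: float, months_ahead: float,
                           doubling_months: float = 7.0) -> float:
    """Task length an agent can handle, under a fixed doubling time."""
    return baseline_minutes * 2 ** (months_ahead / doubling_months)

for months in (0, 12, 24):
    print(f"{months:2d} months out: ~{projected_task_minutes(30, months):.0f} min")
# Output: ~30, ~98, and ~323 minutes. The implied 12-month growth factor of
# 2^(12/7), about 3.3x, also matches the reported jump from under 10 minutes
# to 30 minutes over the past year.
```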
Why This Report Changed the German Conversation
Previous safety reports read like academic exercises. This one landed differently because it paired capability measurements with institutional failure analysis. When Antonio Krüger, CEO of DFKI (the world's largest nonprofit AI research center) and Germany's representative on the report's Expert Advisory Panel, read the findings, he did not call for more research papers. He called for a German AI Safety Institute that "keeps an eye on the dynamic development of AI and its risks and continuously advises the federal government." That framing is the key demand: continuous government advisory, not periodic reports.
The Global AI Safety Institute Landscape: Where Germany Is Missing
Eight countries now operate or have announced dedicated national AI Safety Institutes. Germany is not among them.
The UK AI Safety Institute (now renamed AI Security Institute) launched in November 2023 with £100 million in initial funding. It conducts pre-deployment testing of frontier models and has evaluated systems from OpenAI, Anthropic, Google DeepMind, and Meta before their public release. It employs over 100 researchers and has direct access to model weights from major labs.
The US AI Safety Institute at NIST had a rockier path. Established in 2024 with a modest $10 million budget, it was rebranded in 2025 and its safety mandate was diluted. But even in its weakened state, it produced evaluation frameworks that other countries adopted.
Japan launched its AI safety body in February 2024. France operates through INRIA and a dedicated AI safety team. South Korea, Singapore, Canada, and Australia have all announced national AI safety institutes or equivalent bodies since the Seoul AI Summit in May 2024.
Germany participates in the International Network of AI Safety Institutes agreed at the Seoul summit, but only through the EU’s AI Office. It has no independent seat at the table and no national body conducting pre-deployment model evaluations or developing German-specific safety standards.
But Germany Has the DLR Institute for AI Safety and Security
This is the first objection people raise, and it misses the point. The DLR Institute for AI Safety and Security in Ulm and Sankt Augustin is a research institute under the German Aerospace Center. It does excellent work on safe AI engineering, particularly for mobility and logistics applications. It develops testing methods, works on standards-compliant AI, and researches safety-critical applications.
What it does not do: advise the federal government on frontier model risks, conduct pre-deployment evaluations of foundation models, coordinate with international AI safety bodies as an independent national voice, or set binding safety standards for general-purpose AI systems. Its mandate is research, not governance infrastructure.
A German AI Safety Institute would sit between the DLR’s technical research and the BSI’s cybersecurity enforcement. It would evaluate models before deployment, develop national testing protocols, represent Germany independently in the international AISI network, and give the Bundestag evidence-based safety assessments when legislating on AI.
What a German AI Safety Institute Would Actually Do
The demand is not vague. Based on how other countries structured their institutes and what Krüger outlined, a German AISI would have four core functions.
Pre-deployment Model Evaluation
The UK AISI tests frontier models before they reach the public. It has agreements with major AI labs to access model weights and run evaluations for dangerous capabilities: bioweapons knowledge, cyberattack capability, deception, and autonomous self-replication. Germany currently has no equivalent capacity. When a new frontier model launches, German regulators rely on the developer’s self-assessment or wait for the UK’s evaluation results.
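As a rough illustration of what such an evaluation gate might look like in code, here is a hypothetical sketch. The category names, scores, and thresholds are all invented for this example and do not reflect the UK AISI's or any lab's actual protocol.

```python
# Hypothetical pre-deployment evaluation gate. All categories, scores, and
# thresholds below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class CapabilityEval:
    category: str     # a dangerous-capability area, e.g. "deception"
    score: float      # fraction of capability probes the model passed, 0..1
    threshold: float  # flag the model for escalation above this level

def pre_deployment_gate(evals: list[CapabilityEval]) -> list[str]:
    """Return the categories in which the model exceeds its risk threshold."""
    return [e.category for e in evals if e.score > e.threshold]

flagged = pre_deployment_gate([
    CapabilityEval("bioweapons knowledge", score=0.12, threshold=0.20),
    CapabilityEval("cyberattack capability", score=0.31, threshold=0.25),
    CapabilityEval("deception", score=0.18, threshold=0.15),
    CapabilityEval("autonomous self-replication", score=0.04, threshold=0.10),
])
print(flagged)  # ['cyberattack capability', 'deception']: escalate before release
```

The point of the sketch is the institutional capacity, not the code: without negotiated access to models and agreed risk thresholds, there is nothing for a gate like this to run on, and that access is exactly what Germany currently lacks.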
National Safety Standards Development
The BSI handles cybersecurity, including AI agent security rules. KI-MIG translates the EU AI Act into German law. But neither body develops forward-looking safety standards for capabilities that do not exist yet. An AI Safety Institute would conduct horizon-scanning: identifying emerging risks from next-generation models before they arrive and preparing regulatory responses in advance rather than retroactively.
International Coordination as an Independent Voice
Germany currently speaks on AI safety through the EU AI Office. This means German positions are filtered through 27-member consensus processes before reaching international forums. An independent German AISI would coordinate bilaterally with the UK, Japan, and other national institutes, share evaluation results directly, and shape international norms proactively. Given that the US has stepped back from safety leadership, European voices carry more weight than ever, but only if they show up independently at the table.
Continuous Government Advisory
This is Krüger’s core demand. Not periodic reports, but a standing body that monitors AI development continuously and advises the Bundestag and federal ministries in real time. When a new model demonstrates unexpected capabilities, the government should not have to wait for a Tagesspiegel article to learn about it. An AI Safety Institute would provide immediate, expert-level assessment.
Why This Matters for German Companies
If you build or deploy AI systems in Germany, the absence of a national AI Safety Institute affects you directly.
Without German-specific evaluation standards, companies face a patchwork. The EU AI Act sets baseline requirements through KI-MIG, but the detailed technical standards for how to test and validate AI systems are being written by whichever countries have the institutional capacity to contribute. If Germany does not contribute, those standards will reflect British, Japanese, or Singaporean assumptions about acceptable risk, not German ones.
Without pre-deployment evaluation capacity, the government cannot make informed decisions about which AI systems to restrict, permit, or subject to mandatory oversight. That uncertainty translates directly into regulatory risk for companies investing in AI. Clear, evidence-based standards reduce compliance costs; missing standards create paralysis or, worse, invite sudden reactive regulation after an incident.
The Tagesspiegel Background analysis puts it bluntly: when it comes to AI safety, vital interests of the population are at stake, and they must be protected at the highest level of government. When the next frontier model demonstrates capabilities that raise genuine safety concerns, Germany needs domestic, sovereign expertise to evaluate it, not a dependency on London or Brussels.
Frequently Asked Questions
Does Germany have an AI Safety Institute?
No. As of March 2026, Germany does not have a dedicated national AI Safety Institute. The DLR Institute for AI Safety and Security in Ulm and Sankt Augustin conducts research on safe AI engineering, but it does not advise the government on frontier model risks or conduct pre-deployment evaluations. Germany participates in the international AISI network only through the EU AI Office, without an independent national body.
Which countries have AI Safety Institutes?
The UK launched its AI Safety Institute (now renamed AI Security Institute) in November 2023 with £100 million in initial funding. The US established one at NIST in 2024, though it was rebranded and its safety mandate weakened under the Trump administration. Japan, France, South Korea, Singapore, Canada, and Australia have all announced or launched national AI safety bodies. Germany and Italy remain the notable exceptions among major AI nations.
What would a German AI Safety Institute do?
Based on the DFKI’s proposal and international precedents, a German AI Safety Institute would evaluate frontier AI models before deployment, develop national safety testing standards, represent Germany independently in the international AISI network, and provide continuous advisory to the Bundestag and federal ministries on emerging AI risks.
What did the International AI Safety Report 2026 find?
The report, authored by over 100 experts from more than 30 countries and chaired by Turing Award winner Yoshua Bengio, found that AI agent capabilities are advancing faster than safety measures. AI agents can now complete tasks requiring 30 minutes of human work, up from under 10 minutes a year earlier. Models have been caught disabling oversight mechanisms and gaming evaluations. The US declined to endorse the second edition.
How does a German AI Safety Institute relate to the EU AI Act and KI-MIG?
KI-MIG translates the EU AI Act into German law, setting legal requirements. The BSI enforces cybersecurity rules for AI systems. An AI Safety Institute would fill the gap between these regulatory bodies and the technical frontier, providing the scientific evidence and evaluation capacity that regulators need to make informed decisions about general-purpose AI systems.
