Photo by Brett Sayles on Pexels Source

Germany’s BSI (Bundesamt für Sicherheit in der Informationstechnik) is building dedicated security criteria for AI agents, and enterprises that ignore the early signals will be retrofitting compliance into production systems under time pressure. The agency has published guidance on LLM evasion attacks, co-authored a Zero Trust framework for LLM systems with France’s ANSSI, and warned publicly that AI agents with extended permissions are the fastest-growing attack surface in enterprise IT.

BSI President Claudia Plattner put it bluntly: “Wer seine Angriffsflächen nicht schützt, wird Opfer.” Those who don’t protect their attack surfaces become victims. With NIS2 now in force in Germany and the EU AI Act’s high-risk rules arriving in August 2026, BSI’s AI agent security push is not theoretical guidance. It is the compliance baseline that auditors will check against.

Related: AI Agent Security and Governance: The Complete Enterprise Framework

BSI’s Core Position: AI Agents Are Non-Human Identities That Need Governing

Most enterprises treat AI agents as software features. BSI treats them as autonomous actors with their own identity, permissions, and blast radius. That distinction changes everything about how you architect security.

Each AI agent creates what security researchers call a non-human identity (NHI): it needs API keys, authentication tokens, and machine-to-machine credentials to operate. According to analysis by Kiteworks, non-human identities outnumber human users by 10:1 to 100:1 in most enterprise environments. Traditional IAM and PAM systems were never designed for autonomous agents that chain actions across multiple systems, operate across cloud boundaries, and escalate impact in seconds.

BSI’s published recommendations for LLM-based systems directly apply to agents and center on three principles:

Limit access rights ruthlessly. Agents should operate with the minimum permissions required for each specific task, not inherit broad user-level access. BSI’s joint paper with ANSSI calls for “limiting access rights as needed” as the first design principle, not an afterthought.

Make decisions transparent. Every action an agent takes must be logged and auditable. BSI requires that “decision-making be made transparent” so that incident responders can reconstruct what happened when (not if) something goes wrong.

Keep humans in the loop for critical decisions. BSI’s position is that “critical decisions happen under human supervision.” For AI agents, this means approval gates before destructive operations, data exfiltration-capable actions, or cross-system permission escalation.

The Specific Threats BSI Has Identified

BSI is not issuing abstract warnings. Their 2025 Lagebericht documented 119 new vulnerabilities per day (a 24% increase year-over-year), 950 ransomware attacks against German institutions, and noted that only 10% of German enterprises use AI for defense, while attackers are adopting it rapidly. Against that backdrop, BSI has flagged several AI agent-specific attack vectors.

Prompt Injection in Business Data

This is the attack BSI’s November 2025 publication on evasion attacks against LLMs addresses head-on. Attackers embed malicious instructions in ordinary business documents, emails, or database records. When an AI agent processes that data, it treats the embedded instructions as legitimate. CyberArk documented a real case where attackers placed prompts in shipping address fields that tricked a financial services AI agent into accessing invoicing tools and extracting vendor bank details.

BSI’s recommended countermeasures: secure system prompts that resist override attempts, malicious content filtering at every ingestion point, sandboxed execution environments, RAG pipelines restricted to trusted data sources, and explicit user confirmation before any agent executes functions with real-world impact.

Compromised Agent Skills and Plugins

BSI has specifically warned about agent “Skills”, the modular interaction components that extend what an AI agent can do. Open sharing of Skills through marketplace platforms creates the same supply chain risks that plagued npm and PyPI. A malicious Skill can contain backdoors, data exfiltration logic, or privilege escalation code that activates only after installation.

This is not hypothetical. MCP server vulnerabilities already demonstrated in 2025 that agent tool integrations are a viable attack surface, and researchers have documented self-replicating malicious packages that exploit stolen credentials.

Related: MCP Under Attack: CVEs, Tool Poisoning, and How to Secure Your AI Agent Integrations

Non-Human Identity Sprawl

Every AI agent deployment adds machine identities that accumulate permissions over time. 48% of KRITIS operators (Germany’s critical infrastructure operators) lack adequate attack detection systems. When an agent’s credentials are compromised, the blast radius depends entirely on how many permissions that identity has accumulated. Without active governance, agent permissions tend to grow monotonically: teams add access for new use cases but rarely revoke old ones.

How BSI’s Rules Connect to NIS2 and the EU AI Act

German enterprises running AI agents now face three overlapping regulatory frameworks, and BSI sits at the intersection of all three.

NIS2: The Immediate Deadline

Germany’s NIS2 implementation took effect in December 2025. Registration through BSI’s portal was required by April 2026. Roughly 29,000 essential and critical entities are affected. The key requirements that hit AI agent deployments:

  • 24-hour incident reporting. If an AI agent causes or enables a security breach, your organization must issue an early warning to BSI within 24 hours and a detailed notification within 72 hours.
  • Board-level liability. NIS2 makes board members personally liable for cybersecurity compliance failures. “We didn’t know our AI agent had access to customer data” is not a defense.
  • Supply chain security. You must audit and enforce security standards with all vendors, including AI agent platform providers and Skill/plugin suppliers.

BSI President Plattner has called the low NIS2 awareness among German enterprises “fatal”. Only 50% of affected organizations are aware of their obligations.

EU AI Act: High-Risk Rules Arriving August 2026

The German AI Market Surveillance and Innovation Act (KI-MIG), adopted by the German Cabinet on February 11, 2026, designates the Bundesnetzagentur as the primary AI market surveillance authority. BSI exercises a transitional surveillance role. AI agents used in HR, finance, critical infrastructure, or law enforcement will likely fall under high-risk classification, triggering extensive obligations: risk management systems, technical documentation, logging, transparency, human oversight, and cybersecurity requirements.

The practical impact: if your AI agent makes decisions that affect people (screening resumes, approving loans, flagging suspicious transactions), you need a documented risk management system, continuous monitoring, and the ability to demonstrate compliance to regulators who can examine your source code and enter your premises.

Related: Germany's KI-MIG: How the EU AI Act Becomes German Law

BSI’s AIC4: The Technical Compliance Framework

BSI’s AI Cloud Service Compliance Criteria Catalogue (AIC4) extends their C5 cloud security framework with seven AI-specific criteria areas covering the full AI lifecycle. While currently voluntary, AIC4 is the most detailed technical framework available from a German authority, and auditors increasingly reference it. For AI agent deployments in regulated industries (banking via BaFin, medical via BfArM), AIC4 criteria are effectively mandatory through sector-specific regulation.

What German Enterprises Should Do Now: BSI’s Six-Point Checklist

Based on BSI’s published guidance and the agency’s various position papers, here is the concrete action list for enterprises running AI agents.

1. Update your risk analysis to include AI agent threat scenarios. Traditional risk assessments don’t cover prompt injection, identity sprawl, or autonomous action chains. Add these to your risk register now.

2. Deploy agents in isolated execution environments. BSI explicitly recommends sandboxing and isolated systems. Run agents in containers or micro-VMs with strict network segmentation. Never give an agent direct access to production databases or internal networks without a security gateway.

3. Implement identity governance for every agent. Register each AI agent as a managed non-human identity. Enforce least-privilege, time-bound permissions. Deploy Identity Threat Detection and Response (ITDR) to catch credential misuse.

4. Audit your agent supply chain. Review every Skill, plugin, MCP server, and third-party integration your agents use. Apply the same scrutiny you would to a new software vendor. BSI specifically warns against unvetted agent Skills from open marketplaces.

5. Train your teams on AI-specific attack methods. BSI’s joint warning with CISA lists AI-specific dangers as the first thing staff must understand. Run tabletop exercises that simulate prompt injection, agent credential theft, and autonomous action gone wrong.

6. Maintain centralized audit logs. Every agent action, every API call, every permission change must be logged. NIS2’s 24-hour reporting requirement means you need forensic-ready logs, not just application-level metrics. Gartner recommends centralized audit trails capturing every data interaction as a baseline for AI agent governance.

Related: EU AI Act Compliance: What You Actually Need to Do Before August 2026

Frequently Asked Questions

What security rules is BSI requiring for AI agents?

BSI requires AI agents to operate under Zero Trust architecture with minimum necessary permissions, sandboxed execution environments, transparent decision logging, and human oversight for critical actions. They also recommend secure system prompts, input filtering, and treating every agent as a managed non-human identity with governed API access.

How does NIS2 affect AI agent deployments in Germany?

NIS2 requires organizations to report AI agent-related security incidents within 24 hours, makes board members personally liable for cybersecurity failures, and mandates supply chain security audits that extend to AI agent platform providers and plugin suppliers. Roughly 29,000 German entities are affected.

What is BSI’s AIC4 framework?

AIC4 (AI Cloud Service Compliance Criteria Catalogue) extends BSI’s C5 cloud security framework with seven AI-specific criteria areas covering the full AI lifecycle. While currently voluntary, it is the most detailed German technical framework for AI compliance and is increasingly treated as mandatory in regulated industries like banking and healthcare.

What are the biggest AI agent security risks according to BSI?

BSI identifies three primary risks: prompt injection through business data (attackers embedding malicious instructions in documents or database fields), compromised agent Skills and plugins from open marketplaces, and non-human identity sprawl where agent permissions accumulate unchecked across enterprise systems.

When do the EU AI Act high-risk rules take effect for AI agents in Germany?

The EU AI Act high-risk rules become effective in August 2026 and August 2027. Germany’s KI-MIG designates the Bundesnetzagentur as the primary surveillance authority. AI agents used in HR, finance, critical infrastructure, or law enforcement will likely be classified as high-risk, requiring documented risk management systems, continuous monitoring, and demonstrable compliance.