Microsoft’s Defender Security Research Team published a top 10 list of Copilot Studio agent security risks in February 2026, and the punchline is sobering: none of the ten are exotic zero-day exploits. Every single one is a misconfiguration. Agents published without authentication. Agents sharing the maker’s credentials with every user. Agents sending emails to whoever the LLM decides. These are the defaults and shortcuts that ship agents to production with holes wide open.
The list comes with Microsoft Defender Advanced Hunting queries for each risk, making it one of the few vendor security advisories that hands you the detection tooling alongside the warning. If you run Copilot Studio in any capacity, this is your remediation checklist.
Access and Authentication Failures
Three of the ten risks deal directly with who can reach your agents and what credentials those agents use at runtime.
Unauthenticated Access
The most critical risk on the list. When authentication is turned off for testing and never re-enabled, or left in its default state because the maker assumed it was handled elsewhere, the agent becomes a public endpoint into your organizational data. Anyone with the URL can interact with it. Microsoft’s framing is direct: the agent “behaves like a public entry point into organizational data or logic.”
The Defender Advanced Hunting query for this is labeled “AI Agents: No Authentication Required” in Microsoft’s community hunting queries repository. Run it. If it returns results, those agents need to be quarantined immediately pending owner validation.
Maker Authentication Misuse
This one is subtle and easy to miss. When a Copilot Studio agent is configured with “author authentication,” it runs connected services using the maker’s credentials, not the invoking user’s credentials. Every user who interacts with that agent effectively operates with the maker’s permissions. If the maker is a senior admin, every user of that agent inherits senior admin access to the backend services.
The fix: enforce end-user authentication through Power Platform admin settings. For autonomous agents that need service accounts, create dedicated service identities with least-privilege scoping.
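As a triage aid, the maker-authentication audit can be sketched in Python against an agent inventory. The record fields (`auth_mode`, `maker_role`) and role names are hypothetical, not the actual Copilot Studio export schema; the point is the sorting logic, which surfaces author-authenticated agents whose maker holds a privileged role first.

```python
# Hypothetical inventory records -- field names and roles are illustrative,
# not the real Copilot Studio export schema.
AGENTS = [
    {"name": "hr-helper", "auth_mode": "author", "maker_role": "Global Admin"},
    {"name": "faq-bot", "auth_mode": "end_user", "maker_role": "Member"},
    {"name": "ops-agent", "auth_mode": "author", "maker_role": "Member"},
]

PRIVILEGED_ROLES = {"Global Admin", "Power Platform Admin"}

def flag_maker_auth(agents):
    """Return agents running on the maker's credentials, highest risk first."""
    flagged = [a for a in agents if a["auth_mode"] == "author"]
    # Agents whose maker holds a privileged role are the worst offenders:
    # every end user of that agent inherits the maker's privilege level.
    return sorted(flagged,
                  key=lambda a: a["maker_role"] in PRIVILEGED_ROLES,
                  reverse=True)

for agent in flag_maker_auth(AGENTS):
    print(agent["name"], "-", agent["maker_role"])
```

The same shape works over real data once it is exported from the Power Platform admin center or Defender.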
Broad Organizational Sharing
Publishing an agent to the entire organization sounds convenient until you consider that any user, including compromised accounts, can trigger its actions. Oversharing an agent that connects to sensitive data stores or has write permissions to production systems is functionally equivalent to handing everyone in the company a service account.
Microsoft recommends restricting sharing to role-based security groups and setting numerical limits on recipients per environment.
Data Exfiltration Vectors
Two risks on the list address how agents can become data leakage channels, particularly through email and knowledge source scoping.
Email-Based Exfiltration
Agents that use generative orchestration to send emails are inherently risky because the LLM determines recipients and content at runtime. In a successful indirect prompt injection attack, an attacker can instruct the agent to forward internal data to an external address. The attack surface expands when agents process external inputs: a poisoned document in SharePoint, a crafted email, or a manipulated calendar entry can all carry the injection payload.
The mitigation is blunt but effective: constrain email recipients to approved domain allowlists. Do not allow the orchestrator to freely choose recipients until policy-based controls mature.
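A domain allowlist check is simple enough to sketch directly. This is an illustrative Python guard, not a Copilot Studio feature; the allowed domains are placeholders, and in practice the check would sit in whatever layer validates the orchestrator's chosen recipients before send.

```python
# Assumed allowlist -- replace with your organization's approved domains.
ALLOWED_DOMAINS = {"contoso.com", "contoso.co.uk"}

def recipient_allowed(address: str) -> bool:
    """Accept a recipient only if its domain is on the allowlist."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_DOMAINS

def filter_recipients(recipients):
    """Fail closed: refuse the whole send if any recipient is external."""
    blocked = [r for r in recipients if not recipient_allowed(r)]
    if blocked:
        raise ValueError(f"blocked external recipients: {blocked}")
    return recipients
```

Failing the entire send, rather than silently dropping the external addresses, makes injection attempts visible in logs instead of partially succeeding.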
Knowledge Source Over-Scoping
When an agent’s knowledge sources are too broad or poorly curated, it can access and surface data far beyond what its task requires. An attacker who can interact with the agent through prompt manipulation can extract sensitive information from those over-scoped knowledge bases. This mirrors ASI07 (RAG Poisoning) in the OWASP agentic framework, but from the configuration side rather than the data poisoning side.
Scope knowledge sources to the minimum data the agent actually needs. Audit what data each agent can reach, not just what it is supposed to use.
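The "can reach versus supposed to use" audit reduces to a set difference. A minimal sketch, assuming you can enumerate both sets per agent (the `sp:` source identifiers below are made up for illustration):

```python
def over_scoped_sources(reachable: set, required: set) -> set:
    """Knowledge sources the agent can reach but its task does not need."""
    return reachable - required

# Hypothetical example: an HR FAQ agent that can also reach payroll
# and board-minutes sites it never legitimately needs.
reachable = {"sp:/sites/hr-policies", "sp:/sites/payroll", "sp:/sites/board-minutes"}
required = {"sp:/sites/hr-policies"}

print(sorted(over_scoped_sources(reachable, required)))
```

Anything in the difference is attack surface for prompt-manipulation extraction and should be removed from the agent's knowledge configuration.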
Infrastructure and Credential Risks
Three risks target the plumbing: how agents connect to external services, what credentials they carry, and what tooling they expose.
Unsafe HTTP Requests
Copilot Studio’s HttpRequestAction lets agents make raw HTTP calls. When a maker copies a sample request during testing or hits an internal endpoint for convenience, that request pattern can persist into production. Researchers at Tenable demonstrated that these HTTP actions could bypass SSRF protections and retrieve cloud instance metadata (IMDS), in one case obtaining tokens with read/write access to internal Cosmos DB resources.
The mitigation: enforce HTTPS on all endpoints, block non-standard ports, and prefer built-in connectors (which include identity validation and throttling) over raw HTTP requests.
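Those three rules can be expressed as a small endpoint validator. This is a hedged sketch of the policy, not Copilot Studio's own enforcement; the blocked-host set covers the link-local IMDS address (169.254.169.254) that the Tenable research targeted, and real deployments would extend it.

```python
from urllib.parse import urlparse

# Link-local instance-metadata address used by Azure and other clouds.
BLOCKED_HOSTS = {"169.254.169.254"}

def validate_endpoint(url: str) -> None:
    """Raise ValueError if the URL violates the HTTP-action policy."""
    parts = urlparse(url)
    if parts.scheme != "https":
        raise ValueError("only HTTPS endpoints are allowed")
    if parts.port not in (None, 443):
        raise ValueError("non-standard port blocked")
    if parts.hostname in BLOCKED_HOSTS:
        raise ValueError("instance metadata endpoint blocked")
```

A hostname denylist alone is not a complete SSRF defense (redirects and DNS rebinding can evade it), which is another argument for preferring the built-in connectors.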
Hard-Coded Credentials
This risk is exactly what it sounds like: API keys, tokens, and connection strings embedded directly in agent topic definitions or action configurations. The detection query hunts for literal secrets in agent definitions. The fix is table-stakes secret management: migrate everything to Azure Key Vault, use environment-referenced variables, and implement rotation policies.
Unmanaged MCP Tools
Model Context Protocol (MCP) tools give agents custom access paths between the model context and external systems. Because MCP tools are relatively new and potentially undocumented, they can create hidden action channels outside expected governance controls. Microsoft’s position: gate all MCP tool deployments with a security review, require explicit documentation of tool behavior and permissions, and run the “MCP Tool Configured” hunting query to inventory what is connected.
Lifecycle and Orchestration Gaps
The final two risks address what happens when agents are forgotten or deployed without operational guardrails.
Dormant and Orphaned Agents
Dormant agents are published agents that have not been used or reviewed for 30+ days. Orphaned agents are those whose owners have left the organization or had accounts disabled. Both represent shadow IT at its most dangerous: they retain active permissions, connected credentials, and network access, but nobody is watching them. The “Published Dormant 30d” and “Orphaned Agents with Disabled Owners” Advanced Hunting queries surface these artifacts.
The remediation pattern: enforce named ownership for every agent, conduct quarterly certification reviews, and auto-quarantine agents when their owner’s account is disabled.
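The dormant/orphaned triage logic is mechanical once you have an inventory with last-use timestamps and owner status. A sketch over hypothetical inventory rows (field names assumed; real data would come from the Defender queries or admin APIs):

```python
from datetime import datetime, timedelta, timezone

NOW = datetime(2026, 3, 1, tzinfo=timezone.utc)  # frozen clock for the example
DORMANT_AFTER = timedelta(days=30)

# Hypothetical inventory rows -- field names are illustrative.
agents = [
    {"name": "expense-bot", "last_used": NOW - timedelta(days=45), "owner_enabled": True},
    {"name": "it-helpdesk", "last_used": NOW - timedelta(days=2), "owner_enabled": False},
    {"name": "faq-bot", "last_used": NOW - timedelta(days=5), "owner_enabled": True},
]

def triage(agents):
    dormant = [a["name"] for a in agents if NOW - a["last_used"] >= DORMANT_AFTER]
    # A disabled owner makes the agent an auto-quarantine candidate
    # regardless of how recently it was used.
    quarantine = [a["name"] for a in agents if not a["owner_enabled"]]
    return {"dormant": dormant, "quarantine": quarantine}

print(triage(agents))
```

Note that recency does not save an orphaned agent: an active agent with a disabled owner is arguably the more urgent case, since it is still doing work nobody supervises.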
Generative Orchestration Without Instructions
When a Copilot Studio agent uses generative orchestration but lacks explicit instruction sets, the LLM has maximum freedom and minimum guardrails. Without constraints, the orchestrator cannot limit its output scope, making the agent significantly more vulnerable to prompt injection and unintended behavior. Think of it as deploying an agent with a blank system prompt: technically functional, practically uncontrollable.
Every orchestration must include clear, explicit instructions that define what the agent can and cannot do, who it can contact, and what data it can access.
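That requirement lends itself to a lint-style check in a deployment pipeline. This is a hypothetical sketch, not a Copilot Studio API: the field names and required sections are assumptions, and the string matching is deliberately crude to keep the idea visible.

```python
# Assumed policy: generative-orchestration agents must document these
# sections in their instructions. Section names are hypothetical.
REQUIRED_SECTIONS = ("allowed actions", "allowed recipients", "data scope")

def lint_instructions(agent: dict) -> list[str]:
    """Return policy violations for a generative-orchestration agent."""
    if not agent.get("generative_orchestration"):
        return []  # classic-orchestration agents are out of scope here
    text = (agent.get("instructions") or "").lower()
    if not text.strip():
        return ["missing instructions entirely"]
    return [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in text]
```

Run as a publish gate, a check like this stops the "blank system prompt" agent from ever reaching production.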
Mapping to the OWASP Agentic Framework
Microsoft’s top 10 is not a competitor to the OWASP Top 10 for Agentic Applications. It is a platform-specific implementation guide that maps naturally to the OWASP categories. Unauthenticated access and maker authentication misuse correspond to ASI03 (Identity Abuse). Email exfiltration maps to ASI01 (Agent Goal Hijack) when triggered by prompt injection. Knowledge source over-scoping aligns with ASI07 (RAG Poisoning). The value of Microsoft’s list is specificity: instead of abstract risk categories, you get “run this query, check this setting, change this config.”
The CoPhish attack demonstrated exactly why platform-specific guidance matters. Researchers used publicly accessible Copilot Studio agents hosted on trusted Microsoft domains to capture user access tokens through OAuth flows, gaining access to emails, chats, calendars, and OneNote data. The agents were public by default. Generic security advice would say “enforce authentication.” Microsoft’s specific guidance tells you which admin setting to change and which Defender query to run to find every agent with the same exposure.
Prioritized Remediation Order
If you are staring at this list wondering where to start, Microsoft’s implied priority is clear from the severity of each risk:
- Unauthenticated agents first. These are internet-facing attack surfaces. Run the detection query today.
- Maker-authenticated agents. Every one of these is a privilege escalation path.
- Orphaned agents. If the owner is gone, nobody will notice when the agent is compromised.
- Email-capable agents with unconstrained recipients. These are data exfiltration channels waiting for a prompt injection.
- HTTP request actions. The Tenable SSRF research proves these are exploitable.
- Everything else. Broad sharing, hard-coded credentials, MCP tools, instruction-less orchestration, and over-scoped knowledge sources.
The broader pattern here is that agent security is not about preventing sophisticated attacks. It is about fixing configurations that should never have shipped to production.
Frequently Asked Questions
What are the top Copilot Studio agent security risks?
Microsoft identified 10 Copilot Studio agent misconfigurations: unauthenticated access, broad organizational sharing, maker authentication misuse, unsafe HTTP requests, email-based exfiltration, hard-coded credentials, unmanaged MCP tools, missing orchestration instructions, dormant agents, and orphaned agents without active owners.
How do I detect Copilot Studio agent misconfigurations?
Microsoft provides Advanced Hunting queries in Microsoft Defender for each of the 10 risks. Key queries include “AI Agents: No Authentication Required” for unauthenticated agents, “Published Dormant 30d” for unused agents, and “Orphaned Agents with Disabled Owners” for unmanaged agents. These are available in Microsoft’s community hunting queries repository.
What is maker authentication misuse in Copilot Studio?
When a Copilot Studio agent uses “author authentication,” it connects to backend services using the maker’s credentials instead of the invoking user’s credentials. This means every user inherits the maker’s permission level, creating a privilege escalation path. The fix is to enforce end-user authentication through Power Platform admin settings.
How does the Copilot Studio top 10 compare to OWASP Top 10 for Agentic Applications?
Microsoft’s list is platform-specific rather than a generic taxonomy. It maps to OWASP categories (unauthenticated access maps to ASI03 Identity Abuse, email exfiltration to ASI01 Agent Goal Hijack, knowledge over-scoping to ASI07 RAG Poisoning) but provides concrete detection queries and remediation steps specific to Copilot Studio and Microsoft Defender.
What was the CoPhish attack on Copilot Studio agents?
CoPhish was a research attack where security researchers exploited publicly accessible Copilot Studio agents hosted on trusted Microsoft domains to capture user access tokens through OAuth flows. This gave them access to emails, chats, calendars, and OneNote data. The agents were public by default, demonstrating the danger of unauthenticated or broadly shared agents.
