
VS Code 1.109 ships with native support for running Claude, Codex, and Copilot agents simultaneously inside the same editor. Not as extensions that wrap an API. As first-class agent runtimes with their own session management, parallel subagent execution, and tool sandboxing. The January 2026 release transforms the most popular code editor in the world from a tool with AI features into an orchestration hub where multiple AI agents collaborate on your codebase, each running in isolated contexts with constrained permissions.

This matters because the bottleneck in AI-assisted development is no longer “which model is best.” It is how you coordinate multiple models, each with different strengths, across a codebase that exceeds any single model’s context window. VS Code 1.109 is Microsoft’s answer: make the IDE the operating system for AI agents.

Related: GPT-5.3-Codex vs. Claude Opus 4.6: The Coding Agent Wars

Subagents: Context Isolation That Actually Works

The headline feature in 1.109 is subagents. Before this release, every AI interaction in VS Code shared a single conversation context. Ask the agent to research a dependency, explore an API, and refactor a module, and all three tasks consumed the same token budget. Context windows filled up. Results degraded.

Subagents solve this by running in completely isolated context windows. Your main agent delegates a task, the subagent executes it independently, and only the final result returns to the primary context. The intermediate exploration, the dead ends, the iterative refinement: all of it stays contained.

The practical impact is significant. A subagent researching your project’s test patterns does not consume any of the context budget your main agent needs for the actual refactoring work. VS Code can also run multiple subagents in parallel, splitting independent tasks across simultaneous execution threads.
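The isolation model is easy to picture with a toy sketch. This is plain Python with no VS Code APIs, and every function name is ours: each "subagent" accumulates its own throwaway context, independent tasks fan out in parallel, and the orchestrator only ever receives the final summaries.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    # Each subagent gets its own context list; the intermediate
    # exploration lives here and never leaves this function.
    context = [f"task: {task}"]
    context.append(f"exploring approaches for {task}")  # dead ends stay contained
    return f"result of {task}"  # only the final result escapes

def main_agent(tasks: list[str]) -> list[str]:
    # Independent tasks run on parallel threads; the main context
    # only ever sees the returned summaries, not the exploration.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_subagent, tasks))

results = main_agent(["research test patterns", "audit dependencies"])
```

The point of the sketch is the data flow, not the threading: the main agent's "context" grows by two short strings regardless of how much work each subagent did internally.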

Configuring Subagent Behavior

Custom agents can control subagent access through frontmatter attributes in .github/copilot-agents/ prompt files:

```markdown
---
name: TDD
tools: ['agent']
agents: ['Red', 'Green', 'Refactor']
---
Write tests, then implement, then clean up.
```

The agents property accepts a list of specific agent names, * for all available agents, or [] to disable subagent invocation entirely. Two additional properties control visibility:

  • user-invokable: false hides the agent from the dropdown, making it available only as a subagent
  • disable-model-invocation: true prevents other agents from calling this agent
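Combined, these attributes can describe an agent that exists only to be delegated to. As a sketch in the same frontmatter style, with the name, tool list, and prompt body all ours for illustration:

```markdown
---
name: Reviewer
tools: ['search']
agents: []
user-invokable: false
---
Review the supplied diff and report issues. Never edit files.
```

This agent never appears in the dropdown, cannot spawn subagents of its own, and can only be invoked by another agent that lists it.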

This is the kind of permission model that matters once you have more than one agent operating in your workspace. Without it, a general-purpose agent could invoke a deployment agent, which could invoke a database migration agent. Constraining which agents can call which other agents prevents cascading automations that nobody asked for.

Related: AI Agent Frameworks Compared: LangGraph, CrewAI, AutoGen

Three Execution Modes: Local, Background, Cloud

VS Code 1.109 formalizes three distinct modes for running agents, each with different tradeoffs:

Local agents run on your machine inside VS Code with full tool access. They see your workspace, your terminal, your extensions. Best for interactive work where you want to steer the agent in real time.

Background agents run via CLI on your local machine but outside VS Code’s UI. They work in Git worktrees to avoid conflicting with your active workspace. Ideal for well-defined tasks you want to fire off and review later.

Cloud agents run on remote infrastructure and integrate with GitHub’s PR workflow. They cannot access VS Code’s built-in tools or your local runtime, but they produce pull requests that your team can review asynchronously.

The key design choice: you can hand off work between these modes mid-conversation. Start a task locally to explore the problem, switch to a background agent for the implementation, then push to a cloud agent for the PR. VS Code carries over the full conversation history when you switch.

Claude and Codex as First-Class Citizens

The most striking change in 1.109 is that Claude and Codex agents are not wrapped in VS Code’s existing Copilot abstraction. They run using their respective native SDKs, meaning Claude Agent uses Anthropic’s official harness with the same prompts, tools, and architecture as standalone Claude Agent deployments.

Codex runs locally with a Copilot Pro+ subscription. Claude Agent is in public preview. Both operate within the unified Agent Sessions view, so you manage all your agents from one interface regardless of which model powers them.

This creates an interesting dynamic: you can use Codex for quick interactive tasks (it runs 25% faster than its predecessor) and Claude for complex autonomous refactoring (it has a 1M token context window). Same IDE, same session management, different models for different jobs.

Agent Skills and Orchestrations

Agent Skills, which shipped in preview last year, are now generally available in 1.109. Skills are packaged bundles of instructions that teach agents domain-specific behavior: testing strategies, API design conventions, performance optimization patterns.

They work through the chatSkills contribution point, which means extension authors can distribute skills via the VS Code marketplace. Your team can publish internal skills that encode your specific conventions, and every agent in every developer’s workspace inherits them.
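As a rough sketch of what a skill-publishing extension might declare, only the `chatSkills` contribution point name comes from the release; the property names and paths below are our assumptions about the schema, not documented API:

```jsonc
// package.json of a hypothetical skills extension — schema is illustrative
{
  "contributes": {
    "chatSkills": [
      { "name": "api-conventions", "path": "./skills/api-conventions" }
    ]
  }
}
```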

The practical application here is consistency. When ten developers on your team each use agent mode with different prompting habits, you get ten different coding styles. Skills standardize the output. Configure them via chat.agentSkillsLocations or use the /init command, which analyzes your project structure and generates workspace-specific instructions automatically.
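Pointing agents at a team-maintained skills folder is a settings change. The setting name comes from the release; the folder path and the assumption that it accepts a list are ours:

```jsonc
// .vscode/settings.json — illustrative value
{
  "chat.agentSkillsLocations": [".github/skills"]
}
```

Checking that folder into the repository means every developer's agents pick up the same conventions without per-machine setup.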

The Conductor Pattern

Community projects like Copilot Orchestra demonstrate what becomes possible with subagent orchestration. A “conductor” agent breaks a task into phases (planning, implementation, code review), delegates each phase to a specialized subagent, and synthesizes the results. Each subagent uses the tools and instructions appropriate to its role.
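In the frontmatter style shown earlier, a minimal conductor might look like this. The phase agent names and prompt body are illustrative, not taken from Copilot Orchestra:

```markdown
---
name: Conductor
tools: ['agent']
agents: ['Planner', 'Implementer', 'Reviewer']
---
Break the task into phases, delegate each phase to the matching
subagent, and synthesize their results into a single answer.
```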

This pattern is essentially the same architecture that Anthropic used to build a C compiler with 16 parallel Claude agents, but running inside your IDE rather than a custom Docker orchestration.

Related: What Are AI Agents? A Practical Guide for Business Leaders
Related: AI Agent Skills Marketplace: The New Plugin Ecosystem

Terminal Sandboxing and Trust

Running multiple AI agents in your editor creates a real security surface. An agent with terminal access can run arbitrary commands. Two agents sharing a workspace can step on each other’s changes. A malicious MCP server could instruct an agent to exfiltrate your source code.

VS Code 1.109 addresses this with terminal sandboxing (currently experimental on macOS and Linux). Enable it with chat.tools.terminal.sandbox.enabled, and agent-executed commands run inside restricted environments with configurable file and network access policies. An agent can read your source code but not touch your SSH keys. It can run tests but not curl data to external endpoints.

Auto-approval rules (chat.tools.terminal.enableAutoApprove) complement sandboxing by letting safe commands like dir, docker ps, and npm test execute without confirmation prompts, while anything outside the allowlist requires explicit approval. The result is less friction for routine operations and more control over risky ones.
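In settings.json terms, opting in looks roughly like the following. The two setting names come from the release notes; treating both as booleans is our assumption, and the sandbox's file/network policies and the auto-approve allowlist have their own configuration schemas not shown here:

```jsonc
// settings.json — value shapes are assumptions
{
  "chat.tools.terminal.sandbox.enabled": true,
  "chat.tools.terminal.enableAutoApprove": true
}
```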

Copilot Memory

Agents in 1.109 gain persistent memory across sessions via Copilot Memory (enable with github.copilot.chat.copilotMemory.enabled). The agent remembers preferences like “always ask clarifying questions before making changes” or “prefer functional patterns over class-based components.”

This sounds like a small feature, but it solves one of the biggest annoyances with AI coding agents: repeating the same instructions every session. When you combine memory with skills and custom agent definitions, each developer gets an AI team that knows their preferences, their team’s conventions, and their project’s architecture from the first prompt.
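For reference, a sketch of the opt-in, with the setting name from the release and the boolean shape assumed:

```jsonc
// settings.json
{
  "github.copilot.chat.copilotMemory.enabled": true
}
```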

What This Means for Development Teams

VS Code 1.109 is not just another feature release. It is a platform shift. Microsoft is positioning the editor as the runtime for multi-agent development, the same way they positioned Windows as the runtime for desktop applications thirty years ago.

The practical implications for teams adopting AI agents:

Vendor lock-in decreases. When Claude, Codex, and Copilot all run inside the same session management layer, switching between models becomes a dropdown change. You are not locked into one provider’s ecosystem.

Agent governance gets easier. Subagent constraints, terminal sandboxing, and auto-approval rules give teams the controls they need to deploy agents without losing oversight. This matters especially for organizations working under the EU AI Act, where traceability requirements apply to AI-assisted development processes.

Context management becomes explicit. The subagent model forces you to think about which tasks need shared context and which should be isolated. This is the same architectural thinking that makes distributed systems work: explicit boundaries, clear contracts, independent failure domains.

The biggest question 1.109 raises is not about VS Code. It is about what happens when the IDE becomes the orchestration layer for a team’s entire AI agent fleet. If your code review agent, your testing agent, your documentation agent, and your deployment agent all run inside VS Code with persistent memory and shared skills, the editor is no longer a text editor. It is an agent operating system.

Frequently Asked Questions

What are subagents in VS Code 1.109?

Subagents are context-isolated AI agents that run independently from your main chat session. Your main agent delegates tasks to subagents, they execute in their own context windows, and only the final result returns to your primary context. This prevents context window bloat and allows parallel execution of independent tasks.

Can I run Claude and Codex agents inside VS Code?

Yes. VS Code 1.109 supports Claude Agent (in public preview) and Codex agents alongside GitHub Copilot. Claude uses Anthropic’s official Agent SDK, while Codex runs locally or in the cloud. Both require a Copilot Pro+ or Enterprise subscription. All agents are managed through the unified Agent Sessions view.

What is terminal sandboxing in VS Code agent mode?

Terminal sandboxing restricts what commands AI agents can execute. Enabled via the chat.tools.terminal.sandbox.enabled setting, it lets you configure file and network access policies for agent-executed terminal commands. Currently experimental and available on macOS and Linux only.

What are VS Code agent skills?

Agent skills are packaged instruction bundles that teach AI agents domain-specific behavior like testing strategies, API design conventions, or performance optimization patterns. They are now generally available in VS Code 1.109 and can be distributed via extensions using the chatSkills contribution point. Configure them with the chat.agentSkillsLocations setting.

How does Copilot Memory work in VS Code 1.109?

Copilot Memory enables AI agents to retain relevant context and preferences across sessions. Enable it with github.copilot.chat.copilotMemory.enabled. The agent stores and recalls information like coding style preferences, project conventions, and interaction patterns, eliminating the need to repeat instructions in every new session.