Anthropic published the Agent Skills specification on December 18, 2025. Within 48 hours, Microsoft integrated it into VS Code and OpenAI added it to both ChatGPT and Codex CLI. By March 2026, 32 tools from competing companies, including Google’s Gemini CLI, JetBrains’ Junie, AWS’s Kiro, and Block’s Goose, all read the same SKILL.md files from the same directory structure. The anthropics/skills repository hit 100K GitHub stars. Vercel’s skills.sh marketplace lists 89,753 skills. This is the fastest cross-vendor standardization event in AI tooling, and it happened because the entire spec fits in a document you can read during a coffee break.
The story of how it got there is not about technical superiority. It is about the specific conditions that made a “deliciously tiny specification,” as Simon Willison called it, irresistible for every player in the AI coding tool space to adopt rather than reinvent.
What SKILL.md Actually Specifies
The entire specification defines one thing: a directory containing a SKILL.md file with YAML frontmatter. That is the mandatory surface area. Everything else is optional.
---
name: deploy-vercel
description: Deploys applications to Vercel with zero-downtime configuration
---
# Deploy to Vercel
## When to Use
Activate when the user asks to deploy, ship, or push to production.
## Instructions
1. Check for vercel.json in the project root...
2. Run vercel --prod with the appropriate flags...
The frontmatter has two required fields: name (max 64 characters, lowercase plus hyphens) and description (max 1,024 characters). Optional fields include license, compatibility, metadata, and an experimental allowed-tools field for pre-approving tool access. The body is Markdown instructions that the agent reads and follows.
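Those two required fields are easy to lint mechanically. A minimal checker is sketched below; note that "lowercase plus hyphens" is interpreted here as hyphen-separated lowercase tokens, and permitting digits in names is an assumption, not something the constraints above state:

```python
import re

# Assumed interpretation of "lowercase plus hyphens": hyphen-separated
# lowercase tokens. Digits are permitted here as an assumption.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_frontmatter(fields: dict) -> list[str]:
    """Return human-readable violations of the two required-field rules."""
    errors = []
    name = fields.get("name", "")
    if not name:
        errors.append("missing required field: name")
    elif len(name) > 64 or not NAME_RE.match(name):
        errors.append("name must be at most 64 chars, lowercase plus hyphens")
    description = fields.get("description", "")
    if not description:
        errors.append("missing required field: description")
    elif len(description) > 1024:
        errors.append("description must be at most 1,024 chars")
    return errors
```

A compliant skill like the deploy-vercel example above yields an empty list; an uppercase or underscored name gets flagged.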
What makes this work at scale is the three-tier progressive disclosure model. At startup, an agent loads only the name and description from every installed skill, roughly 100 tokens per skill. A developer with 50 skills installed adds about 5,000 tokens of context overhead. The full SKILL.md body (recommended under 5,000 tokens) loads only when the agent activates that skill. Supporting files in scripts/, references/, or assets/ directories load only when the skill explicitly references them during execution.
This means you can have 100 skills installed without any of them competing for context window space until they are actually needed. Compare that to stuffing everything into a single CLAUDE.md or .cursorrules file, where every instruction competes for attention regardless of relevance.
The directory structure reflects this layering:
deploy-vercel/
├── SKILL.md # Required: metadata + instructions
├── scripts/ # Optional: executable helpers
│ └── check-config.sh
├── references/ # Optional: documentation
│ └── vercel-api.md
└── assets/ # Optional: templates
└── vercel.json.template
No runtime. No server. No build step. No package manager required. Just directories and Markdown files that you can version-control alongside your code in git.
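That minimalism shows in how little agent-side code the three-tier model needs. The sketch below builds a tier-1 catalog that reads only each skill's frontmatter at startup and defers the body until activation; frontmatter parsing is simplified to flat `key: value` pairs (a real implementation would use a YAML parser):

```python
from pathlib import Path

def read_frontmatter(skill_md: Path) -> dict:
    """Parse simple `key: value` YAML frontmatter, stopping at the closing ---.

    Assumes flat string fields; nested YAML needs a real parser."""
    lines = skill_md.read_text().splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError(f"{skill_md}: missing frontmatter")
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            return fields
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    raise ValueError(f"{skill_md}: unterminated frontmatter")

class SkillCatalog:
    """Tier 1: hold only name + description per skill; load bodies on demand."""
    def __init__(self, root: Path):
        self.entries = {}  # name -> (description, path to SKILL.md)
        for skill_md in root.glob("*/SKILL.md"):
            fields = read_frontmatter(skill_md)
            self.entries[fields["name"]] = (fields["description"], skill_md)

    def activate(self, name: str) -> str:
        """Tier 2: read the full SKILL.md body only when the skill activates."""
        _, path = self.entries[name]
        # Everything after the closing --- of the frontmatter.
        return path.read_text().split("---", 2)[2]
```

With 50 skills installed, only 50 name/description pairs sit in the catalog; `activate` touches disk for exactly one body at a time.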
The 48-Hour Adoption Chain
The timeline tells you more about this standard than its technical spec does.
Anthropic did not release Agent Skills as an open standard out of nowhere. The anthropics/skills repository was created on September 22, 2025. Skills appeared as a Claude Code feature in October. By early December, Simon Willison noticed that OpenAI’s Codex CLI had already merged a pull request titled “feat: experimental support for skills.md.” Elias Judin found a /home/oai/skills folder inside ChatGPT on December 12. Six days before the official open standard announcement, OpenAI was already building support for it.
On December 18, Anthropic published the specification at agentskills.io. What followed was the most compressed standardization event in recent developer tooling history:
- 48 hours: Microsoft integrated Agent Skills into VS Code via Copilot. OpenAI added support to ChatGPT and Codex CLI. The GitHub repository crossed 20,000 stars.
- January 20, 2026: Vercel launched skills.sh, a full marketplace with CLI installation. The top skill hit 20,000 installs within six hours.
- February 5, 2026: AWS’s Kiro IDE shipped Agent Skills support.
- March 2026: agentskills.io lists 32 adopters, including Google (Gemini CLI), JetBrains (Junie), Sourcegraph (Amp), Block (Goose), Snowflake, Databricks, ByteDance, Mistral AI, and Spring AI.
On December 9, nine days before the open standard release, the Linux Foundation announced the Agentic AI Foundation (AAIF) with Anthropic, OpenAI, and Block as founding members. MCP, AGENTS.md, and Goose were contributed as initial projects. By February 2026, AAIF had grown to 146 member organizations, providing an institutional home for the standard to mature under neutral governance.
Why Competitors Adopted Instead of Forking
Standards bodies spend years trying to get competing vendors to agree on wire formats. Agent Skills got 32 tools in 90 days. Three factors explain why.
The Spec Is Tiny Enough to Implement in a Day
The entire specification is two required YAML fields and a Markdown body. A competent engineer can add Agent Skills support to any tool in an afternoon. There are no complex serialization formats, no protocol negotiations, no auth flows, no runtime dependencies. Compare this to implementing MCP support, which requires running servers, handling OAuth, managing transport layers, and maintaining persistent connections.
When the cost of adopting a standard is near zero, the calculus shifts from “is this the best possible spec?” to “will my users expect this?” And once two major players adopted within 48 hours, the answer for everyone else became yes.
Skills Modularize Knowledge, Not APIs
MCP and Agent Skills are often compared, but they solve different problems. MCP provides structured API access to external tools and data sources. Agent Skills package procedural knowledge: how to perform tasks, which conventions to follow, what patterns to use.
Think of it this way: MCP lets an agent connect to your database. A skill teaches the agent your team’s migration conventions. MCP answers “what can I access?” Skills answer “how should I work?” They are complementary layers, and this meant Agent Skills did not compete with any vendor’s existing protocol investments.
Network Effects Kicked In Fast
Skills are portable by default. A skill written for Claude Code works in Codex CLI, Cursor, Gemini CLI, and Kiro without modification. This meant that the 89,753 skills on skills.sh were immediately useful to every new tool that adopted the standard. For tool makers, supporting Agent Skills meant instant access to a growing library. For skill authors, publishing one skill reached 32 platforms. The flywheel was self-reinforcing from week one.
Block’s engineering team published three principles for designing internal skills and runs over 100 skills for Goose across POS crash investigation, feature flag management, oncall runbooks, and API style enforcement. Enterprise partner skills from Atlassian, Figma, Canva, Stripe, Notion, and Zapier shipped within weeks of the standard’s release.
What the Quality Data Actually Says
Rapid adoption brought a familiar problem: quantity outpacing quality.
The Agent Skill Report analyzed 673 skills and found that 22% fail structural validation. Company-authored skills underperform community collections on compliance metrics. Non-standard files (LICENSE files, build artifacts, schemas) account for 52% of all tokens in skill repositories and waste context window whenever they are loaded. Most skills restate knowledge the LLM already has; the report found that novelty is the key differentiator between useful and useless skills.
Behavioral degradation is measurable. The report identified six mechanisms by which poorly written skills degrade agent performance: template propagation (impact score: -0.483), architectural pattern bleed (-0.317), token budget competition (-0.384), textual frame leakage (-0.233), API hallucination (-0.117), and cross-language code bleed. Installing a bad skill does not just fail to help. It actively hurts the agent’s baseline performance on unrelated tasks.
Simon Willison’s initial analysis noted that the spec is “quite heavily under-specified,” particularly around metadata fields and the experimental allowed-tools mechanism. Different tools implement discovery differently: Claude Code reads from .claude/skills/, OpenAI Codex from .agents/skills/, Google’s tools from ~/.gemini/antigravity/skills/. The spec defines the file format but not the installation path, which means skills work across tools but installation methods do not.
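The path divergence is easy to paper over on the client side. A sketch that checks the three locations named above, treating the list as illustrative since any tool may add or change its path:

```python
from pathlib import Path

# Known per-tool skill directories (project-relative or home-relative),
# as listed above; tools may change these, so treat them as a snapshot.
SEARCH_PATHS = [
    Path(".claude/skills"),                      # Claude Code (project-level)
    Path(".agents/skills"),                      # OpenAI Codex
    Path.home() / ".gemini/antigravity/skills",  # Google tools
]

def discover_skills(project_root: Path) -> dict[str, Path]:
    """Map skill directory names to their paths; first match wins."""
    found = {}
    for base in SEARCH_PATHS:
        base = base if base.is_absolute() else project_root / base
        if base.is_dir():
            for entry in sorted(base.iterdir()):
                if (entry / "SKILL.md").is_file():
                    found.setdefault(entry.name, entry)
    return found
```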
The Security Tax on Openness
The supply chain security picture is sobering. Snyk’s ToxicSkills study scanned 3,984 skills from ClawHub and skills.sh in February 2026. Results: 36% contain security flaws. 76 skills had confirmed malicious payloads. 341 hostile skills were traced to a single coordinated campaign called “ClawHavoc” that delivered the Atomic Stealer (AMOS) macOS infostealer.
The barrier to publishing a skill is a SKILL.md file and a week-old GitHub account. No code signing. No security review. No sandbox by default. This is the price of a spec designed for zero-friction adoption. The same property that made it trivially easy for 32 legitimate tools to adopt also makes it trivially easy for attackers to publish malicious content.
Akamai’s threat analysis lists skill poisoning as a top-10 threat vector for agentic AI in 2026. The attack surface is uniquely dangerous because agent skills run inside tools that already have shell access, file system permissions, and network connectivity. A malicious npm package can access what Node.js allows. A malicious skill inherits the full permissions of the AI agent it runs inside.
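None of this substitutes for human review, but a crude pre-install triage can catch the most blatant patterns before a skill reaches an agent. The pattern list below is an illustrative assumption, not what Snyk or Akamai actually scan for, and a clean result proves nothing:

```python
import re
from pathlib import Path

# Illustrative red-flag patterns only. Real scanners use far richer
# analysis; absence of a match is NOT evidence that a skill is safe.
SUSPICIOUS = [
    (re.compile(r"curl[^\n|]*\|\s*(ba)?sh"), "pipes a remote script into a shell"),
    (re.compile(r"base64\s+(-d|--decode)"), "decodes an embedded payload"),
    (re.compile(r"(\$HOME|~)/\.(ssh|aws)\b"), "touches credential directories"),
]

def triage_skill(skill_dir: Path) -> list[str]:
    """Return a warning for every flagged pattern in the skill's files."""
    warnings = []
    for path in skill_dir.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern, reason in SUSPICIOUS:
            if pattern.search(text):
                warnings.append(f"{path.name}: {reason}")
    return warnings
```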
The resolution will likely come from the Agentic AI Foundation providing governance structure, verified publisher programs, and provenance standards. But as of March 2026, the ecosystem is in the “move fast” phase. Security infrastructure is lagging behind adoption by months.
Where Agent Skills Fit in the Broader Stack
Agent Skills occupy a specific layer in the emerging agentic AI infrastructure:
MCP handles tool access: connecting agents to databases, APIs, and external services. A2A (Agent-to-Agent protocol) handles multi-agent communication. Agent Skills handle procedural knowledge: what an agent should do and how it should work in a given context. AGENTS.md defines project-level configuration and permissions.
This layering means a typical production setup might use MCP servers for Jira, Slack, and database access; Agent Skills for coding conventions, deployment workflows, and review checklists; and A2A for routing tasks between specialized agents. Each layer solves a different problem, and Agent Skills are the only layer that requires zero infrastructure to adopt.
The arXiv analysis of 40,285 publicly listed skills found a median skill size of 1,414 tokens (mean: 1,895), with 90% of skills under 3,935 tokens and 99% under 9,253. These are small, focused units of knowledge, not monolithic configurations. The format encourages specificity: one skill per concern, not one skill that tries to encode everything.
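Those size bands translate into a simple lint: estimate a body's token count and warn near the recommended 5,000-token ceiling. The 4-characters-per-token ratio below is a rough heuristic for English Markdown, not a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: about 4 characters per token for English Markdown."""
    return max(1, len(text) // 4)

def size_warning(body: str, recommended_max: int = 5000):
    """Return a warning string when a skill body likely exceeds the
    recommended size, or None when it is within budget."""
    tokens = estimate_tokens(body)
    if tokens > recommended_max:
        return f"body is ~{tokens} tokens; recommended maximum is {recommended_max}"
    return None
```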
Frequently Asked Questions
What is the Agent Skills open standard?
Agent Skills is an open specification released by Anthropic on December 18, 2025, that defines a universal format for packaging procedural knowledge for AI coding agents. The core unit is a directory containing a SKILL.md file with YAML frontmatter and Markdown instructions. It has been adopted by 32 tools including Claude Code, OpenAI Codex, Cursor, VS Code, Gemini CLI, Kiro, and Goose.
How many tools support Agent Skills?
As of March 2026, 32 tools support the Agent Skills specification, including products from Anthropic (Claude Code), OpenAI (Codex CLI, ChatGPT), Microsoft (VS Code, GitHub Copilot), Google (Gemini CLI), JetBrains (Junie), AWS (Kiro), Block (Goose), Sourcegraph (Amp), Snowflake, Databricks, ByteDance (TRAE), and Mistral AI.
What is the difference between Agent Skills and MCP?
MCP (Model Context Protocol) provides structured API access to external tools and data sources. Agent Skills package procedural knowledge and instructions. MCP answers “what can the agent access?” while Skills answer “how should the agent work?” They are complementary layers. Skills require no server, no runtime, and no infrastructure: just Markdown files in a directory.
Are Agent Skills secure to install?
The ecosystem has significant security gaps. Snyk’s ToxicSkills study found that 36% of skills on major registries contain security flaws and 76 had confirmed malicious payloads. Skills run inside AI agents with shell access and file system permissions, giving malicious skills a larger blast radius than traditional malicious packages. Always review SKILL.md contents before installing and prefer skills from verified publishers.
How do I install Agent Skills?
Installation varies by tool. For Claude Code, place skill directories in .claude/skills/ in your project. For OpenAI Codex, use .agents/skills/. Vercel’s skills.sh marketplace provides a CLI: npx skills add author/skill-name. You can also manually clone or copy any skill directory from GitHub into your tool’s skills folder.
