Superpowers is an open-source framework that forces AI coding agents to follow a structured engineering workflow: brainstorming, planning, test-driven development, subagent execution, and code review. Built by Jesse Vincent and his team at Prime Radiant, the project hit 97,200 stars on GitHub and landed in the official Anthropic plugin marketplace for Claude Code. The premise is simple: AI agents can write code fast, but fast code without process is just technical debt with extra steps.
If you have been fighting with AI-generated code that works in demo but falls apart in production, Superpowers attacks the root cause. It does not make the model smarter. It makes the model follow the same discipline a senior engineer would enforce on a team.
Why AI Agents Write Bad Code (and How Superpowers Fixes It)
The core problem is not capability. Claude, GPT-4, and Gemini can all produce correct code for well-defined tasks. The problem is process. Ask an AI agent to “add authentication to this app” and it will immediately start writing code. No questions about requirements. No design discussion. No tests. It generates 300 lines, declares victory, and moves on. If something breaks, it patches the symptom instead of fixing the root cause.
Superpowers fixes this by intercepting the agent’s workflow before it writes a single line of code. The framework installs as a set of composable “skills” that trigger automatically based on context. When you ask your agent to build something, Superpowers takes over the process:
Phase 1: Brainstorming. Instead of coding, the agent asks questions. It explores requirements, surfaces edge cases, and proposes alternatives. The design gets presented in chunks short enough to actually read and approve. This Socratic approach catches misunderstandings before they become 500-line refactors.
Phase 2: Planning. After you approve the design, the agent breaks the work into bite-sized tasks (2 to 5 minutes each). Every task includes exact file paths, code snippets, and verification steps. The plan is detailed enough that, as the docs put it, “an enthusiastic junior engineer with poor taste, no judgment, no project context, and an aversion to testing” could follow it.
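A hypothetical task in that style might look like the sketch below; the file path, helper name, and test command are all invented for illustration, not taken from the Superpowers docs:

```markdown
### Task 3: Add slugify helper (est. 3 min)
- File: `src/slug.py`
- Change: add `slugify(text)` that lowercases the input and replaces spaces with hyphens
- Verify: `python -m pytest tests/test_slug.py` passes with zero failures
```

The point is that every task names an exact file, an exact change, and an exact verification command, so there is no room for the agent to improvise.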
Phase 3: Test-Driven Development. This is where Superpowers gets opinionated. Tests come first, always. The agent writes a failing test, watches it fail, writes minimal code to pass it, then refactors. If code exists without tests, the framework instructs the agent to delete it. No exceptions.
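The red-green-refactor loop the framework enforces can be sketched in a few shell commands. Everything here is hypothetical (the `slugify` helper, the file names) and assumes `python3` is on your PATH; it is an illustration of the cycle, not Superpowers itself:

```shell
# Set up a scratch directory for the demo.
mkdir -p tdd-demo && cd tdd-demo

# Red: write the test first, before any implementation exists.
cat > test_slug.py <<'EOF'
from slug import slugify
assert slugify("Hello World") == "hello-world"
print("PASS")
EOF

# Run it and watch it fail -- slug.py does not exist yet.
python3 test_slug.py 2>/dev/null || echo "RED: test fails, as expected"

# Green: write the minimal code that makes the test pass.
cat > slug.py <<'EOF'
def slugify(text):
    # Minimal implementation: lowercase and hyphenate.
    return text.lower().replace(" ", "-")
EOF

python3 test_slug.py  # prints PASS
```

Watching the test fail first is the step agents (and humans) most often skip; it proves the test can actually detect the bug it claims to guard against.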
Phase 4: Subagent Execution. Individual tasks get dispatched to specialized subagents that work in isolated git worktrees. This means the agent is not polluting your main branch with half-finished work. Each subagent operates independently, proves its code works in a clean environment, and reports back.
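The isolation mechanism here is standard `git worktree`, which gives each branch its own working directory. A minimal sketch of the pattern, with hypothetical repository and branch names:

```shell
# Create a scratch repository with one commit on main.
git init demo && cd demo
git config user.email "agent@example.com"
git config user.name "Agent"
git commit --allow-empty -m "initial commit"

# Give a feature its own checkout on its own branch.
# The subagent works in ../demo-auth; this directory stays clean.
git worktree add ../demo-auth -b feature/auth

# Both checkouts show up here, each pinned to its branch.
git worktree list
```

Because each worktree is a separate directory, a subagent can run builds and tests there without ever touching the files on your main branch.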
Phase 5: Code Review. Before any task is considered complete, Superpowers runs a two-stage review. First, it checks against the original spec: does this actually do what was planned? Second, it evaluates code quality, test coverage, security, and performance. Critical issues block the merge. This is the same pattern used by engineering teams at companies like Google and Meta, applied to an AI agent’s output.
The 11 Core Skills
Superpowers ships with 11 composable skills that handle every phase of the development cycle. Unlike traditional frameworks where orchestration logic lives in your application code, these skills are embedded instructions that shape agent behavior:
Development Workflow Skills
- brainstorming: Activates before any coding begins. Refines ideas through questions, explores alternatives, presents design in digestible sections.
- writing-plans: Converts approved designs into step-by-step implementation plans with exact file paths and verification criteria.
- executing-plans: Manages the sequential execution of plan steps, tracking progress and handling dependencies.
- subagent-driven-development: Dispatches individual tasks to fast, focused subagents. Two-stage review catches issues before they propagate.
Quality Enforcement Skills
- test-driven-development: Enforces red-green-refactor cycles. Every feature starts with a failing test, not with implementation code.
- requesting-code-review: Triggers the two-phase review (spec compliance, then quality) before completion.
- receiving-code-review: Handles feedback from reviews, prioritizing critical blockers over style suggestions.
- systematic-debugging: A 4-phase root cause analysis process that prevents the “try random fixes until something works” pattern most agents default to.
- verification-before-completion: Final check that all tests pass, requirements are met, and no regressions were introduced.
Git and Meta Skills
- using-git-worktrees: Creates isolated branches for each feature, keeping your main branch clean during development.
- writing-skills: A meta-skill for creating new custom skills, allowing teams to extend the framework with domain-specific workflows.
The key difference from frameworks like LangGraph or CrewAI is that these are not orchestration APIs you call from code. They are behavioral instructions that shape how the agent thinks and works. You do not import a library. You install a methodology.
Where Superpowers Fits (and Where It Does Not)
Superpowers is not competing with LangChain, LangGraph, or CrewAI directly. Those frameworks handle agent orchestration: routing, tool use, memory, and multi-agent coordination for production applications. Superpowers handles something different: it teaches coding agents to write better software during development.
Think of it this way: LangGraph helps you build an AI agent that processes customer support tickets. Superpowers helps your AI coding assistant build that LangGraph application correctly, with tests, specs, and clean architecture.
Best For
- Solo developers using AI coding agents daily. If you rely on Claude Code, Cursor, or Codex for feature development, Superpowers turns your agent from a code generator into a pair programmer that follows engineering discipline.
- Teams onboarding AI into existing codebases. The TDD enforcement and spec-first approach reduce the “AI slop” problem where generated code degrades codebase quality over time.
- Projects where correctness matters more than speed. Financial services, healthcare, infrastructure. Anywhere a bug is more expensive than a 10-minute planning phase.
Not Ideal For
- Quick prototypes and throwaway scripts. The brainstorming and planning phases add overhead. If you need a one-off data transformation script, the ceremony is not worth it.
- Non-coding agent applications. Superpowers is specifically about software development. If you are building agents for customer service, sales, or data analysis, look at orchestration frameworks instead.
- Teams that need custom orchestration logic. Superpowers is opinionated by design. If your workflow does not match its five-phase model, you will fight the framework instead of benefiting from it.
Getting Started in 5 Minutes
Superpowers supports five platforms as of March 2026. Installation is a single command on each:
Claude Code (official plugin):
/plugin install superpowers@claude-plugins-official
Cursor:
/add-plugin superpowers
Codex, OpenCode, Gemini CLI: Each has platform-specific installation documented in the repository.
After installation, there is nothing to configure. The skills activate automatically based on context. Ask your agent to build a feature, and you will notice the difference immediately: it starts asking questions instead of writing code.
For teams, the writing-skills meta-skill lets you create custom skills that encode your project’s conventions. If your team requires specific testing patterns, architecture decisions, or documentation standards, you can express them as Superpowers skills that the agent follows automatically.
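Skills in this ecosystem are markdown instruction files rather than code. A custom team skill might look like the sketch below; the frontmatter fields follow the `SKILL.md` convention used by Claude Code skills, but the skill name, trigger, and rules are hypothetical placeholders, not a verbatim Superpowers skill:

```markdown
---
name: enforce-repository-pattern
description: Use when adding any database access code in this project
---

# Enforce Repository Pattern

When writing code that touches the database:

1. Never call the ORM directly from route handlers.
2. Put all queries behind a repository class in `src/repositories/`.
3. Every repository method gets a test against an in-memory fixture.
```

Because the skill is plain text, anyone on the team can review and version it like any other file in the repository.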
Superpowers vs. Going Raw: What Changes in Practice
The best way to understand Superpowers is through what it prevents. Better Stack’s in-depth guide walked through building a web application with Superpowers enabled. The agent caught and fixed its own bugs during the review phase, before delivering the final code. Without the framework, those bugs would have landed in a pull request or, worse, in production.
Developers report the biggest difference in two areas. First, the brainstorming phase catches requirement misunderstandings that normally surface three commits later as “oh wait, that is not what I meant.” Second, TDD enforcement means the agent produces code that is actually testable, not monolithic functions that resist unit testing.
The trade-off is real: the planning phases add 5 to 15 minutes to any non-trivial task. For a complex feature, that is time well spent. For renaming a variable, it is overhead. The Superpowers documentation acknowledges this directly and recommends disabling the framework for trivial changes.
With 97,200 GitHub stars and growing, Superpowers is one of the clearest signals that the AI coding tool ecosystem is maturing past “generate code fast” toward “generate code correctly.” Whether you adopt the framework or not, the principles it enforces (spec first, test first, review always) are worth stealing for any team using AI coding agents.
Frequently Asked Questions
What is the Superpowers agentic skills framework?
Superpowers is an open-source framework that enforces a structured software development workflow on AI coding agents. It includes 11 composable skills covering brainstorming, planning, test-driven development, subagent execution, and code review. Created by Jesse Vincent and Prime Radiant, it has over 97,000 GitHub stars and works with Claude Code, Cursor, Codex, OpenCode, and Gemini CLI.
How does Superpowers enforce TDD on AI coding agents?
Superpowers enforces a strict red-green-refactor cycle. The AI agent must write a failing test first, verify the test actually fails, write minimal code to make it pass, then refactor. If code exists without accompanying tests, the framework instructs the agent to delete it. This prevents the common pattern of AI agents generating untested code.
Is Superpowers a LangChain alternative?
No. Superpowers and LangChain solve different problems. LangChain is an orchestration framework for building AI agent applications (chatbots, RAG systems, tool-using agents). Superpowers is a development methodology framework that teaches AI coding assistants to write better software through structured workflows, TDD, and code review. They can be used together: Superpowers helps your coding agent build LangChain applications correctly.
Which AI coding tools work with Superpowers?
As of March 2026, Superpowers officially supports Claude Code (via the Anthropic plugin marketplace), Cursor, Codex, OpenCode, and Gemini CLI. Claude Code has the most mature integration through the official plugin system.
What are the downsides of using Superpowers?
The main trade-off is overhead. The brainstorming and planning phases add 5 to 15 minutes to non-trivial tasks, which is not worthwhile for quick fixes or throwaway scripts. The framework is also opinionated: if your development workflow does not match its five-phase model, you may find it restrictive. It is specifically designed for software development tasks and does not apply to other types of AI agent applications.
