OpenCode is a free, open-source coding agent that hit 120,000 GitHub stars by solving the one problem every commercial AI coding tool ignores: vendor lock-in. Built by the team behind SST (the open-source serverless framework), it runs in your terminal, connects to 75+ LLM providers, and costs exactly zero dollars. You bring your own API key, or point it at a local model running on your machine. Five million developers use it every month, making it the second most-starred agentic AI repo on GitHub after Langflow.
That star count is not vanity. It represents a bet that the coding agent should be open infrastructure, not a subscription product.
## Why OpenCode Exists: The Provider Lock-In Problem
Every commercial coding tool forces a choice. Cursor ties you to their credit system. Claude Code requires a $20/month Anthropic subscription. GitHub Copilot locks you into the GitHub ecosystem. Switch providers, and you lose your workflow.
OpenCode rejects this model entirely. Its architecture is provider-agnostic by design, supporting OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, Azure OpenAI, OpenRouter, and local models through Ollama, llama.cpp, and LM Studio. You configure your provider in a single config file and switch between them without changing anything else about how you work.
This matters more than it sounds. When Anthropic releases a new Claude model that benchmarks higher on SWE-bench, you switch to it in 30 seconds. When a cheaper provider drops prices, you move your everyday coding to them and save the expensive model for complex refactors. When your company mandates that no code leaves the network, you point OpenCode at a self-hosted model and keep working.
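The switch is cheap because everything above the provider layer talks to a single interface. Here is a minimal sketch of that idea in Go; the type names and stubbed clients are illustrative, not OpenCode's actual internals:

```go
package main

import "fmt"

// Completer abstracts an LLM provider. The name and method are
// illustrative, not OpenCode's real internal API.
type Completer interface {
	Complete(prompt string) (string, error)
}

type anthropicClient struct{ model string }

func (c anthropicClient) Complete(prompt string) (string, error) {
	// A real client would make an HTTP call; stubbed here.
	return fmt.Sprintf("[%s] %s", c.model, prompt), nil
}

type ollamaClient struct{ model string }

func (c ollamaClient) Complete(prompt string) (string, error) {
	// A real client would hit the local Ollama endpoint; stubbed here.
	return fmt.Sprintf("[local:%s] %s", c.model, prompt), nil
}

// newCompleter picks a backend from a config value. Swapping
// providers changes nothing else in the calling code.
func newCompleter(provider, model string) Completer {
	switch provider {
	case "ollama":
		return ollamaClient{model: model}
	default:
		return anthropicClient{model: model}
	}
}

func main() {
	for _, p := range []string{"anthropic", "ollama"} {
		c := newCompleter(p, "some-model")
		out, _ := c.Complete("refactor this function")
		fmt.Println(out)
	}
}
```

Changing `provider: anthropic` to `provider: ollama` in the config file exercises exactly this seam; the rest of the tool never notices.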
The 800+ contributors building OpenCode are not doing it because open-source is fashionable. They are doing it because the alternative is paying three different subscriptions to test which AI model works best for their codebase.
## Architecture: A Go CLI With a SolidJS Soul
Most coding agents are either Electron apps pretending to be lightweight or Python scripts held together with string formatting. OpenCode is neither. It is written in Go, which gives it startup times under 200ms and a single binary with zero dependencies.
The interesting architectural choice is the TUI layer. OpenCode uses a client/server split: the Go backend handles LLM communication, file operations, and tool execution, while the terminal interface is built with OpenTUI, a SolidJS-based rendering library for the terminal. The two communicate over HTTP and Server-Sent Events.
This is unusual. Most terminal tools use libraries like Charm's Bubble Tea for their UI. OpenCode built its own reactive rendering framework specifically so the terminal interface could handle real-time streaming, multi-pane layouts, and responsive updates without blocking the agent’s work.
What you get in practice:
- Session management: Conversations persist in a local SQLite database. Resume where you left off, branch a conversation, or search through past sessions.
- LSP integration: OpenCode connects to your project’s Language Server Protocol, giving the AI the same code intelligence your editor has: jump-to-definition, find-references, type information.
- Tool execution: The agent can run shell commands, modify files, read project structure, and iterate on test failures. Standard agentic coding patterns, but running locally with no cloud sandbox.
- Vim-style editing: The text input supports vim keybindings natively. Not an afterthought mode, but the default for developers who already think in hjkl.
The single binary distribution matters for teams. No Node.js runtime, no Python virtual environment, no Docker container. Download, set an API key, and start coding. DevOps teams deploying it to CI/CD pipelines appreciate this more than individual developers do.
## OpenCode vs Claude Code vs Cursor: Where Each One Wins
The comparison is not about which tool is “better.” Each occupies a different position on the control-cost-capability triangle.
### Context Window and Reasoning Depth
Claude Code’s 1 million token context window is its defining advantage. It can hold an entire medium-sized codebase in working memory and reason across files that have nothing to do with each other. OpenCode’s context window depends on which provider you choose: with Claude via API, you get the same context window but pay per token instead of per month. With GPT-4o, you get 128K tokens. With a local Llama model, you might get 8K.
This is the core trade-off. OpenCode gives you freedom to choose, but the ceiling of what it can do depends on your choice of model.
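To make the ceilings concrete, a rough sizing check helps: a common heuristic is ~4 characters per token for source code. The codebase size below is hypothetical, and the limits are the round numbers quoted above:

```go
package main

import "fmt"

// estimateTokens applies the rough ~4-characters-per-token heuristic
// for source code. It is a ballpark figure, not a real tokenizer.
func estimateTokens(chars int) int { return chars / 4 }

func main() {
	// Hypothetical medium codebase: 500 files averaging 6 KB each.
	totalChars := 500 * 6000
	tokens := estimateTokens(totalChars)
	fmt.Printf("estimated tokens: %d\n", tokens)

	for _, w := range []struct {
		name  string
		limit int
	}{
		{"Claude (1M)", 1_000_000},
		{"GPT-4o (128K)", 128_000},
		{"local 8K", 8_000},
	} {
		fmt.Printf("%s holds whole codebase: %v\n", w.name, tokens <= w.limit)
	}
}
```

Under these assumptions only the 1M window holds the whole codebase at once; the smaller windows force the agent to work file-by-file, which is exactly the reasoning-depth gap described above.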
### Cost Structure
| | OpenCode | Claude Code | Cursor |
|---|---|---|---|
| Base cost | $0 | $20/mo (Pro) | $20/mo (Pro) |
| API costs | Pay-per-token to your provider | Included in subscription (with limits) | Credit-based system |
| Heavy usage | Can get expensive with powerful models | $100-200/mo for Max tiers | $200/mo (Ultra) |
| Budget option | Free with local models | None | 2,000 free completions |
| Team pricing | $0 + API costs | $30/seat/mo | $40/seat/mo |
For light to moderate usage (under 50 substantive queries per day), OpenCode with a mid-tier API key typically costs $15-30/month. That is competitive with Claude Code Pro. But heavy agentic usage, where the agent iterates on test failures across multiple files, can burn through $50-100/month in API costs with top-tier models.
The budget option is where OpenCode has no competition. Running Ollama with a capable local model like Llama 3.1 70B costs nothing beyond your electricity bill. The quality is lower than Claude Opus or GPT-4o, but for autocomplete-style assistance and routine code generation, it works.
### Workflow Fit
Choose OpenCode when: you want model flexibility, work across multiple languages and frameworks, need to keep code local, or already maintain infrastructure for self-hosted models. OpenCode is also the natural choice if you are building internal tooling around an AI coding agent, since you can extend it directly.
Choose Claude Code when: you work on large, complex codebases where deep reasoning across many files is the bottleneck. The 1M context window and Agent Teams feature for multi-agent workflows do not have equivalents in OpenCode.
Choose Cursor when: you spend most of your time editing code in a visual editor and want AI to predict your next keystroke. Cursor’s inline completions and codebase-aware suggestions are faster for day-to-day editing than any terminal-based tool.
## Setting Up OpenCode in 5 Minutes
Installation is as simple as the architecture suggests:
```shell
# macOS / Linux
curl -fsSL https://opencode.ai/install | bash

# Or with Go
go install github.com/sst/opencode@latest
```
Configure your provider:
```yaml
# ~/.opencode/config.yaml
provider: anthropic
api_key: sk-ant-...
model: claude-sonnet-4-20250514
```
Or point it at a local model:
```yaml
provider: ollama
model: llama3.1:70b
endpoint: http://localhost:11434
```
Then start it in your project directory:
```shell
cd your-project
opencode
```
The TUI loads, indexes your project, and you are coding. No account creation, no credit card, no onboarding wizard.
For teams, the zero-infrastructure requirement is the key selling point. Every developer installs the binary, points it at the company’s approved LLM endpoint (whether that is Azure OpenAI, AWS Bedrock, or a self-hosted model), and starts working. No per-seat licensing negotiations. No vendor security reviews beyond the LLM provider you already approved.
## What OpenCode Gets Wrong
No tool survives an honest review without criticism.
No multi-agent orchestration. Claude Code can spawn sub-agents that work on different parts of a change simultaneously. OpenCode runs a single agent session. For large refactors touching dozens of files, this means slower iteration.
Quality depends on your model choice. This is the flip side of provider freedom. A developer running Mistral 7B locally will have a fundamentally different experience than one running Claude Opus via API. Reviews and tutorials of OpenCode rarely acknowledge this variance, which sets the wrong expectations.
Smaller ecosystem. Cursor has a marketplace of extensions. Claude Code has official integrations with GitHub, Linear, and other developer tools. OpenCode’s plugin ecosystem is growing (the awesome-opencode list tracks community projects), but it is thinner.
No cloud agent option. Both Cursor and Claude Code offer cloud-hosted agents that run in sandboxed environments. OpenCode runs exclusively on your machine. For some teams, this is a feature. For others, it means they cannot offload long-running agent tasks.
## Who Should Actually Switch
If you are already productive with Cursor or Claude Code, switching to OpenCode for the sake of open-source purity is not a good trade. Productivity matters more than principles.
Switch if: you are paying for multiple AI coding subscriptions and want to consolidate. OpenCode with a good API key replaces 80% of what you use Cursor and Claude Code for, at a lower total cost.
Switch if: your company requires that code never leaves your network. OpenCode + self-hosted model is the only coding agent setup that achieves this without significant compromise.
Switch if: you are a team lead evaluating AI coding tools and do not want to lock your team into a vendor. OpenCode lets you start with one provider and switch later without retraining anyone.
Stay where you are if: you rely on Cursor’s inline completions (OpenCode has no IDE integration) or Claude Code’s 1M context window for whole-codebase reasoning. Those are genuine capabilities that OpenCode’s architecture cannot replicate.
## Frequently Asked Questions

### Is OpenCode really free?
OpenCode itself is completely free and open-source. You need to provide your own LLM API key (OpenAI, Anthropic, etc.) which has its own costs, or use a free local model through Ollama or llama.cpp. The tool costs nothing; the AI provider behind it may cost something.
### How does OpenCode compare to Cursor for everyday coding?
OpenCode is a terminal-based agent focused on agentic tasks like multi-file edits, debugging, and test iteration. Cursor is a full IDE with inline completions and visual diffs. For moment-to-moment code editing, Cursor is faster. For complex, multi-step changes across a codebase, OpenCode with a strong model is equally capable at a lower cost.
### Can I use OpenCode with local models offline?
Yes. OpenCode supports Ollama, llama.cpp, and LM Studio for fully local, offline AI coding. Quality depends on the model you run. For best results with local models, use at least a 70B parameter model like Llama 3.1 70B if your hardware supports it.
### Why does OpenCode have more GitHub stars than Cursor?
OpenCode’s 120K+ stars reflect the open-source community’s demand for a provider-agnostic coding agent. Cursor is a commercial product with a closed-source codebase, so its popularity is measured in paying subscribers rather than GitHub stars. Stars indicate community enthusiasm, not necessarily that one tool is better than the other.
### What LLM providers does OpenCode support?
OpenCode supports 75+ providers including OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, Azure OpenAI, OpenRouter, and local models through Ollama, llama.cpp, and LM Studio. You can switch providers by changing a single line in your config file.
