LobeHub is an open-source AI agent workspace with 74,000 GitHub stars, 5,600+ community-built agents, and native MCP support that turns it into a hub for tools, models, and multi-agent collaboration. It started life as a polished ChatGPT UI alternative. Version 2.0 repositioned it as something more ambitious: a self-hosted “agent operating system” where you assemble AI teammates, connect them to your tools via MCP, and orchestrate them in groups with a supervisor agent.
That pitch sounds like every other open-source AI project in 2026. What makes LobeHub worth paying attention to is that the execution backs it up. The UI is genuinely better than most commercial products. The agent marketplace has real depth. And the architecture handles everything from a single Docker container to a full enterprise deployment with SSO, RBAC, and PostgreSQL-backed persistence.
Not Another ChatGPT Clone: What LobeHub Actually Does
The first thing most people see when they open LobeHub is a chat interface. That first impression is both its strength and the source of its biggest misconception. Yes, you can use it as a ChatGPT/Claude UI that routes to any model provider. But treating it as “just a chat wrapper” misses the point entirely.
LobeHub is built on Next.js 16 with a React 19 frontend, PostgreSQL + PGVector for storage, and tRPC for the API layer. The tech stack matters because it means LobeHub is a real web application, not a thin wrapper around API calls. It persists conversations, manages user sessions with proper auth (OAuth, WebAuthn, TOTP 2FA, OIDC/SSO via Better-Auth), and stores embeddings for its built-in RAG knowledge base.
Three features separate LobeHub from simpler chat UIs like Open WebUI or LibreChat:
Agent marketplace with 5,600+ agents. These are not just system prompt templates. Each agent in the LobeHub marketplace includes a defined persona, tool configurations, model preferences, and behavioral instructions. Categories span programming, academic research, copywriting, marketing, translation, and dozens more. Anyone can submit agents via GitHub PR, and an automated i18n pipeline translates them across languages.
Native MCP integration. The MCP marketplace lets you install MCP servers with one click: databases, APIs, web scrapers, file systems, and specialized tools. LobeHub claims access to 10,000+ skills through MCP-compatible integrations. This matters because MCP has become the standard protocol for connecting agents to external tools, and LobeHub adopted it early and aggressively.
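Under the hood, MCP is a JSON-RPC 2.0 protocol spoken over stdio or HTTP. As a rough illustration of the wire format (not LobeHub's actual client code; the version string and client name are placeholders), the handshake a client sends to an MCP server looks roughly like this:

```python
import json

def make_initialize_request(request_id: int = 1) -> str:
    """Build an MCP 'initialize' request as a JSON-RPC 2.0 message.

    Field names follow the MCP specification; the protocol version
    string here is illustrative -- real clients negotiate the version
    with the server during this handshake.
    """
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # example version date
            "capabilities": {},               # client capabilities
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }
    return json.dumps(message)

print(make_initialize_request())
```

After this exchange the client can list the server's tools and invoke them, which is what a one-click install in the MCP marketplace wires up for you.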
Artifacts rendering. Like Anthropic’s Claude artifacts, LobeHub can render HTML, SVG, and code outputs live in the conversation. Ask it to build a dashboard mockup and it renders the HTML right there. This is not unique anymore, but the implementation is smooth.
50+ Model Providers: The Most Model-Agnostic Open-Source Option
LobeHub supports over 50 AI model providers through a unified abstraction layer. That is not a typo. The list includes OpenAI (GPT-4o, o1, o3 reasoning models), Anthropic Claude (3.5 through 4.6), Google Gemini, AWS Bedrock, DeepSeek, Groq, Together AI, Perplexity, Mistral, Azure OpenAI, Moonshot, ZhipuAI, Qwen, and many more.
For local model users, LobeHub has first-class Ollama integration. Point it at your Ollama instance and every local model appears in the model selector. It also supports vLLM as a backend for teams that need higher throughput.
The model abstraction handles capability detection automatically. Vision models get image upload support. Reasoning models (o1, o3) get chain-of-thought visualization. Function-calling models get tool access. You do not configure this per model; LobeHub reads the model’s capability flags and adjusts the UI accordingly.
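The idea can be sketched in a few lines. This is a hypothetical model of the mechanism — the `ModelCard` structure and flag names are illustrative, not LobeHub's internal schema:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical capability flags for a model (illustrative only)."""
    name: str
    vision: bool = False
    reasoning: bool = False
    function_call: bool = False

def ui_features(model: ModelCard) -> list[str]:
    """Map capability flags to the UI affordances described above."""
    features = ["chat"]  # every model gets plain chat
    if model.vision:
        features.append("image-upload")
    if model.reasoning:
        features.append("chain-of-thought-view")
    if model.function_call:
        features.append("tool-access")
    return features

print(ui_features(ModelCard("o1", reasoning=True)))
# → ['chat', 'chain-of-thought-view']
```

The point is that the per-model configuration lives in the capability flags, so adding a new provider does not require touching the UI logic.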
This matters for teams that do not want to be locked into a single provider. You can run Claude for complex reasoning, GPT-4o for quick tasks, and a local Llama model for sensitive data processing, all within the same workspace, switching between them per conversation or per message.
Multi-Agent Groups: Supervisor Orchestration in a Chat Interface
The v2.0 feature that signals LobeHub’s ambitions beyond “chat UI” is Agent Groups. You assemble multiple specialized agents into a group, assign a supervisor agent, and the supervisor orchestrates work by delegating subtasks, collecting results, and managing the conversation flow.
This is multi-agent orchestration packaged for non-developers. In frameworks like LangGraph or CrewAI, you write code to define agent roles, routing logic, and state management. In LobeHub, you pick agents from the marketplace, drop them into a group, and the supervisor handles coordination.
A practical example: create a group with a Research Agent (configured with web search MCP tools), an Analyst Agent (with data processing skills), and a Writer Agent (with your brand voice instructions). Ask the group to “research competitor pricing for our Q2 report,” and the supervisor delegates web research, passes findings to the analyst for structuring, then hands the structured data to the writer for the final output.
The orchestration is not as flexible as code-based frameworks. You cannot define custom routing graphs or implement complex error recovery patterns. But for 80% of multi-agent use cases, the visual group builder eliminates the need to write orchestration code at all.
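The Research → Analyst → Writer flow above reduces to a simple delegation loop. A minimal sketch in plain Python — not LobeHub's implementation; the lambdas stand in for LLM-backed marketplace agents:

```python
from typing import Callable

Agent = Callable[[str], str]

def supervisor(task: str, pipeline: list[tuple[str, Agent]]) -> str:
    """Delegate a task through a fixed sequence of agents.

    Each agent receives the previous agent's output; the supervisor
    just routes results. Real orchestrators also handle retries,
    branching, and shared conversation state.
    """
    result = task
    for name, agent in pipeline:
        result = agent(result)
    return result

# Stub agents standing in for LLM-backed marketplace agents.
researcher = lambda t: f"findings({t})"
analyst    = lambda t: f"structured({t})"
writer     = lambda t: f"report({t})"

output = supervisor(
    "competitor pricing for Q2",
    [("Research", researcher), ("Analyst", analyst), ("Writer", writer)],
)
print(output)  # → report(structured(findings(competitor pricing for Q2)))
```

A fixed linear pipeline like this is exactly the kind of coordination the visual group builder handles; custom routing graphs are where code-based frameworks still win.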
The Memory System
LobeHub v2.0 includes a personal memory system that tracks six dimensions: activities, contexts, experiences, identities, preferences, and personas. The system learns from your interactions over time, building a profile that influences how agents respond to you.
This is not the same as conversation history. It is persistent memory that survives across sessions and applies across agents. If you tell one agent you prefer concise responses, the memory system can surface that preference to other agents in the workspace. The implementation uses PostgreSQL-backed storage with vector embeddings for semantic retrieval.
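Semantic retrieval over stored memories boils down to nearest-neighbour search on embeddings. A toy sketch with hand-rolled three-dimensional vectors — LobeHub stores real model-generated embeddings in PGVector, and the `recall` helper and vectors here are purely illustrative:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "memories" with fake 3-d embeddings; a real system stores
# model-generated embeddings in a pgvector column instead.
memories = [
    ("prefers concise responses", [0.9, 0.1, 0.0]),
    ("works on a Q2 pricing report", [0.1, 0.9, 0.2]),
]

def recall(query_vec: list[float]) -> str:
    """Return the stored memory most similar to the query embedding."""
    return max(memories, key=lambda m: cosine(query_vec, m[1]))[0]

print(recall([0.8, 0.2, 0.1]))  # → prefers concise responses
```

In production this `max` over a Python list becomes an indexed distance query in PostgreSQL, which is why PGVector is part of the stack.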
Self-Hosting: Docker, Desktop, or Vercel
LobeHub offers multiple deployment paths, and the self-hosted community edition is completely free under the Apache 2.0 license.
Docker Compose is the production path. The full stack includes PostgreSQL (with PGVector for RAG), MinIO or RustFS for file storage, Redis for caching, and optionally Casdoor for SSO. The Docker Compose configuration supports three modes: localhost development, LAN access via port-based routing, and custom domain deployment with HTTPS.
Desktop app (Electron) runs LobeHub as a native application backed by PGlite, a WASM build of Postgres, so the entire database lives inside the app process. No server needed. This is useful for individual users who want a local AI workspace without managing infrastructure.
Vercel deployment works for lightweight setups using Vercel’s managed PostgreSQL and Blob storage. One-click deploy from the repo. This is the fastest way to get a hosted instance, but it trades the full Docker feature set for simplicity.
For enterprise teams, the Docker Compose stack optionally includes Grafana + Prometheus + Tempo for monitoring, giving you observability into model usage, response times, and error rates across the workspace.
The self-hosted approach matters for data sovereignty. Your conversations, knowledge bases, and agent configurations stay on your infrastructure. No data leaves your network unless you route to cloud model APIs, and even that is optional if you run local models through Ollama.
Where LobeHub Fits: Not Dify, Not Langflow, Not Open WebUI
The natural comparison points are Dify (125K stars) and Langflow (143K stars), but LobeHub occupies a different category.
Dify is an LLMOps platform. It gives you a workflow canvas, RAG pipeline builder, model management, and API deployment. You build AI backends with Dify. Langflow is a Python-native visual builder where every component is editable source code. You build agent pipelines with Langflow.
LobeHub is the user-facing layer. It is where people interact with agents, not where they build agent backends. Many teams use Dify or Langflow to construct the AI logic and LobeHub as the polished interface their team actually uses day to day.
Compared to Open WebUI (another popular self-hosted chat UI), LobeHub is heavier but more capable. Open WebUI is a clean, simple Ollama frontend. LobeHub is a full workspace with agents, MCP tools, knowledge bases, memory, and team collaboration. If you want a lightweight local chat UI, Open WebUI is simpler. If you want a platform your whole team uses with shared agents and tool integrations, LobeHub is the better fit.
| Feature | LobeHub | Dify | Langflow | Open WebUI |
|---|---|---|---|---|
| Primary role | Agent workspace/UI | LLMOps backend | Pipeline builder | Chat UI |
| GitHub stars | 74K | 125K | 143K | 80K+ |
| Agent marketplace | 5,600+ | Templates | Components | Community |
| MCP support | Native marketplace | API tools | Components | Limited |
| Multi-agent groups | Yes (supervisor) | Workflow nodes | Graph-based | No |
| Self-hosted | Docker/Desktop/Vercel | Docker | Docker/pip | Docker |
| License | Apache 2.0 | Custom open-source | MIT | MIT |
Getting Started: The Five-Minute Path
The fastest way to try LobeHub is the Docker one-liner:
```shell
docker run -d -p 3210:3210 lobehub/lobe-chat
```
That gives you a single-container instance with client-side storage (no PostgreSQL). Good enough to explore the UI, test agent configurations, and connect your API keys.
For a persistent setup with database-backed storage, knowledge base support, and multi-user auth, use the Docker Compose stack:
```shell
git clone https://github.com/lobehub/lobe-chat.git
cd lobe-chat/docker-compose
cp .env.example .env
# Edit .env with your model API keys
docker compose up -d
```
The .env file is where you configure model provider API keys, auth settings, and storage options. The compose stack handles PostgreSQL, Redis, and MinIO automatically.
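A minimal sketch of the kind of values that go into `.env`. The variable names below follow LobeHub's documented conventions at the time of writing but may change between versions — treat the repo's `.env.example` as authoritative:

```shell
# Model providers -- set keys only for the ones you use
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Local models: point LobeHub at an Ollama instance
OLLAMA_PROXY_URL=http://host.docker.internal:11434

# Gate the instance behind a shared access code
ACCESS_CODE=choose-a-strong-code
```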
Once running, the first thing to do is explore the Agent Marketplace. Install a few agents relevant to your work, connect MCP tools for your common integrations, and try an Agent Group to see the orchestration in action.
Frequently Asked Questions
What is LobeHub and how does it differ from ChatGPT?
LobeHub is an open-source AI agent workspace with 74,000 GitHub stars. Unlike ChatGPT, it supports 50+ model providers (OpenAI, Anthropic, Google, local Ollama models), has a marketplace with 5,600+ community agents, native MCP tool integration, and multi-agent group collaboration. You can self-host it for full data sovereignty.
Is LobeHub free to self-host?
Yes. LobeHub’s community edition is free under the Apache 2.0 license. You can deploy it via Docker, Docker Compose, the Electron desktop app, or Vercel. You bring your own API keys for model providers, or use free local models through Ollama.
How does LobeHub compare to Dify and Langflow?
LobeHub is a user-facing agent workspace (where people interact with AI agents). Dify is an LLMOps backend for building AI workflows. Langflow is a Python-native visual pipeline builder. Many teams use Dify or Langflow for the AI backend and LobeHub as the polished frontend their team uses daily.
Does LobeHub support MCP (Model Context Protocol)?
Yes. LobeHub has native MCP support with a one-click MCP marketplace. You can install MCP servers for databases, APIs, web scrapers, file systems, and more. The platform claims access to 10,000+ skills through MCP-compatible integrations.
Can LobeHub run with local AI models?
Yes. LobeHub has first-class Ollama integration for local model serving. Point it at your Ollama instance and all local models appear in the model selector. It also supports vLLM for higher-throughput production deployments. This means your data never leaves your network.
