Google’s Agent Development Kit (ADK) is the first major framework that treats both MCP and A2A as native primitives instead of community-maintained plugins. With 17,600+ GitHub stars, 236 contributors, SDKs in Python, TypeScript, Go, and Java, and a v1.0 stable release already shipped, ADK has moved from Google I/O demo to production toolkit in under a year. Renault Group, Box, and Revionics already run agents built on it.
What makes ADK different from LangGraph, CrewAI, and the rest is not just Google backing. It is a specific architectural bet: that agent-to-tool communication (MCP) and agent-to-agent communication (A2A) belong in the same framework rather than being bolted on later. Whether that bet pays off depends on what you are building.
What ADK Actually Is (and Is Not)
ADK is an open-source, code-first framework for building single-agent and multi-agent systems. Apache 2.0 licensed, model-agnostic (though optimized for Gemini), and deployable anywhere from a laptop to Vertex AI Agent Engine.
The framework follows three principles that set it apart from the competition:
Agents as composable units. Every agent in ADK can function standalone or be plugged into a larger hierarchy. A parent agent delegates to child agents based on their descriptions, and those child agents can themselves have children. You compose agent systems the way you compose functions: by nesting them.
Protocols as first-class features. MCP support means any MCP-compatible tool works out of the box. A2A support means agents built in different frameworks (or by different teams) can discover and collaborate with each other without custom integration code. No other major framework ships both natively.
Code over configuration. Agent logic, tools, and orchestration are defined in Python (or TypeScript, Go, Java) rather than YAML files or visual builders. This means you get version control, type checking, unit tests, and code review on your agent definitions.
Here is what a minimal ADK agent looks like in Python:
from google.adk.agents import LlmAgent
from google.adk.tools import google_search  # ADK's built-in Google Search tool

agent = LlmAgent(
    model="gemini-2.0-flash",
    name="research_assistant",
    description="Answers research questions with cited sources",
    instruction="You are a research assistant. Always cite your sources.",
    tools=[google_search],
)
That is a working agent. No boilerplate classes, no state graph definitions, no YAML manifests. The trade-off: you get less fine-grained control over execution flow than LangGraph gives you. More on that below.
Multi-Agent Architecture: Hierarchies, Not Flat Teams
CrewAI organizes agents into “crews” with roles. LangGraph composes them as nodes in a state graph. ADK uses a hierarchy where parent agents route tasks to specialized sub-agents.
The routing is LLM-driven by default. When a user sends a message, the parent agent reads each child’s description field and decides which sub-agent should handle the request. If no description matches, the parent handles it directly.
from google.adk.agents import LlmAgent
from google.adk.tools import google_search

greeter = LlmAgent(
    model="gemini-2.0-flash",
    name="greeter",
    description="Handles greetings and small talk",
)

researcher = LlmAgent(
    model="gemini-2.0-flash",
    name="researcher",
    description="Answers factual questions using search",
    tools=[google_search],
)

coordinator = LlmAgent(
    model="gemini-2.0-flash",
    name="coordinator",
    description="Routes user requests to specialized agents",
    sub_agents=[greeter, researcher],
)
For deterministic workflows, ADK also offers three workflow agent types that do not use an LLM for routing:
- SequentialAgent: executes sub-agents in order (A, then B, then C)
- ParallelAgent: runs sub-agents concurrently and collects results
- LoopAgent: repeats a sub-agent until an exit condition is met
This dual approach (LLM-driven routing for flexible tasks, deterministic orchestration for predictable pipelines) is ADK’s strongest architectural decision. LangGraph can do both but requires you to build the routing logic yourself. CrewAI is mostly LLM-driven with limited deterministic control.
Native MCP: Every Tool Speaks the Same Language
The Model Context Protocol standardizes how agents access external tools and data sources. ADK treats MCP servers as plug-and-play components.
Connect to any MCP server with a few lines:
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

agent = LlmAgent(
    model="gemini-2.0-flash",
    name="github_agent",
    tools=[
        MCPToolset(
            connection_params=StdioServerParameters(
                command="npx",
                args=["-y", "@modelcontextprotocol/server-github"],
            )
        )
    ],
)
That agent now has full access to GitHub through the official MCP server: reading repos, creating issues, reviewing PRs. The same pattern works for the 2,000+ MCP servers available as of February 2026, from Slack and PostgreSQL to Jira and Google Drive.
This is where ADK’s protocol-native approach shows its value. In LangGraph, you would install a separate langchain-mcp-adapters package, configure the connection, and wrap it in a LangChain tool. In CrewAI, MCP support arrived as a community plugin. In ADK, it is part of the core framework with the same API surface as built-in tools.
Native A2A: When Agents Need Other Agents
The Agent-to-Agent protocol solves a different problem than MCP. Where MCP connects agents to tools, A2A connects agents to other agents, even agents built on different frameworks by different teams.
ADK’s A2A support means you can:
- Expose any ADK agent as an A2A server that other agents (built in LangGraph, CrewAI, or anything else) can discover and call
- Connect to external A2A agents from within your ADK application
- Mix frameworks in the same system: your orchestrator in ADK, a specialized analysis agent in LangGraph, a data pipeline agent in CrewAI
This is not theoretical. The Google Codelabs A2A tutorial demonstrates a multi-agent system where ADK agents coordinate with external A2A-compatible agents for trip planning, restaurant booking, and real-time weather checks.
For enterprises running agents from multiple vendors, A2A support is not a nice-to-have. It is the difference between a collection of disconnected agents and an actual multi-agent system.
ADK vs. LangGraph vs. CrewAI: When to Pick What
The three frameworks solve overlapping but distinct problems. Here is where each one wins.
Pick Google ADK when:
- You need native MCP and A2A without adapter packages
- Your team already uses Google Cloud (Vertex AI, BigQuery, Cloud Run)
- You want multi-language support (Python, TypeScript, Go, Java)
- Your agents need to interoperate with agents built on other frameworks
- You prefer hierarchy-based agent orchestration over graph-based
Pick LangGraph when:
- You need fine-grained control over every state transition
- Your agent workflows have complex conditional branching and retry logic
- You want the most mature observability tooling (LangSmith)
- You are already invested in the LangChain ecosystem
Pick CrewAI when:
- Speed to prototype matters more than architectural control
- Your agent system maps naturally to team roles (researcher, writer, reviewer)
- You want the lowest barrier to entry
- You do not need deterministic workflow orchestration
One critical difference: ADK’s model-agnostic support extends beyond Gemini through LiteLLM integration, which means you can use Claude, GPT-4, Mistral, Llama, or any model LiteLLM supports. You are not locked into Google’s models, even though Gemini gets first-class optimization.
Deployment: From Laptop to Vertex AI
ADK offers four deployment paths:
Local development. Run adk web for a built-in web UI that shows agent interactions, tool calls, and session state in real time. The CLI also supports adk run for terminal-based testing and adk eval for running evaluation datasets.
Cloud Run. Containerize your agent with a Dockerfile and deploy to Cloud Run. Google provides a quickstart template that handles health checks, scaling, and session management.
Vertex AI Agent Engine. The fully managed option. Upload your agent code, and Google handles infrastructure, scaling, monitoring, and integration with 100+ pre-built connectors (Salesforce, SAP, ServiceNow). This is where enterprise teams with compliance requirements will land.
Docker/Kubernetes. For teams that need full infrastructure control, ADK agents run in standard Docker containers on any Kubernetes cluster, including GKE, EKS, or on-premises.
The deployment flexibility matters because it removes the “vendor lock-in” objection. You can start on Vertex AI for convenience and migrate to self-hosted Kubernetes if your requirements change.
Built-in Evaluation: Test Agents Like Software
ADK includes an evaluation framework that treats agent testing as a first-class concern rather than an afterthought. You define test cases as datasets and run them against your agent to measure response quality, tool usage accuracy, and multi-turn conversation coherence.
adk eval my_agent test_cases.json
This integrates with the Vertex AI Evaluation Service for automated scoring against custom rubrics. You can also define custom evaluators that check specific business logic: did the agent cite the correct source? Did it stay within the cost threshold? Did it hand off correctly to the right sub-agent?
For teams building production agent systems where non-deterministic outputs make traditional unit tests insufficient, built-in evaluation support eliminates the need to build a separate testing harness from scratch.
What ADK Gets Wrong (For Now)
ADK is not the right choice for every project:
Observability is still catching up. LangGraph has LangSmith. ADK has Vertex AI tracing and Cloud Logging, which works well in the Google ecosystem but is less flexible than a framework-agnostic observability solution. Teams using Langfuse or Arize Phoenix will need custom integration.
Community and ecosystem are younger. LangGraph has 24,000+ stars and years of LangChain ecosystem tooling behind it. ADK’s 17,600 stars and growing community are strong for a framework that launched in April 2025, but the number of tutorials, Stack Overflow answers, and community-built extensions is still smaller.
Session management is Vertex-centric. While ADK’s session handling works locally, the production-grade features (persistent sessions, cross-session memory, automatic state management) lean heavily on Vertex AI Agent Engine. Self-hosted deployments require more manual session management.
No visual graph editor. LangGraph’s graph-based model makes agent workflows visually inspectable. ADK’s hierarchy-based model is harder to visualize for complex multi-agent systems, though the adk web UI provides step-by-step execution tracing.
Frequently Asked Questions
What is Google ADK (Agent Development Kit)?
Google ADK is an open-source, code-first framework for building AI agents and multi-agent systems. It supports Python, TypeScript, Go, and Java, includes native MCP and A2A protocol support, and can deploy anywhere from a local machine to Vertex AI Agent Engine.
Is Google ADK free to use?
Yes. Google ADK is open-source under the Apache 2.0 license with no license fees or subscriptions. You install it with pip install google-adk and build locally. Vertex AI Agent Engine (the managed deployment option) has its own pricing, but the framework itself is free.
Can Google ADK use models other than Gemini?
Yes. While ADK is optimized for Gemini, it supports any model through LiteLLM integration, including Claude, GPT-4, Mistral, Llama, and models hosted on Vertex AI Model Garden. You are not locked into Google’s models.
How does Google ADK compare to LangGraph?
ADK uses hierarchy-based agent orchestration with native MCP and A2A support, while LangGraph uses graph-based state machines with more fine-grained control over execution flow. ADK is easier to get started with; LangGraph gives you more control over complex conditional workflows. ADK supports four languages; LangGraph supports Python and JavaScript.
Does Google ADK support MCP and A2A protocols?
Yes, and this is ADK’s key differentiator. Both MCP (for agent-to-tool communication) and A2A (for agent-to-agent communication) are built into the core framework, not added via external plugins. Any MCP server and any A2A-compatible agent can integrate with ADK out of the box.
