
Google ADK v2.0.0a1, released on March 18, 2026, introduces a graph-based workflow runtime that fundamentally changes how you build deterministic agent pipelines. The new Workflow class lets you define execution graphs with conditional routing, parallel fan-out/fan-in, loops, retry mechanisms, and human-in-the-loop steps. If you picked LangGraph over ADK specifically because ADK lacked fine-grained workflow control, that calculus just shifted.

This is not a minor version bump. ADK v1.x gave you three rigid workflow types: SequentialAgent, ParallelAgent, and LoopAgent. Useful, but limiting. V2’s Workflow class replaces all three with a single, composable graph abstraction that handles everything from simple pipelines to complex conditional branching with nested sub-workflows.

Related: Google ADK: The Agent Framework with Native MCP and A2A

The Workflow Class: Graphs, Not Just Sequences

The core of ADK v2 is the Workflow class. You define nodes (agents, functions, or tools) and connect them with edges that describe the execution graph. Here is a simple sequential pipeline:

from google.adk.workflow import Workflow

root_agent = Workflow(
    name="root_agent",
    edges=[
        ("START", city_generator_agent, lookup_time_function,
         city_report_agent, completed_message_function)
    ],
)

That looks like the old SequentialAgent, and for simple cases it works the same way. The difference shows up when you need conditional routing. Instead of an LLM deciding which agent runs next, you define explicit routes in code:

from google.adk.workflow import Event  # Event's exact import path is assumed here

def router(node_input: str):
    return Event(route="RUN_TASK_C")

root_agent = Workflow(
    name="routing_workflow",
    edges=[
        ("START", task_A_node, router),
        (router, {
            "RUN_TASK_B": task_B_node,
            "RUN_TASK_C": task_C_node,
        }),
    ],
)

The router function returns an Event with a route value. The Workflow class matches that value against the dictionary of downstream nodes and executes the right one. No LLM inference needed for the routing decision, which means faster execution and deterministic behavior.

This is the same pattern LangGraph uses with conditional edges, but the API is arguably cleaner. Where LangGraph requires you to define a StateGraph, add nodes, add edges with conditions, and compile, ADK does it in a single edges declaration.
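Stripped of framework specifics, the conditional-edge idea reduces to returning a route key and dispatching on it with a plain dictionary. Here is a minimal framework-free sketch of that pattern (all names are illustrative, not ADK APIs):

```python
# Plain-Python sketch of route-based dispatch: a router returns a route
# key, and the "workflow" looks up the downstream node in a dict.

def task_a(data: str) -> str:
    return data.upper()

def task_b(data: str) -> str:
    return f"B processed {data}"

def task_c(data: str) -> str:
    return f"C processed {data}"

def router(data: str) -> str:
    # Deterministic routing decision: no LLM call involved.
    return "RUN_TASK_C" if len(data) > 3 else "RUN_TASK_B"

ROUTES = {"RUN_TASK_B": task_b, "RUN_TASK_C": task_c}

def run_pipeline(user_input: str) -> str:
    intermediate = task_a(user_input)
    route = router(intermediate)
    return ROUTES[route](intermediate)

print(run_pipeline("hello"))  # len("HELLO") > 3, so task_c runs
```

Because the routing decision is ordinary code, it is testable and reproducible in a way an LLM-driven handoff is not.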

Fan-Out/Fan-In: The JoinNode Pattern

Parallel execution in ADK v1 meant using ParallelAgent, which ran all sub-agents and returned when all completed. V2 introduces JoinNode for proper fan-out/fan-in patterns within a graph:

from google.adk.workflow import JoinNode, Workflow

my_join_node = JoinNode(name="my_join_node")

root_agent = Workflow(
    name="parallel_workflow",
    edges=[
        ("START", parallel_task_A, my_join_node),
        ("START", parallel_task_B, my_join_node),
        ("START", parallel_task_C, my_join_node),
        (my_join_node, final_task_D),
    ],
)

Three tasks start from START, run concurrently, and JoinNode waits for all of them to complete before final_task_D executes. You can combine this with conditional routing to create sophisticated pipelines: fan out to three analysis agents, join results, route to different processors based on findings.

One constraint to know: all parallel nodes must produce Event output or the workflow stalls. And you cannot run multiple interactive chat sessions in parallel within one agent session. For batch processing and analysis pipelines, though, this pattern is exactly what was missing from ADK v1.
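The fan-out/fan-in shape maps directly onto standard asyncio primitives, which is a useful mental model for what JoinNode does. A framework-free sketch (names are illustrative; real branches would be agent calls):

```python
import asyncio

# Framework-free sketch of fan-out/fan-in: three branches run
# concurrently, a join point gathers all results, then a final
# step consumes the combined output.

async def parallel_task(label: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for real agent work
    return f"{label} done"

async def final_task(results: list[str]) -> str:
    return " | ".join(results)

async def workflow() -> str:
    # Fan-out: start A, B, and C from the same point;
    # gather() acts as the join, waiting for all three.
    results = await asyncio.gather(
        parallel_task("A", 0.01),
        parallel_task("B", 0.02),
        parallel_task("C", 0.01),
    )
    # Fan-in complete: only now does the final node run.
    return await final_task(list(results))

print(asyncio.run(workflow()))
```

Note that `asyncio.gather` preserves argument order in its results regardless of which branch finishes first, which is the same deterministic property you want from a join node.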

Dynamic Workflows: When Graphs Are Not Enough

Static graph definitions handle most cases, but sometimes you need the execution flow itself to be determined at runtime. ADK v2’s dynamic workflows use the @node decorator and ctx.run_node() to build execution sequences programmatically:

from google.adk.workflow import node, Workflow, Context

@node(rerun_on_resume=True)
async def editorial_workflow(ctx: Context, user_request: str):
    raw_draft = await ctx.run_node(draft_agent, user_request)
    formatted_text = await ctx.run_node(format_function_node, raw_draft)
    return formatted_text

root_agent = Workflow(
    name="root_agent",
    edges=[("START", editorial_workflow)]
)

The key difference from static graphs: you use standard Python control flow. Loops become while loops. Conditionals become if statements. Parallel execution uses asyncio.gather(). This is closer to how you would write a regular async Python program, which means less framework-specific learning.

ADK v2 also adds automatic checkpointing to dynamic workflows. If a workflow pauses (for human input, for example) and later resumes, completed nodes are skipped automatically. You do not need to build resume logic yourself.
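The resume-with-skip behavior can be approximated with a results cache keyed by node name: on resume, completed nodes return their stored output instead of re-executing. A toy model of the idea (this mimics the behavior, not ADK's internals):

```python
import asyncio

# Toy model of checkpointed resume: completed node results live in a
# dict; a second run skips finished nodes and reuses their output.

class ResumeContext:
    def __init__(self) -> None:
        self.checkpoint: dict[str, str] = {}
        self.executions = 0  # counts real (non-skipped) node runs

    async def run_node(self, name: str, fn) -> str:
        if name in self.checkpoint:
            return self.checkpoint[name]  # resume path: skip the work
        self.executions += 1
        result = await fn()
        self.checkpoint[name] = result
        return result

async def draft() -> str:
    return "raw draft"

async def fmt() -> str:
    return "formatted draft"

async def editorial_workflow(ctx: ResumeContext) -> str:
    await ctx.run_node("draft", draft)
    return await ctx.run_node("format", fmt)

ctx = ResumeContext()
asyncio.run(editorial_workflow(ctx))  # first run: both nodes execute
asyncio.run(editorial_workflow(ctx))  # "resume": both nodes skipped
print(ctx.executions)  # 2, not 4
```

The framework presumably does something more robust (persisting checkpoints across process restarts), but the skip-on-resume contract is the same.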

Here is a concrete example of an iterative refinement loop:

@node
async def code_workflow(ctx: Context):
    code = await ctx.run_node(coder_agent)
    check_resp = await ctx.run_node(compile_lint_check, code)

    while check_resp.findings:
        yield Event(state={"code": code, "findings": check_resp.findings})
        code = await ctx.run_node(fixer_agent)
        check_resp = await ctx.run_node(compile_lint_check, code)

    # `return code` would be a syntax error inside an async generator,
    # so emit the finished code as a final Event instead.
    yield Event(state={"code": code})

Write code, check it, fix issues, repeat. The while loop runs until the linter is satisfied. Try expressing that in a static graph definition and you will appreciate the flexibility.
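With stubbed stand-ins for the coder, checker, and fixer, the same loop is an ordinary async function. A self-contained sketch (all agents here are hypothetical stubs, not ADK calls):

```python
import asyncio

# Self-contained sketch of the write-check-fix loop with stubbed agents.
# The checker reports a finding until the fixer has repaired the code.

async def coder_agent() -> str:
    return "def f():\n  return1"  # deliberately broken first draft

async def compile_lint_check(code: str) -> list[str]:
    return ["missing space after return"] if "return1" in code else []

async def fixer_agent(code: str, findings: list[str]) -> str:
    return code.replace("return1", "return 1")

async def code_workflow() -> str:
    code = await coder_agent()
    findings = await compile_lint_check(code)
    while findings:  # loop until the linter is satisfied
        code = await fixer_agent(code, findings)
        findings = await compile_lint_check(code)
    return code

print(asyncio.run(code_workflow()))
```

The loop's exit condition lives in plain Python, so termination logic (max iterations, timeouts) is a one-line change rather than a graph redesign.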

Related: Multi-Agent Orchestration: How AI Agents Work Together

Task API: Structured Agent Delegation

The second major feature in v2 is the Task API, which formalizes how agents delegate work to sub-agents. ADK v1 used LLM-driven routing where a parent agent read child descriptions and decided who handles what. V2 adds three explicit delegation modes:

Chat mode (default): The sub-agent gets full user interaction and must explicitly transfer control back to the parent. This is how ADK v1 already worked.

Task mode: The sub-agent can ask clarification questions but automatically returns to the parent when the task completes. Task mode agents operate in isolated session branches, so they only see their own events.

Single-turn mode: The sub-agent processes one input, produces one output, and returns. No user interaction allowed. This mode supports parallel execution because multiple single-turn agents can run simultaneously without interfering with each other.

from google.adk.workflow.agents.llm_agent import Agent

weather_agent = Agent(
    name="weather_checker",
    mode="single_turn",
    tools=[get_weather, user_info, geocode_address],
)

flight_agent = Agent(
    name="flight_booker",
    mode="task",
    tools=[search_flights, book_flight],
)

root = Agent(
    name="travel_planner",
    sub_agents=[weather_agent, flight_agent],
)

The practical value: you can now run a weather check and a hotel search in parallel (both single-turn), then hand off to a flight booking agent (task mode) that can ask the user for date preferences. In ADK v1, the orchestrator had to handle all of this sequentially through LLM-driven routing.

One limitation: task mode agents must be leaf agents with no sub-agents of their own. You cannot nest task delegation. For complex hierarchies, you would use a Workflow with nested sub-workflows instead.

ADK v2 vs. LangGraph: Who Leads Now?

Our existing ADK overview listed “no visual graph editor” and “less fine-grained control over execution flow” as ADK’s weaknesses compared to LangGraph. The v2 alpha directly addresses the control flow gap. Here is how they compare now:

Where ADK v2 matches LangGraph:

  • Graph-based workflow definition with conditional routing
  • Fan-out/fan-in parallel execution patterns
  • Human-in-the-loop support with workflow pause/resume
  • State management across workflow nodes
  • Loop and retry mechanisms

Where ADK v2 arguably beats LangGraph:

  • Dynamic workflows using native Python async/await instead of graph-specific DSL
  • Built-in checkpointing that skips completed nodes on resume
  • Task API provides structured delegation modes (chat/task/single-turn) that LangGraph does not have natively
  • Native MCP and A2A support still unmatched by any other framework
  • Four language SDKs (Python, TypeScript, Go, Java) vs. LangGraph’s two (Python, JavaScript)

Where LangGraph still leads:

  • Production maturity. LangGraph has been in production at scale for over two years. ADK v2 is an alpha release.
  • Observability via LangSmith remains the most comprehensive agent debugging tool in the ecosystem
  • Larger community with more tutorials, examples, and Stack Overflow answers
  • Framework-agnostic by design. ADK still leans toward the Google ecosystem for production deployment, even though it runs anywhere

The honest assessment: if you are starting a new project today and want graph-based agent workflows, ADK v2 is worth evaluating alongside LangGraph. If you are running agents in production, LangGraph remains the safer bet until ADK v2 reaches stable release.

Related: AI Agent Frameworks Compared: LangGraph, CrewAI, AutoGen

Migration from ADK v1: What Changes

If you are already using ADK v1, the migration path is incremental. V2 is an alpha release (pip install google-adk==2.0.0a1), and the existing v1 agent types (LlmAgent, SequentialAgent, ParallelAgent, LoopAgent) still work. You do not need to rewrite everything at once.

The practical migration strategy:

  1. Keep existing agents as-is. Your LlmAgent definitions with MCP tools and A2A connections do not change.
  2. Replace SequentialAgent with Workflow edges for pipelines that need conditional routing or fan-out.
  3. Replace ParallelAgent with JoinNode patterns when you need to combine results from parallel branches before proceeding.
  4. Add Task API modes to sub-agents that currently rely on LLM-driven routing for structured delegation.
  5. Use dynamic workflows for iterative processes that were awkward to express with the old LoopAgent.

Since this is an alpha release, expect breaking changes before the stable v2.0.0 ships. Pin your dependency version and test thoroughly. For production systems, stay on v1.27.x (the latest stable as of March 2026) and experiment with v2 in development environments.

What This Means for the Framework Landscape

Google shipping a graph-based workflow runtime is a statement: the hierarchy-only approach was a deliberate design choice, not a limitation. ADK v1 bet that most agent systems are better modeled as hierarchies than graphs. ADK v2 says “but we will give you graphs too, for when you need them.”

This puts pressure on every other framework. LangGraph’s main differentiator was workflow control, and ADK v2 narrows that gap while keeping its own advantages (native MCP/A2A, four language SDKs, Vertex AI integration). CrewAI and the OpenAI Agents SDK still lack comparable graph-based workflow primitives entirely.

For teams choosing a framework in 2026, the decision matrix just got more complicated. ADK v2 is the first framework that credibly covers all three orchestration styles: LLM-driven routing (for flexible tasks), graph-based workflows (for deterministic pipelines), and dynamic workflows (for runtime-determined execution). Whether that breadth translates to depth remains to be proven in production.

Frequently Asked Questions

What is new in Google ADK v2.0 alpha?

ADK v2.0.0a1 introduces two major features: a graph-based workflow runtime with routing, fan-out/fan-in, loops, retry, and human-in-the-loop support, and a Task API for structured agent-to-agent delegation with chat, task, and single-turn modes.

How does Google ADK v2 compare to LangGraph?

ADK v2 now matches LangGraph on graph-based workflow control with conditional routing and parallel execution. ADK adds native MCP/A2A support and dynamic workflows using native Python async/await. LangGraph still leads in production maturity, observability via LangSmith, and community size.

Can I use ADK v2.0 alpha in production?

ADK v2.0.0a1 is an alpha release and not recommended for production use. Expect breaking changes before the stable v2.0.0 ships. For production systems, stay on ADK v1.27.x and experiment with v2 in development environments.

How do I migrate from ADK v1 to v2?

Migration is incremental. Existing LlmAgent definitions still work in v2. You can gradually replace SequentialAgent with Workflow edges, ParallelAgent with JoinNode patterns, and add Task API modes to sub-agents. Install the alpha with pip install google-adk==2.0.0a1.

What is the ADK Workflow class?

The Workflow class is ADK v2’s graph-based execution engine. You define nodes (agents, functions, or tools) and connect them with edges to create execution graphs. It supports sequential routes, conditional branching via route values, and parallel fan-out/fan-in via JoinNode.