Three lines of Python. That is how much code it takes to build a working AI agent with AWS Strands Agents. You define a prompt, hand the SDK a list of tools, and the LLM figures out the rest: which tools to call, in what order, and when to stop. No graphs, no state machines, no orchestration boilerplate. Amazon calls this “model-driven” agent development, and it is already running in production inside Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer.
Since its open-source release in May 2025, Strands has crossed 14 million PyPI downloads, hit 5,400 GitHub stars, and attracted contributions from Anthropic, Meta, PwC, and Accenture. The 1.0 release added multi-agent orchestration primitives. The latest version (1.30.0 as of March 2026) ships with 13+ model providers and 20+ built-in tools.
What “Model-Driven” Actually Means
Most agent frameworks ask you to define the workflow. LangGraph has you draw directed graphs with nodes, edges, and conditional routing. CrewAI wants you to specify crew hierarchies and task dependencies. Strands inverts that relationship: you describe the agent’s role and capabilities, and the model decides what to do.
The agent loop is simple. The LLM receives the system prompt, conversation history, and descriptions of available tools. It reasons about the task, optionally selects tools to call, and produces a response. Strands executes the tool calls, feeds the results back into the model’s context, and repeats. The loop continues until the model produces a final answer without requesting more tool calls.
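Stripped of framework details, that loop is just a while loop around the model. Here is a minimal sketch in plain Python, with a scripted stub standing in for the LLM; none of these names (`run_agent`, `stub_model`, the message dicts) are Strands APIs, they only illustrate the control flow:

```python
# Minimal sketch of a model-driven agent loop. The "model" is a stub that
# scripts two turns; in a real agent the LLM makes these decisions.

def run_agent(model, tools, user_message):
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = model(messages, tools)       # model sees history + tool specs
        messages.append(reply)
        if not reply.get("tool_calls"):      # no tool requests -> final answer
            return reply["content"]
        for call in reply["tool_calls"]:     # execute each requested tool
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})

# Stub model: first turn requests the calculator tool, second turn answers.
def stub_model(messages, tools):
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "",
                "tool_calls": [{"name": "add", "args": {"a": 2, "b": 3}}]}
    return {"role": "assistant", "content": "The sum is 5.", "tool_calls": []}

answer = run_agent(stub_model, {"add": lambda a, b: a + b}, "What is 2 + 3?")
print(answer)  # The sum is 5.
```

With Strands, the loop, message bookkeeping, and tool dispatch are all handled for you, as in the example below.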
from strands import Agent
from strands_tools import calculator, http_request
agent = Agent(tools=[calculator, http_request])
agent("What is the current BTC price divided by the price of gold per ounce?")
That is a complete, working agent. The model decides it needs to fetch prices (using http_request), do math (using calculator), and compose an answer. You did not specify any of that workflow.
Custom Tools in Four Lines
Defining your own tools uses a decorator pattern:
from strands import Agent, tool
@tool
def letter_counter(word: str, letter: str) -> int:
"""Count occurrences of a specific letter in a word."""
return word.lower().count(letter.lower())
agent = Agent(tools=[letter_counter])
agent("How many r's in strawberry?")
The @tool decorator extracts the function signature and docstring, converts them into the tool schema the model needs, and handles invocation automatically. No JSON schemas to write by hand, no tool registration boilerplate.
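As an illustration of the mechanics (not Strands' actual implementation; `tool_schema` and the type mapping below are hypothetical), a signature-plus-docstring extraction can be sketched with the standard `inspect` module:

```python
import inspect
from typing import get_type_hints

# Hypothetical sketch of what a decorator like @tool must do under the
# hood: derive a model-facing schema from signature and docstring.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    hints = get_type_hints(fn)
    params = {
        name: {"type": PY_TO_JSON.get(hints.get(name), "string")}
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": params,
    }

def letter_counter(word: str, letter: str) -> int:
    """Count occurrences of a specific letter in a word."""
    return word.lower().count(letter.lower())

print(tool_schema(letter_counter))
# {'name': 'letter_counter', 'description': 'Count occurrences of a specific
#  letter in a word.', 'parameters': {'word': {'type': 'string'},
#  'letter': {'type': 'string'}}}
```

The model only ever sees this schema, which is why (as discussed later) the quality of your docstrings directly affects tool-selection accuracy.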
Production Track Record Inside AWS
Strands was not built in a lab and then released. It was extracted from production systems already running inside AWS. That matters because it means the framework’s design decisions were shaped by real operational constraints, not hypothetical use cases.
Amazon Q Developer is the flagship. AWS’s AI coding assistant uses Strands for its agent capabilities. Before Strands, Q Developer teams measured new agent deployments in months. After switching, they shipped new agents in days to weeks. The Q CLI was reportedly built in three weeks using Strands.
VPC Reachability Analyzer uses Strands-powered agents for network connectivity investigations. AWS reports that investigation time dropped from 30 minutes to 45 seconds. That is a 40x improvement, and it speaks to the efficiency of letting the model drive tool selection rather than hardcoding diagnostic workflows.
AWS Glue integrated Strands for its AI agent features in data integration pipelines, though AWS has shared fewer specifics about this implementation.
An external organization (unnamed in AWS’s blog posts) reported building a production-ready agentic solution in 10 days using Strands, compared to months with traditional approaches.
Multi-Agent Orchestration Since 1.0
The July 2025 release of Strands 1.0 introduced four multi-agent primitives that cover most production patterns:
Agents-as-Tools is the simplest pattern. You create specialized agents and register them as tools for an orchestrator agent. The orchestrator decides when to delegate work to a specialist. This is similar to OpenAI’s Agent-as-Tool pattern and works well for hierarchical task decomposition.
Handoffs let one agent transfer control to another agent (or a human) with full context preservation. The receiving agent picks up where the first left off. Useful for escalation workflows where a general-purpose agent hits its limits and hands off to a domain expert.
Swarms allow multiple agents to coordinate autonomously through shared working memory. Each agent reads from and writes to a shared state, deciding independently when to act. This is the most flexible pattern but also the hardest to debug.
Graphs bring back explicit workflow control when you need it. You define nodes, edges, conditional routing, and quality gates. If you need deterministic execution paths for compliance or auditing, this is your escape hatch from the model-driven default.
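The Agents-as-Tools idea can be illustrated without the SDK. In the sketch below, plain functions stand in for LLM-backed agents, and the routing that a real orchestrator model would decide at runtime is hardcoded; all names here are hypothetical (in Strands you would wrap a real Agent call in a @tool-decorated function and pass it to an orchestrator Agent):

```python
# Sketch of the Agents-as-Tools pattern with plain functions standing in
# for LLM-backed specialist agents.

def research_agent(query: str) -> str:
    # Stand-in for a specialized research agent.
    return f"research findings for: {query}"

def writing_agent(notes: str) -> str:
    # Stand-in for a specialized writing agent.
    return f"polished summary of [{notes}]"

def orchestrator(task: str) -> str:
    # A real orchestrator agent would decide at runtime whether and when
    # to delegate; here the delegation order is hardcoded for clarity.
    notes = research_agent(task)
    return writing_agent(notes)

print(orchestrator("Strands adoption"))
# polished summary of [research findings for: Strands adoption]
```

The point of the pattern is that each specialist keeps its own prompt, tools, and context window, while the orchestrator only sees the specialists' tool descriptions.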
How Strands Compares to the Competition
The agent framework space is crowded. Here is where Strands fits relative to the alternatives that matter.
Strands vs. LangGraph
LangGraph gives you fine-grained control through explicit graph definitions. Every node is a function, every edge is a routing decision you make. This is powerful for complex, deterministic workflows but comes with a steeper learning curve and more boilerplate.
Strands optimizes for speed to first working agent. You trade explicit control for simplicity. For 80% of agent use cases where the model’s judgment is good enough, Strands gets you there faster. For the other 20% where you need guaranteed execution paths, LangGraph’s graph model is harder to beat.
Strands vs. OpenAI Agents SDK
Both frameworks share the “simple API, model handles orchestration” philosophy. The critical difference: OpenAI’s SDK is tightly coupled to OpenAI models. Strands supports 13+ providers including Bedrock, Anthropic, OpenAI, Ollama, LiteLLM, Google Gemini, and local inference through Llama.cpp.
OpenAI’s SDK has built-in guardrails validation. Strands has native MCP (Model Context Protocol) support, giving it access to thousands of external tools through a standardized interface. Both support agent-as-tool and handoff patterns.
Strands vs. CrewAI
CrewAI’s “agents and crews” metaphor is beginner-friendly, but the framework has a paid enterprise control plane. Strands is Apache 2.0 with no proprietary components. CrewAI is more cloud-agnostic; Strands has deeper AWS integration (Bedrock, Lambda, EKS deployment support, AgentCore compatibility).
The Model Provider Ecosystem
Strands ships with built-in support for a wide range of model providers, which is its strongest competitive advantage over the OpenAI Agents SDK:
- AWS Bedrock (Claude, Llama, Mistral, and other models hosted on Bedrock)
- Anthropic (direct API, contributed by Anthropic themselves)
- OpenAI (GPT-4o, o3, etc.)
- Meta Llama API (contributed by Meta)
- Google Gemini, Cohere, Mistral, xAI
- Ollama, LiteLLM, vLLM, Llama.cpp (local/self-hosted)
- NVIDIA NIM, SageMaker, SGLang (enterprise/research)
Switching between providers is a configuration change, not a code rewrite:
from strands import Agent
from strands.models import BedrockModel
model = BedrockModel(
model_id="anthropic.claude-sonnet-4-20250514-v1:0",
region_name="us-west-2",
temperature=0.3,
)
agent = Agent(model=model)
Native MCP integration means any MCP-compatible tool server works out of the box. If you are already using MCP tool servers from other projects, you can plug them directly into a Strands agent.
Where Strands Falls Short
No framework is perfect. Here is what you should know before committing.
Model quality dependency. The model-driven approach means your agent is only as good as the LLM powering it. Weaker models produce agents that pick the wrong tools, loop unnecessarily, or fail to complete multi-step tasks. Strands works best with Claude 3.5 Sonnet or better. Using smaller models for cost savings often means worse agent behavior, and you have limited ability to compensate through framework-level controls.
AWS credential friction. Despite being model-agnostic, the default configuration assumes AWS credentials. Developers outside the AWS ecosystem report friction getting started, since the documentation leads with Bedrock setup, and switching to a different provider requires knowing where to find the model provider configuration options.
Tool description sensitivity. When two tools have overlapping descriptions, the model can consistently pick the wrong one. You need to write precise, non-overlapping tool descriptions. This is not unique to Strands (it affects all agent frameworks), but the model-driven approach gives you fewer levers to fix it compared to explicit routing in LangGraph.
No built-in learning. Agents cannot learn from past executions. Every session starts fresh. There is a GitHub feature request (#923) for continuous learning, but it is not on the near-term roadmap.
Younger ecosystem. At under a year old, Strands has a smaller community and less third-party tooling than LangGraph (which has years of ecosystem development) and even CrewAI. The experimental features (Steering, bidirectional streaming) have APIs subject to change.
Who Should Use Strands
Strands is the right choice if you are building agents on AWS infrastructure, want model-agnostic flexibility with minimal boilerplate, or need to get from prototype to production quickly. The Q Developer team’s experience (months to days) is representative of the speed advantage.
It is the wrong choice if you need deterministic, auditable execution paths for every request (use LangGraph), if you want a managed enterprise platform with a GUI (look at CrewAI Enterprise or Bedrock AgentCore), or if your agents require learning from past sessions.
For teams already using other agent frameworks, the migration path is straightforward: Strands can wrap existing tools and most patterns translate directly. The Apache 2.0 license and active contribution from Anthropic and Meta suggest the project has staying power beyond a single-company effort.
Frequently Asked Questions
What is AWS Strands Agents SDK?
Strands Agents is an open-source, Apache 2.0 licensed SDK from AWS for building AI agents. It uses a model-driven approach where developers define a prompt and a list of tools, and the LLM handles planning, tool selection, and execution autonomously. It is used in production by Amazon Q Developer, AWS Glue, and VPC Reachability Analyzer.
How does Strands Agents compare to LangGraph?
LangGraph requires explicit graph definitions with nodes and edges for agent workflows, giving more deterministic control. Strands uses a model-driven approach where the LLM decides the workflow at runtime. Strands is faster to prototype with but offers less explicit control. LangGraph is better for complex, deterministic workflows requiring guaranteed execution paths.
Is Strands Agents only for AWS?
No. Despite being built by AWS, Strands Agents supports 13+ model providers including Anthropic, OpenAI, Google Gemini, Meta Llama, Ollama, and local inference options like Llama.cpp. It defaults to AWS Bedrock but can be configured to use any supported provider. It can also be deployed outside AWS on Docker, Kubernetes, or any server.
What programming languages does Strands Agents support?
Strands Agents has official SDKs for Python and TypeScript. The Python SDK has over 14 million downloads on PyPI and is the more mature of the two, while the TypeScript SDK has around 526 GitHub stars and is growing.
Can Strands Agents handle multi-agent orchestration?
Yes. Since version 1.0 (July 2025), Strands supports four multi-agent patterns: Agents-as-Tools (specialized agents as callable tools), Handoffs (context-preserving transfers between agents), Swarms (autonomous coordination via shared memory), and Graphs (explicit workflow definitions with conditional routing).
