
Amazon Bedrock AgentCore is the managed platform AWS built so enterprises can deploy AI agents without building their own runtime, memory layer, tool gateway, and identity system from scratch. It became generally available in October 2025, and it is the most comprehensive agent infrastructure offering from any hyperscaler right now. If you are running agents on AWS (or thinking about it), AgentCore is the service that replaces the “we’ll figure out deployment later” phase of your agent project.

The pitch is straightforward: you bring your agent code, written in any framework (LangGraph, CrewAI, Strands Agents, OpenAI Agents SDK, whatever), and AgentCore handles the runtime, tool access, memory, auth, and monitoring. No ECS clusters to configure, no custom memory stores to maintain, no OAuth flows to hand-roll for every third-party integration.

Related: AI Agent Frameworks Compared: LangGraph, CrewAI, AutoGen

The Five Pillars of AgentCore

AgentCore is not a single service. It is five tightly integrated services that can be used independently or together. Understanding what each one does is the fastest way to decide whether the platform fits your stack.

Runtime: Serverless Agent Execution

AgentCore Runtime is a serverless compute environment purpose-built for AI agents. Unlike Lambda (which caps execution at 15 minutes and imposes cold-start latency), Runtime supports long-running agent sessions, true session isolation, and multimodal workloads. You deploy your agent code, and AWS handles scaling, security boundaries, and resource allocation.

The billing model is consumption-based: $0.0895 per vCPU-hour, calculated per second based on actual CPU usage and peak memory. I/O wait and idle time are free. For agents that spend most of their time waiting on LLM responses or API calls, this matters: you are not paying for the seconds your agent is blocked on a network request.
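To make that billing model concrete, here is a quick sketch of the arithmetic, using the published $0.0895 per vCPU-hour rate. The session profile (how many seconds are active versus blocked) is an invented example, not a measured workload:

```python
# Illustrative cost math for AgentCore Runtime's consumption billing.
# The rate comes from the published pricing; the session profile below
# (active vs. idle seconds) is a made-up example for illustration.

VCPU_HOUR_RATE = 0.0895  # USD per vCPU-hour, billed per second

def runtime_cpu_cost(active_cpu_seconds: float, vcpus: float = 1.0) -> float:
    """Cost of actual CPU time. I/O wait and idle seconds are free,
    so they simply never enter this calculation."""
    return (active_cpu_seconds / 3600.0) * vcpus * VCPU_HOUR_RATE

# A 10-minute agent session that spends 9 minutes blocked on LLM calls
# and only 60 seconds actually burning CPU:
cost = runtime_cpu_cost(active_cpu_seconds=60, vcpus=1.0)
print(f"${cost:.6f}")  # roughly $0.0015 — you pay for 1 CPU-minute, not 10
```

The point of the sketch: for an agent that is mostly waiting on LLM responses, the bill tracks the one minute of real CPU work, not the ten minutes of wall-clock session time.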

Runtime works with any Python agent framework. AWS’s own samples repository includes examples for LangGraph, CrewAI, LlamaIndex, and their own Strands Agents SDK. You can also deploy tools as standalone Runtime endpoints, so other agents or services can call them via API.

Gateway: MCP-Compatible Tool Access

AgentCore Gateway solves the problem that every enterprise agent team hits within the first month: connecting agents to internal tools and APIs securely. Gateway is a managed Model Context Protocol (MCP) server that converts your existing APIs, Lambda functions, and services into MCP-compatible tools that any agent can discover and invoke.

The key capability is credential management. Gateway handles OAuth ingress authorization and secure egress credential exchange, which means your agents can authenticate to Slack, Salesforce, Jira, or any OAuth-based service without you building a custom auth layer for each one. At $0.005 per 1,000 tool invocations, the cost is nearly negligible.
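For context on what “MCP-compatible” means here: an MCP tool is essentially a name, a description, and a JSON Schema for its inputs, and that descriptor is what agents discover through Gateway. The ticket-lookup tool below is a hypothetical example, not a real Gateway target:

```python
import json

# Sketch of the kind of MCP tool descriptor an agent would discover via
# Gateway. "name", "description", and "inputSchema" are the fields the
# MCP spec defines for tools; the Jira-style tool itself is invented.
mcp_tool = {
    "name": "lookup_jira_ticket",
    "description": "Fetch a Jira ticket by key (e.g. PROJ-123).",
    "inputSchema": {
        "type": "object",
        "properties": {
            "ticket_key": {"type": "string"},
        },
        "required": ["ticket_key"],
    },
}

# Gateway's job: when an agent invokes this tool, translate the call into
# a request against your real API with credentials injected server-side,
# so the agent never handles the OAuth token itself.
print(json.dumps(mcp_tool, indent=2))
```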

Related: MCP and A2A: The Protocols Making AI Agents Talk

Memory: Short-Term and Long-Term Context

AgentCore Memory provides two types of persistence. Short-term memory handles multi-turn conversation state within a session. Long-term memory persists across sessions, enabling agents to remember user preferences, learn from past interactions, and share context across multiple agents.

Pricing is $0.25 per 1,000 short-term memory events. For most workloads, that is a rounding error. The real value is not having to build and maintain a separate memory infrastructure. Teams that have tried rolling their own agent memory with Redis, Pinecone, or Postgres know how quickly the complexity spirals once you need cross-session retrieval, memory compaction, and multi-agent memory sharing.
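A toy model makes the split between the two tiers clearer: short-term memory is keyed by session and discarded with it, while long-term memory is keyed by user (or agent) and survives session turnover. This is an illustrative stand-in, not the AgentCore Memory API:

```python
from collections import defaultdict

class ToyAgentMemory:
    """Illustrative stand-in for the two memory tiers: short-term state
    scoped to one session, long-term facts persisted across sessions."""

    def __init__(self):
        self._short_term = defaultdict(list)   # session_id -> conversation turns
        self._long_term = defaultdict(dict)    # user_id -> remembered facts

    def record_turn(self, session_id: str, turn: str) -> None:
        self._short_term[session_id].append(turn)

    def end_session(self, session_id: str) -> None:
        # Short-term context dies with the session; anything promoted to
        # long-term memory has already been stored separately.
        self._short_term.pop(session_id, None)

    def remember(self, user_id: str, key: str, value: str) -> None:
        self._long_term[user_id][key] = value

    def recall(self, user_id: str, key: str):
        return self._long_term[user_id].get(key)

mem = ToyAgentMemory()
mem.record_turn("sess-1", "user: book me a window seat")
mem.remember("user-42", "seat_preference", "window")
mem.end_session("sess-1")
# The session context is gone, but the preference survives:
print(mem.recall("user-42", "seat_preference"))  # window
```

Even this toy version hints at the complexity the managed service absorbs: the moment you add cross-session retrieval, compaction, and multi-agent sharing on top of it, you own a distributed system.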

Identity: Agent-Level Access Control

AgentCore Identity gives each agent its own identity and access management layer. Agents can authenticate to AWS services and third-party applications (Slack, Zoom, GitHub) using standard identity providers like Okta, Entra, or Amazon Cognito. This is the piece most in-house agent platforms skip entirely, and it is the piece that blocks production deployment for security-conscious organizations.

Instead of hardcoding API keys or passing credentials through environment variables, agents get proper IAM-style permissions. You define what each agent can access, and Identity enforces it.
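The idea reduces to an allow-list checked on every invocation: each agent identity carries a set of permitted actions, and anything outside that set is rejected before the tool runs. The sketch below mimics the concept, not the actual AgentCore Identity API; the agent names and action strings are invented:

```python
# Toy model of agent-scoped access control: each agent identity carries
# an allow-list, and every tool invocation is checked against it.
# Agent IDs and action names are hypothetical examples.

AGENT_PERMISSIONS = {
    "support-agent": {"jira:read", "slack:post"},
    "billing-agent": {"stripe:read"},
}

def authorize(agent_id: str, action: str) -> bool:
    return action in AGENT_PERMISSIONS.get(agent_id, set())

def invoke_tool(agent_id: str, action: str) -> str:
    if not authorize(agent_id, action):
        raise PermissionError(f"{agent_id} is not allowed to perform {action}")
    return f"{agent_id} performed {action}"

print(invoke_tool("support-agent", "jira:read"))   # allowed
# invoke_tool("support-agent", "stripe:read")      # would raise PermissionError
```

The difference with a managed identity layer is that the allow-list, the credential exchange, and the enforcement all live outside your agent code, so a compromised agent cannot simply read a broader key from its environment.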

Observability: Tracing Agent Execution

AgentCore Observability provides step-by-step visualization of agent execution with metadata tagging, custom scoring, trajectory inspection, and debugging filters. If you have worked with agent systems in production, you know that “the agent gave a wrong answer” is useless as a bug report. You need to see exactly which tool calls the agent made, what context it had at each step, and where the reasoning went wrong.
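A minimal trace recorder shows why step-level data beats “the agent gave a wrong answer”: every tool call is captured with its inputs, output, and timing, giving you a trajectory to inspect. This is an illustrative sketch, not the AgentCore Observability API; the tools and session ID are invented:

```python
import time
from dataclasses import dataclass, field

# Minimal illustration of step-level agent tracing: record every tool
# call with its inputs, output, and timing so a bad answer can be
# traced back to the exact step that went wrong.

@dataclass
class Step:
    tool: str
    inputs: dict
    output: str
    duration_ms: float

@dataclass
class Trace:
    session_id: str
    steps: list = field(default_factory=list)

    def record(self, tool, inputs, fn):
        start = time.perf_counter()
        output = fn(**inputs)
        self.steps.append(Step(tool, inputs, output,
                               (time.perf_counter() - start) * 1000))
        return output

trace = Trace("sess-1")
trace.record("search_kb", {"query": "refund policy"},
             lambda query: f"3 docs matched '{query}'")
trace.record("draft_reply", {"tone": "formal"},
             lambda tone: f"Drafted a {tone} reply")

for step in trace.steps:  # the trajectory you'd inspect when debugging
    print(f"{step.tool}: {step.output} ({step.duration_ms:.1f} ms)")
```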

Related: Agentic AI Observability: Why It Is the New Control Plane

How AgentCore Compares to Google ADK and OpenAI Agents SDK

The competitive landscape for agent platforms has three tiers: full infrastructure platforms (AgentCore, Azure AI Agent Service), framework-first offerings (Google ADK, OpenAI Agents SDK), and open-source stacks (LangGraph + custom infra).

Google ADK is framework-first. It gives you the SDK for building agents with deep Gemini integration, MCP and A2A support, and multi-agent routing. But it does not include a managed runtime, memory service, or identity layer comparable to AgentCore. You get the building blocks; you manage the infrastructure. For teams already on GCP with Vertex AI, ADK is the natural choice. For teams that need turnkey deployment infrastructure, it leaves gaps.

OpenAI Agents SDK is lightweight and model-flexible. It provides agent primitives (tools, handoffs, guardrails) with a clean Python API. But like ADK, it is a framework, not a platform. There is no managed runtime, no built-in memory persistence, no gateway. OpenAI’s hosted option (the Responses API, positioned as the Assistants API’s replacement) exists but is tightly coupled to OpenAI’s models.

AgentCore’s differentiation is that it is infrastructure, not framework. It does not care which SDK you use to build your agent. You can run a LangGraph agent, a CrewAI crew, a Google ADK agent, or even an OpenAI Agents SDK agent on AgentCore Runtime. The platform handles the operational concerns (scaling, auth, memory, monitoring) while you keep full control of the agent logic.

The tradeoff: this flexibility comes with AWS lock-in on the infrastructure layer. Your agent code is portable, but your deployment pipeline, memory stores, and identity configuration are not.

When AgentCore Makes Sense (and When It Doesn’t)

Use AgentCore when:

  • You are already on AWS and running multiple agents that need shared memory, tool access, and identity management
  • Your agents need to authenticate to third-party services (Gateway’s OAuth handling saves weeks of integration work)
  • You need production-grade observability and cannot justify building a custom tracing pipeline
  • Your security team requires agent-level IAM and session isolation before approving production deployment
  • You want to run agents from multiple frameworks without managing separate infrastructure for each

Skip AgentCore when:

  • You are running a single agent prototype. The overhead of configuring Runtime, Gateway, and Identity is not worth it for a proof-of-concept
  • You are all-in on GCP or Azure. Each cloud has its own agent platform story, and cross-cloud agent infrastructure adds unnecessary friction
  • Your agents are simple function-calling wrappers that do not need persistent memory, complex tool routing, or multi-agent coordination
  • Cost predictability is critical. Consumption-based pricing means your bill scales with agent activity, which can surprise teams used to fixed infrastructure costs

Real-World Adoption So Far

Since GA, AWS partners have been building on AgentCore across industries. Epsilon reported 30% faster campaign setup times, 20% more personalization, and 8 hours saved per team per week after deploying marketing agents on the platform. NVIDIA has published integration guides for running NeMo-based agents on AgentCore Runtime.

The platform also supports agent-to-agent protocol communication, which means agents deployed on AgentCore can coordinate with agents on other platforms using standardized protocols. For enterprises running mixed agent ecosystems (some on AWS, some on-prem, some on other clouds), this interoperability is the feature that unlocks multi-agent architectures without forcing everything onto a single platform.

Related: Why AI Agents Fail in Production: 7 Lessons from Real Deployments

Pricing Breakdown

AgentCore’s consumption-based model means you pay for what you use with no upfront commitments:

Service              Price
Runtime (CPU)        $0.0895 per vCPU-hour
Runtime (Memory)     Based on peak consumption
Gateway              $0.005 per 1,000 invocations
Short-term Memory    $0.25 per 1,000 events
Code Interpreter     Per-session pricing
Browser              Per-session pricing

For a team running 10 agents with moderate activity (a few hundred invocations per day each), expect monthly costs in the low hundreds of dollars for the AgentCore services themselves, before model inference costs. The pricing is competitive with rolling your own on ECS or EKS once you factor in the engineering time for memory, auth, and observability.
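A back-of-envelope calculation backs up that estimate, using the published rates. All of the usage numbers (invocations per day, active CPU seconds per run, memory events per run) are assumptions chosen to represent “moderate activity”:

```python
# Back-of-envelope check on the "low hundreds per month" claim, using the
# published AgentCore rates. The usage profile is assumed, not measured.

AGENTS = 10
INVOCATIONS_PER_AGENT_PER_DAY = 300     # assumed ("a few hundred")
DAYS = 30
CPU_SECONDS_PER_INVOCATION = 10         # assumed active CPU (idle is free)
MEMORY_EVENTS_PER_INVOCATION = 5        # assumed

invocations = AGENTS * INVOCATIONS_PER_AGENT_PER_DAY * DAYS  # 90,000/month

runtime = (invocations * CPU_SECONDS_PER_INVOCATION / 3600) * 0.0895  # $/vCPU-h
gateway = (invocations / 1000) * 0.005                                # $/1k calls
memory = (invocations * MEMORY_EVENTS_PER_INVOCATION / 1000) * 0.25   # $/1k events

total = runtime + gateway + memory
print(f"runtime=${runtime:.2f} gateway=${gateway:.2f} memory=${memory:.2f}")
print(f"total≈${total:.0f}/month before model inference")  # ≈ $135
```

Under these assumptions the total lands around $135 per month, with memory events dominating and Gateway nearly free, which is consistent with the “low hundreds” figure above. Model inference remains the larger line item.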

Full pricing details are on the AWS pricing page.

Frequently Asked Questions

What is Amazon Bedrock AgentCore?

Amazon Bedrock AgentCore is a managed AWS platform for deploying and operating AI agents at enterprise scale. It includes five core services: Runtime (serverless agent execution), Gateway (MCP-compatible tool access), Memory (short-term and long-term context), Identity (agent-level access control), and Observability (execution tracing and debugging). It works with any agent framework including LangGraph, CrewAI, and OpenAI Agents SDK.

How much does Amazon Bedrock AgentCore cost?

AgentCore uses consumption-based pricing with no upfront commitments. Runtime costs $0.0895 per vCPU-hour (billed per second, idle time is free), Gateway costs $0.005 per 1,000 tool invocations, and short-term Memory costs $0.25 per 1,000 events. Model inference costs are separate. For a team of 10 moderately active agents, expect AgentCore infrastructure costs in the low hundreds per month.

Does AgentCore work with LangGraph, CrewAI, or other open-source frameworks?

Yes. AgentCore is framework-agnostic. It supports any Python agent framework including LangGraph, CrewAI, LlamaIndex, Strands Agents, Google ADK, and OpenAI Agents SDK. You bring your agent code built with any framework, and AgentCore handles the infrastructure: runtime, memory, tool access, authentication, and monitoring.

How does AgentCore compare to Google ADK and OpenAI Agents SDK?

AgentCore is an infrastructure platform, while Google ADK and OpenAI Agents SDK are frameworks. ADK and the Agents SDK help you build agent logic but leave infrastructure (runtime, memory, identity, observability) to you. AgentCore provides the managed infrastructure and accepts agents built with any framework, including ADK and the Agents SDK. The tradeoff is AWS infrastructure lock-in in exchange for turnkey deployment and operations.

What is AgentCore Gateway and how does it relate to MCP?

AgentCore Gateway is a managed MCP (Model Context Protocol) server that converts existing APIs, Lambda functions, and services into MCP-compatible tools that agents can discover and invoke. It handles OAuth authorization and credential management, so agents can securely access third-party services like Slack, Salesforce, and Jira without custom auth code for each integration.