

# Amazon Bedrock AgentCore Memory examples
<a name="memory-examples"></a>

You can use AgentCore Memory with a variety of SDKs and agent frameworks.

**Topics**
+ [Scenario: A customer support AI agent using AgentCore Memory](memory-customer-scenario.md)
+ [Integrate AgentCore Memory with LangChain or LangGraph](memory-integrate-lang.md)
+ [AWS SDK](aws-sdk-memory.md)
+ [Amazon Bedrock AgentCore SDK](agentcore-sdk-memory.md)
+ [Strands Agents SDK](strands-sdk-memory.md)

# Scenario: A customer support AI agent using AgentCore Memory
<a name="memory-customer-scenario"></a>

In this section you learn how to build a customer support AI agent that uses AgentCore Memory to provide personalized assistance by maintaining conversation history and extracting long-term insights about user preferences. The topic includes code examples for the AgentCore CLI and the AWS SDK.

Consider a customer, Sarah, who engages with your shopping website’s support AI agent to inquire about a delayed order. The interaction flow through the AgentCore Memory APIs would look like this:

![\[AgentCore Memory short-term and long-term flow\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/memory-short-long-term.png)


**Topics**
+ [Step 1: Create an AgentCore Memory](#create-memory-resource)
+ [Step 2: Start the session](#start-session)
+ [Step 3: Capture the conversation history](#capture-conversation)
+ [Step 4: Generate long-term memory](#generate-longterm-memory)
+ [Step 5: Retrieve past interactions from short-term memory](#retrieve-shortterm-memory)
+ [Step 6: Use long-term memories for personalized assistance](#use-longterm-memory)

## Step 1: Create an AgentCore Memory
<a name="create-memory-resource"></a>

First, you create a memory resource with both short-term and long-term memory capabilities, configuring the strategies for what long-term information to extract.

**Example**  

1. Create memory with a semantic strategy:

   ```
   agentcore add memory --name CustomerSupportSemantic --strategies SEMANTIC
   agentcore deploy
   ```
**Note**  
The AgentCore CLI provides memory resource management. For event operations (creating events, listing events, and so on), use an AWS SDK, such as the SDK for Python (Boto3).

1. Run `agentcore` to open the TUI, then select **add** and choose **Memory**.

1. Select the **Semantic** strategy:  
![\[Memory wizard: select SEMANTIC strategy\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-strategies.png)

1. Review the configuration and press Enter to confirm:  
![\[Memory wizard: review configuration\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-confirm.png)

1. Alternatively, create the memory resource with the AWS SDK for Python (Boto3) and wait for it to become active:

   ```
   import boto3
   import time
   from datetime import datetime
   
   # Initialize the Boto3 clients for control plane and data plane operations
   control_client = boto3.client('bedrock-agentcore-control')
   data_client = boto3.client('bedrock-agentcore')
   
   print("Creating a new memory resource...")
   
   # Create the memory resource with defined strategies
   response = control_client.create_memory(
       name="ShoppingSupportAgentMemory",
       description="Memory for a customer support agent.",
       memoryStrategies=[
           {
               'summaryMemoryStrategy': {
                   'name': 'SessionSummarizer',
                   'namespaceTemplates': ['/summaries/{actorId}/{sessionId}/']
               }
           },
           {
               'userPreferenceMemoryStrategy': {
                   'name': 'UserPreferenceExtractor',
                   'namespaceTemplates': ['/users/{actorId}/preferences/']
               }
           }
       ]
   )
   
   memory_id = response['memory']['id']
   print(f"Memory resource created with ID: {memory_id}")
   
   # Poll the memory status until it becomes ACTIVE
   while True:
       mem_status_response = control_client.get_memory(memoryId=memory_id)
       status = mem_status_response.get('memory', {}).get('status')
       if status == 'ACTIVE':
           print("Memory resource is now ACTIVE.")
           break
       elif status == 'FAILED':
           raise Exception("Memory resource creation FAILED.")
       print("Waiting for memory to become active...")
       time.sleep(10)
   ```
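
The `namespaceTemplates` values determine where the service stores each kind of extracted record. As an illustration of how the `{actorId}` and `{sessionId}` placeholders resolve (the service performs this substitution itself when it writes records; the helper below only makes the mapping visible):

```python
def expand_namespace(template, actor_id, session_id=None):
    """Show what a namespace template resolves to for a given actor and
    session. The service does this substitution itself; this helper is
    only illustrative."""
    resolved = template.replace("{actorId}", actor_id)
    if session_id is not None:
        resolved = resolved.replace("{sessionId}", session_id)
    return resolved

print(expand_namespace("/summaries/{actorId}/{sessionId}/",
                       "user-sarah-123", "customer-support-session-1"))
# /summaries/user-sarah-123/customer-support-session-1/
```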

## Step 2: Start the session
<a name="start-session"></a>

When Sarah initiates the conversation, the agent creates a new, unique session ID to track this interaction separately.

```
# Unique identifier for the customer, Sarah
sarah_actor_id = "user-sarah-123"

# Unique identifier for this specific support session
support_session_id = "customer-support-session-1"

print(f"Session started for Actor ID: {sarah_actor_id}, Session ID: {support_session_id}")
```

## Step 3: Capture the conversation history
<a name="capture-conversation"></a>

As Sarah explains her issue, the agent captures each turn of the conversation (both her questions and the agent’s responses). This populates the full conversation in short-term memory and provides the raw data for the long-term memory strategies to process.

```
print("Capturing conversational events...")

full_conversation_payload = [
    {
        'conversational': {
            'role': 'USER',
            'content': {'text': "Hi, my order #ABC-456 is delayed."}
        }
    },
    {
        'conversational': {
            'role': 'ASSISTANT',
            'content': {'text': "I'm sorry to hear that, Sarah. Let me check the status for you."}
        }
    },
    {
        'conversational': {
            'role': 'USER',
            'content': {'text': "By the way, for future orders, please always use FedEx. I've had issues with other carriers."}
        }
    },
    {
        'conversational': {
            'role': 'ASSISTANT',
            'content': {'text': "Thank you for that information. I have made a note to use FedEx for your future shipments."}
        }
    }
]

data_client.create_event(
    memoryId=memory_id,
    actorId=sarah_actor_id,
    sessionId=support_session_id,
    eventTimestamp=datetime.now(),
    payload=full_conversation_payload
)

print("Conversation history has been captured in short-term memory.")
```

## Step 4: Generate long-term memory
<a name="generate-longterm-memory"></a>

In the background, the asynchronous extraction process runs. This process analyzes the recent raw events using your configured memory strategies to extract long-term memories such as summaries, semantic facts, or user preferences, which are then stored for future use.
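
Extraction typically completes within a minute or two, but the service does not emit a completion callback. One way to wait is to poll `retrieve_memory_records` (the same call used in Step 6) until the namespace contains records. The `wait_for_records` helper and its timings below are illustrative, not part of the service API:

```python
import time

def wait_for_records(client, memory_id, namespace, query,
                     timeout=120, interval=15):
    """Poll until the asynchronous extraction has produced at least one
    record in the namespace, or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while True:
        response = client.retrieve_memory_records(
            memoryId=memory_id,
            namespace=namespace,
            searchCriteria={"searchQuery": query},
        )
        records = response.get("memoryRecordSummaries", [])
        if records or time.time() >= deadline:
            return records
        time.sleep(interval)

# Example, using the data_client and IDs from the earlier steps:
# records = wait_for_records(
#     data_client, memory_id,
#     f"/users/{sarah_actor_id}/preferences/",
#     "preferred shipping carrier")
```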

## Step 5: Retrieve past interactions from short-term memory
<a name="retrieve-shortterm-memory"></a>

To provide context-aware assistance, the agent loads the current conversation history. This helps the agent understand what issues Sarah has raised in the ongoing chat.

```
print("\nRetrieving current conversation history from short-term memory...")

response = data_client.list_events(
    memoryId=memory_id,
    actorId=sarah_actor_id,
    sessionId=support_session_id,
    maxResults=10
)

# Reverse the list of events to display them in chronological order
event_list = reversed(response.get('events', []))

for event in event_list:
    print(event)
```
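
To feed this history back into a prompt, it helps to flatten the raw events into role/text pairs. The helper below assumes the conversational payload shape used in Step 3 and that `list_events` returns events newest first (hence the `reversed` call, mirroring the example above):

```python
def extract_turns(events):
    """Flatten list_events output into (role, text) tuples, oldest first."""
    turns = []
    for event in reversed(events):  # events arrive newest first
        for item in event.get("payload", []):
            message = item.get("conversational")
            if message:
                turns.append((message["role"], message["content"]["text"]))
    return turns

# for role, text in extract_turns(response.get("events", [])):
#     print(f"{role}: {text}")
```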

## Step 6: Use long-term memories for personalized assistance
<a name="use-longterm-memory"></a>

The agent performs a semantic search across extracted long-term memories to find relevant insights about Sarah’s preferences, order history, or past concerns. This lets the agent provide highly personalized assistance without needing to ask Sarah to repeat information she has already shared in previous chats.

```
# Wait for the asynchronous extraction to finish
print("\nWaiting 60 seconds for long-term memory processing...")
time.sleep(60)

# --- Example 1: Retrieve the user's shipping preference ---
print("\nRetrieving user preferences from long-term memory...")
preference_response = data_client.retrieve_memory_records(
    memoryId=memory_id,
    namespace=f"/users/{sarah_actor_id}/preferences/",
    searchCriteria={"searchQuery": "Does the user have a preferred shipping carrier?"}
)
for record in preference_response.get('memoryRecordSummaries', []):
    print(f"- Retrieved Record: {record}")

# --- Example 2: Broad query about the user's issue ---
print("\nPerforming a broad search for user's reported issues...")
issue_response = data_client.retrieve_memory_records(
    memoryId=memory_id,
    namespace=f"/summaries/{sarah_actor_id}/{support_session_id}/",
    searchCriteria={"searchQuery": "What problem did the user report with their order?"}
)
for record in issue_response.get('memoryRecordSummaries', []):
    print(f"- Retrieved Record: {record}")
```

This integrated approach lets the agent maintain rich context across sessions, recognize returning customers, recall important details, and deliver personalized experiences seamlessly, resulting in faster, more natural, and more effective customer support.

# Integrate AgentCore Memory with LangChain or LangGraph
<a name="memory-integrate-lang"></a>

 [LangChain and LangGraph](https://www.langchain.com/langgraph) are powerful open-source frameworks for developing agents through a graph-based architecture. They provide a simple interface for defining an agent's interactions with the user, its tools, and its memory.

Within LangGraph, there are two main concepts for memory [persistence](https://docs.langchain.com/oss/python/langgraph/persistence): short-term, raw context is saved through checkpoint objects, while intelligent long-term memory retrieval is done by saving to and searching through memory stores. To address these two use cases, integrations were created to cover both the checkpointing workflow and the store workflow:
+  `AgentCoreMemorySaver` - used to save and load checkpoint objects that include user and AI messages, graph execution state, and additional metadata
+  `AgentCoreMemoryStore` - used to save conversational messages, leaving the AgentCore Memory service to extract insights, summaries, and user preferences in the background, then letting the agent search through those intelligent memories in future conversations

These integrations are easy to set up, requiring only the memory ID of an AgentCore Memory. Because interactions are saved to persistent storage within the service, there is no need to worry about losing them to container exits, unreliable in-memory solutions, or agent application crashes.

**Topics**
+ [Prerequisites](#prerequisites)
+ [Configuration for short term memory persistence](#memory-short-term-memory)
+ [Configuration for intelligent long term memory search](#long-term-memory)
+ [Create the agent with configurations](#create-agent)
+ [Invoke the agent](#memory-gs-invoke-agent)
+ [Resources](#resources)

## Prerequisites
<a name="prerequisites"></a>

Requirements you need before integrating AgentCore Memory with LangChain and LangGraph.

1.  An AWS account with Amazon Bedrock AgentCore access

1. Configured AWS credentials (boto3)

1. An AgentCore Memory

1. Required IAM permissions:
   +  `bedrock-agentcore:CreateEvent` 
   +  `bedrock-agentcore:ListEvents` 
   +  `bedrock-agentcore:RetrieveMemories` 

## Configuration for short term memory persistence
<a name="memory-short-term-memory"></a>

The `AgentCoreMemorySaver` in LangGraph handles all the saving and loading of conversational state, execution context, and state variables under the hood through [AgentCore Memory blob types](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_CreateEvent.html#API_CreateEvent_RequestSyntax). This means the only setup required is to specify the checkpointer when compiling the agent graph, and then provide an `actor_id` and `thread_id` in the [RunnableConfig](https://python.langchain.com/docs/concepts/runnables/#runnableconfig) when invoking the agent. The configuration is shown below, and the agent invocation is shown in a later section. If simple conversation persistence is all your application needs, you can skip the long-term memory section.

```
# Import LangGraph and LangChain components
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

# Import the AgentCore Memory integrations
from langgraph_checkpoint_aws import AgentCoreMemorySaver

REGION = "us-west-2"
MEMORY_ID = "YOUR_MEMORY_ID"
MODEL_ID = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"

# Initialize checkpointer for state persistence. No additional setup required.
# Sessions will be saved and persisted for actor_id/session_id combinations
checkpointer = AgentCoreMemorySaver(MEMORY_ID, region_name=REGION)
```

## Configuration for intelligent long term memory search
<a name="long-term-memory"></a>

For long-term memory stores in LangGraph, you have more flexibility in how messages are processed. For instance, if the application is only concerned with user preferences, you only need to store the `HumanMessage` objects in the conversation. For summaries, all message types (`HumanMessage`, `AIMessage`, and `ToolMessage`) are relevant. There are numerous ways to do this, but a common implementation pattern is using pre- and post-model hooks, as shown in the example below. For retrieval of memories, you can add a `store.search(query)` call in the pre-model hook and append the results to the user's message so the agent has all the context. Alternatively, the agent can be provided a tool to search for information as needed. All of these patterns are supported, and the implementation will vary based on the application.

```
import uuid

from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.store.base import BaseStore
from langgraph_checkpoint_aws import AgentCoreMemoryStore

# Initialize store for saving and searching over long term memories
# such as preferences and facts across sessions
store = AgentCoreMemoryStore(MEMORY_ID, region_name=REGION)

# Pre-model hook runs and saves messages of your choosing to AgentCore Memory
# for async processing and extraction
def pre_model_hook(state, config: RunnableConfig, *, store: BaseStore):
    """Hook that runs pre-LLM invocation to save the latest human message"""
    actor_id = config["configurable"]["actor_id"]
    thread_id = config["configurable"]["thread_id"]

    # Saving the message to the actor and session combination that we get at runtime
    namespace = (actor_id, thread_id)

    messages = state.get("messages", [])
    # Save the last human message we see before LLM invocation
    for msg in reversed(messages):
        if isinstance(msg, HumanMessage):
            store.put(namespace, str(uuid.uuid4()), {"message": msg})
            break

    # OPTIONAL: Retrieve user preferences based on the last message and append to state
    # user_preferences_namespace = ("preferences", actor_id)
    # preferences = store.search(user_preferences_namespace, query=msg.content, limit=5)
    # # Add to input messages as needed

    return {"llm_input_messages": messages}
```
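
As an alternative to the hook-based retrieval sketched above, the agent can be given a tool that searches the store on demand. The snippet below is a minimal sketch: the namespace layout and the shape of the returned items (a `.value` dict like the `{"message": ...}` payload written by `store.put` above) are assumptions for illustration, not guarantees of the store API.

```python
def format_memories(items):
    """Render store.search results as bullet lines for the prompt."""
    lines = []
    for item in items:
        value = getattr(item, "value", item)  # SearchItem or plain dict
        message = value.get("message", "")
        text = getattr(message, "content", message)  # HumanMessage or str
        lines.append(f"- {text}")
    return "\n".join(lines)

def make_memory_search_tool(store, actor_id):
    """Build a callable the agent can register as a search tool."""
    def search_memories(query: str) -> str:
        """Search the user's long-term memories for relevant context."""
        results = store.search((actor_id,), query=query, limit=5)
        return format_memories(results) or "No relevant memories found."
    return search_memories
```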

## Create the agent with configurations
<a name="create-agent"></a>

Initialize the LLM and create a LangGraph agent with a memory configuration.

```
# Initialize LLM
llm = init_chat_model(MODEL_ID, model_provider="bedrock_converse", region_name=REGION)

# Create a pre-built langgraph agent (configurations work for custom agents too)
graph = create_react_agent(
    model=llm,
    tools=tools, # Your agent's tools, defined elsewhere in your application
    checkpointer=checkpointer, # AgentCoreMemorySaver we created above
    store=store, # AgentCoreMemoryStore we created above
    pre_model_hook=pre_model_hook, # OPTIONAL: Function we defined to save user messages
    # post_model_hook=post_model_hook # OPTIONAL: Can save AI messages to memory if needed
)
```

## Invoke the agent
<a name="memory-gs-invoke-agent"></a>

Invoke the agent.

```
# Specify config at runtime for ACTOR and SESSION
config = {
    "configurable": {
        "thread_id": "session-1", # REQUIRED: This maps to Bedrock AgentCore session_id under the hood
        "actor_id": "react-agent-1", # REQUIRED: This maps to Bedrock AgentCore actor_id under the hood
    }
}

# Invoke the agent
response = graph.invoke(
    {"messages": [("human", "I like sushi with tuna. In general seafood is great.")]},
    config=config
)

# ... agent will answer

# Agent will have the conversation and state persisted on the next message
# Because the session ID is the same in the runtime config
response = graph.invoke(
    {"messages": [("human", "What did I just say?")]},
    config=config
)

# Define a new session in the runtime config to test long term retrieval
config = {
    "configurable": {
        "thread_id": "session-2", # New session ID
        "actor_id": "react-agent-1", # Same actor ID
    }
}

# Invoke the agent (it will retrieve long term memories from other session)
response = graph.invoke(
    {"messages": [("human", "Lets make a meal tonight, what should I cook?")]},
    config=config
)
```

## Resources
<a name="resources"></a>
+  [LangChain x AWS Github Repo](https://github.com/langchain-ai/langchain-aws/tree/main) 
+  [Pypi package](https://pypi.org/project/langgraph-checkpoint-aws/) 
+  [AgentCoreMemorySaver implementation](https://github.com/langchain-ai/langchain-aws/blob/main/libs/langgraph-checkpoint-aws/langgraph_checkpoint_aws/agentcore/saver.py) 
+  [AgentCoreMemorySaver sample notebook (checkpointing only)](https://github.com/langchain-ai/langchain-aws/blob/main/samples/memory/agentcore_memory_checkpointer.ipynb) 

# AWS SDK
<a name="aws-sdk-memory"></a>

Use the AWS SDK to interact directly with AgentCore Memory for fine-grained control over memory operations. The following examples use the [SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html).

 **Install dependencies** 

```
pip install boto3
```

 **Add short-term memory** 

```
import boto3
from datetime import datetime

# Initialize boto3 clients
control_client = boto3.client('bedrock-agentcore-control', region_name='us-east-1')
data_client = boto3.client('bedrock-agentcore', region_name='us-east-1')

# Create short-term memory
memory_response = control_client.create_memory(
    name="BasicMemory",
    description="Basic memory for short-term event storage",
    eventExpiryDuration=90
)

memory_id = memory_response['memory']['id']
actor_id = f"actor_{datetime.now().strftime('%Y%m%d%H%M%S')}"
session_id = f"session_{datetime.now().strftime('%Y%m%d%H%M%S')}"

# Create event with multiple conversation turns
event = data_client.create_event(
    memoryId=memory_id,
    actorId=actor_id,
    sessionId=session_id,
    eventTimestamp=datetime.now(),
    payload=[
        {
            'conversational': {
                'content': {'text': 'I like sushi with tuna'},
                'role': 'USER'
            }
        },
        {
            'conversational': {
                'content': {'text': 'That sounds delicious! Tuna sushi is a great choice.'},
                'role': 'ASSISTANT'
            }
        },
        {
            'conversational': {
                'content': {'text': 'I also like pizza'},
                'role': 'USER'
            }
        },
        {
            'conversational': {
                'content': {'text': 'Pizza is another excellent choice! You have great taste in food.'},
                'role': 'ASSISTANT'
            }
        }
    ]
)
```
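
Reading the turns back is symmetric. A small sketch using the `list_events` operation shown earlier in the customer support scenario (the `reversed` call restores chronological order, mirroring that example):

```python
def fetch_conversation(client, memory_id, actor_id, session_id, max_results=10):
    """Return stored events oldest first for display or prompt assembly."""
    response = client.list_events(
        memoryId=memory_id,
        actorId=actor_id,
        sessionId=session_id,
        maxResults=max_results,
    )
    return list(reversed(response.get("events", [])))

# for event in fetch_conversation(data_client, memory_id, actor_id, session_id):
#     print(event)
```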

 **Add long-term memory with strategies** 

```
import boto3
import time
from datetime import datetime

# Initialize boto3 clients
control_client = boto3.client('bedrock-agentcore-control', region_name='us-east-1')
data_client = boto3.client('bedrock-agentcore', region_name='us-east-1')

# Create long-term memory
memory_response = control_client.create_memory(
    name="ComprehensiveMemory",
    description="Memory with strategies for long-term memory extraction",
    eventExpiryDuration=90,
    memoryStrategies=[
        {
            'summaryMemoryStrategy': {
                'name': 'SessionSummarizer',
                'namespaceTemplates': ['/summaries/{actorId}/{sessionId}/']
            }
        },
        {
            'userPreferenceMemoryStrategy': {
                'name': 'PreferenceLearner',
                'namespaceTemplates': ['/preferences/{actorId}/']
            }
        },
        {
            'semanticMemoryStrategy': {
                'name': 'FactExtractor',
                'namespaceTemplates': ['/facts/{actorId}/']
            }
        }
    ]
)

memory_id = memory_response['memory']['id']
actor_id = f"actor_{datetime.now().strftime('%Y%m%d%H%M%S')}"
session_id = f"session_{datetime.now().strftime('%Y%m%d%H%M%S')}"

########## Wait for long-term memory to become active ##########

while True:
    mem_status_response = control_client.get_memory(memoryId=memory_id)
    status = mem_status_response.get('memory', {}).get('status')
    if status == 'ACTIVE':
        print("Memory resource is now ACTIVE.")
        break
    elif status == 'FAILED':
        raise Exception("Memory resource creation FAILED.")
    print("Waiting for memory to become active...")
    time.sleep(10)

# Create single event with all conversation turns
event = data_client.create_event(
    memoryId=memory_id,
    actorId=actor_id,
    sessionId=session_id,
    eventTimestamp=datetime.now(),
    payload=[
        {
            'conversational': {
                'content': {'text': 'I like sushi with tuna'},
                'role': 'USER'
            }
        },
        {
            'conversational': {
                'content': {'text': 'That sounds delicious! Tuna sushi is a great choice.'},
                'role': 'ASSISTANT'
            }
        },
        {
            'conversational': {
                'content': {'text': 'I also like pizza'},
                'role': 'USER'
            }
        },
        {
            'conversational': {
                'content': {'text': 'Pizza is another excellent choice! You have great taste in food.'},
                'role': 'ASSISTANT'
            }
        }
    ]
)
```
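
Once extraction has run (typically within a minute or so), the extracted records can be queried with `retrieve_memory_records`, as in the customer support scenario. The text-extraction helper below assumes each record summary carries a `content.text` field, mirroring the conversational payloads above; treat that shape as an assumption:

```python
def record_texts(summaries):
    """Pull plain text out of retrieve_memory_records summaries
    (assumed content.text shape)."""
    return [s.get("content", {}).get("text", "") for s in summaries]

# time.sleep(60)  # give the asynchronous extraction time to run
# response = data_client.retrieve_memory_records(
#     memoryId=memory_id,
#     namespace=f"/preferences/{actor_id}/",
#     searchCriteria={"searchQuery": "What foods does the user like?"},
# )
# print(record_texts(response.get("memoryRecordSummaries", [])))
```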

The full AWS SDK API reference for Amazon Bedrock AgentCore Memory can be found at:
+  [https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore.html](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore.html) 
+  [https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore-control.html](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore-control.html) 

# Amazon Bedrock AgentCore SDK
<a name="agentcore-sdk-memory"></a>

Use the [Amazon Bedrock AgentCore Python SDK](https://github.com/aws/bedrock-agentcore-sdk-python) for a higher-level abstraction that simplifies memory operations and provides convenient methods for common use cases.

 **Install dependencies** 

```
pip install bedrock-agentcore
```

 **Add short-term memory** 

```
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-east-1")

memory = client.create_memory(
    name="CustomerSupportAgentMemory",
    description="Memory for customer support conversations",
)

client.create_event(
    memory_id=memory.get("id"), # This is the id from create_memory or list_memories
    actor_id="User84",  # This is the identifier of the actor, could be an agent or end-user.
    session_id="OrderSupportSession1", # Unique ID for a particular request/conversation.
    messages=[
        ("Hi, I'm having trouble with my order #12345", "USER"),
        ("I'm sorry to hear that. Let me look up your order.", "ASSISTANT"),
        ("lookup_order(order_id='12345')", "TOOL"),
        ("I see your order was shipped 3 days ago. What specific issue are you experiencing?", "ASSISTANT"),
        ("Actually, before that - I also want to change my email address", "USER"),
        (
            "Of course! I can help with both. Let's start with updating your email. What's your new email?",
            "ASSISTANT",
        ),
        ("newemail@example.com", "USER"),
        ("update_customer_email(old='old@example.com', new='newemail@example.com')", "TOOL"),
        ("Email updated successfully! Now, about your order issue?", "ASSISTANT"),
        ("The package arrived damaged", "USER"),
    ],
)
```

 **Add long-term memory with strategies** 

```
from bedrock_agentcore.memory import MemoryClient
import time

client = MemoryClient(region_name="us-east-1")

memory = client.create_memory_and_wait(
    name="MyAgentMemory",
    strategies=[{
        "summaryMemoryStrategy": {
            # Name of the extraction model/strategy
            "name": "SessionSummarizer",
            # Organize facts by session ID for easy retrieval
            # Example: "summaries/session123" contains summary of session123
            "namespaceTemplates": ["/summaries/{actorId}/{sessionId}/"]
        }
    }]
)

event = client.create_event(
    memory_id=memory.get("id"), # This is the id from create_memory or list_memories
    actor_id="User84",  # This is the identifier of the actor, could be an agent or end-user.
    session_id="OrderSupportSession1",
    messages=[
        ("Hi, I'm having trouble with my order #12345", "USER"),
        ("I'm sorry to hear that. Let me look up your order.", "ASSISTANT"),
        ("lookup_order(order_id='12345')", "TOOL"),
        ("I see your order was shipped 3 days ago. What specific issue are you experiencing?", "ASSISTANT"),
        ("Actually, before that - I also want to change my email address", "USER"),
        (
            "Of course! I can help with both. Let's start with updating your email. What's your new email?",
            "ASSISTANT",
        ),
        ("newemail@example.com", "USER"),
        ("update_customer_email(old='old@example.com', new='newemail@example.com')", "TOOL"),
        ("Email updated successfully! Now, about your order issue?", "ASSISTANT"),
        ("The package arrived damaged", "USER"),
    ],
)

# Wait for meaningful memories to be extracted from the conversation.
time.sleep(60)

# Query for the summary of the issue using the namespace set in summary strategy above
memories = client.retrieve_memories(
    memory_id=memory.get("id"),
    namespace="/summaries/User84/OrderSupportSession1/",
    query="can you summarize the support issue"
)
```

# Strands Agents SDK
<a name="strands-sdk-memory"></a>

Use the [Strands Agents](https://strandsagents.com/latest/) SDK for seamless integration with AgentCore Memory, providing automatic memory management and retrieval within conversational agents.

First, create a memory with all three long-term strategies. You can do this with the AgentCore CLI or through the SDK code in the examples below.

**Example**  

1. The AgentCore CLI memory commands must be run inside an existing agentcore project. If you don’t have one yet, create a project first:

   ```
   agentcore create --name my-agent --no-agent
   cd my-agent
   ```

   Then add memory and deploy:

   ```
   agentcore add memory --name ComprehensiveAgentMemory \
     --strategies SEMANTIC,SUMMARIZATION,USER_PREFERENCE
   agentcore deploy
   ```

1. Run `agentcore` to open the TUI, then select **add** and choose **Memory**.

1. Enter the memory name:  
![\[Memory wizard: enter ComprehensiveAgentMemory name\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/strands-memory-add-name.png)

1. Select all three strategies (Semantic, Summarization, User preference):  
![\[Memory wizard: select all three memory strategies\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/strands-memory-add-strategies.png)

1. Review the configuration and press Enter to confirm:  
![\[Memory wizard: confirm ComprehensiveAgentMemory with all strategies\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/strands-memory-add-confirm.png)

   Then run `agentcore deploy` to provision the memory in AWS.

 **Install dependencies** 

```
pip install bedrock-agentcore
pip install strands-agents
```

 **Add short-term memory** 

```
from datetime import datetime
from strands import Agent
from bedrock_agentcore.memory import MemoryClient
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig, RetrievalConfig
from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager

client = MemoryClient(region_name="us-east-1")
basic_memory = client.create_memory(
    name="BasicTestMemory",
    description="Basic memory for testing short-term functionality"
)

MEM_ID = basic_memory.get('id')
ACTOR_ID = "actor_id_test_%s" % datetime.now().strftime("%Y%m%d%H%M%S")
SESSION_ID = "testing_session_id_%s" % datetime.now().strftime("%Y%m%d%H%M%S")

# Configure memory
agentcore_memory_config = AgentCoreMemoryConfig(
    memory_id=MEM_ID,
    session_id=SESSION_ID,
    actor_id=ACTOR_ID
)

# Create session manager
session_manager = AgentCoreMemorySessionManager(
    agentcore_memory_config=agentcore_memory_config,
    region_name="us-east-1"
)

# Create agent
agent = Agent(
    system_prompt="You are a helpful assistant. Use all you know about the user to provide helpful responses.",
    session_manager=session_manager,
)

agent("I like sushi with tuna")
# Agent remembers this preference

agent("I like pizza")
# Agent acknowledges both preferences

agent("What should I buy for lunch today?")
# Agent suggests options based on remembered preferences
```

 **Add long-term memory with strategies** 

```
from bedrock_agentcore.memory import MemoryClient
from strands import Agent
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig, RetrievalConfig
from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager
from datetime import datetime

# Create comprehensive memory with all built-in strategies
client = MemoryClient(region_name="us-east-1")
comprehensive_memory = client.create_memory_and_wait(
    name="ComprehensiveAgentMemory",
    description="Full-featured memory with all built-in strategies",
    strategies=[
        {
            "summaryMemoryStrategy": {
                "name": "SessionSummarizer",
                "namespaceTemplates": ["/summaries/{actorId}/{sessionId}/"]
            }
        },
        {
            "userPreferenceMemoryStrategy": {
                "name": "PreferenceLearner",
                "namespaceTemplates": ["/preferences/{actorId}/"]
            }
        },
        {
            "semanticMemoryStrategy": {
                "name": "FactExtractor",
                "namespaceTemplates": ["/facts/{actorId}/"]
            }
        }
    ]
)

MEM_ID = comprehensive_memory.get('id')
ACTOR_ID = "actor_id_test_%s" % datetime.now().strftime("%Y%m%d%H%M%S")
SESSION_ID = "testing_session_id_%s" % datetime.now().strftime("%Y%m%d%H%M%S")

# Configure memory
agentcore_memory_config = AgentCoreMemoryConfig(
    memory_id=MEM_ID,
    session_id=SESSION_ID,
    actor_id=ACTOR_ID
)

# Create session manager
session_manager = AgentCoreMemorySessionManager(
    agentcore_memory_config=agentcore_memory_config,
    region_name="us-east-1"
)

# Create agent
agent = Agent(
    system_prompt="You are a helpful assistant. Use all you know about the user to provide helpful responses.",
    session_manager=session_manager,
)

agent("I like sushi with tuna")
# Agent remembers this preference

agent("I like pizza")
# Agent acknowledges both preferences

agent("What should I buy for lunch today?")
# Agent suggests options based on remembered preferences
```

 **Message batching** 

When `batch_size` is greater than 1, messages are buffered in memory and sent to AgentCore Memory in a single API call once the buffer reaches the configured size. This reduces the number of API requests in high-throughput conversations.

**Important**  
When using `batch_size > 1` , you **must** use a `with` block or call `close()` when the session is complete. Otherwise, any buffered messages that have not yet reached the batch threshold will be lost.

 *Recommended: Context manager* 

```
from strands import Agent
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig
from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager

config = AgentCoreMemoryConfig(
    memory_id=MEM_ID,
    session_id=SESSION_ID,
    actor_id=ACTOR_ID,
    batch_size=10,  # Buffer up to 10 messages before sending
)

# The `with` block guarantees all buffered messages are flushed on exit
with AgentCoreMemorySessionManager(config, region_name='us-east-1') as session_manager:
    agent = Agent(
        system_prompt="You are a helpful assistant.",
        session_manager=session_manager,
    )
    agent("Hello!")
    agent("Tell me about AWS")
# All remaining buffered messages are automatically flushed here
```

 *Alternative: Explicit close()* 

If you cannot use a `with` block, call `close()` manually:

```
session_manager = AgentCoreMemorySessionManager(config, region_name='us-east-1')
try:
    agent = Agent(
        system_prompt="You are a helpful assistant.",
        session_manager=session_manager,
    )
    agent("Hello!")
finally:
    session_manager.close()  # Flush any remaining buffered messages
```

More examples are available on GitHub: [https://github.com/aws/bedrock-agentcore-sdk-python/tree/main/src/bedrock_agentcore/memory/integrations/strands](https://github.com/aws/bedrock-agentcore-sdk-python/tree/main/src/bedrock_agentcore/memory/integrations/strands) 