

# Add memory to your Amazon Bedrock AgentCore agent
<a name="memory"></a>

AgentCore Memory is a fully managed service that gives your AI agents the ability to remember past interactions, enabling them to provide more intelligent, context-aware, and personalized conversations. It provides a simple and powerful way to handle both short-term context and long-term knowledge retention without the need to build or manage complex infrastructure.

AgentCore Memory addresses a fundamental challenge in agentic AI: statelessness. Without memory capabilities, AI agents treat each interaction as a new instance with no knowledge of previous conversations. AgentCore Memory provides this critical capability, allowing your agent to build a coherent understanding of users over time.

![Memory AgentCore Memory](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/memory-overview.png)


AgentCore Memory supports a variety of SDKs and agent frameworks. For examples, see [Amazon Bedrock AgentCore Memory examples](memory-examples.md).

**Topics**
+ [Memory types](#memory-types-overview)
+ [Memory key benefits](#memory-key-benefits)
+ [Common use cases of memory](#memory-common-use-cases)
+ [How it works](how-it-works.md)
+ [Get started with AgentCore Memory](memory-get-started.md)
+ [Create an AgentCore Memory](memory-create-a-memory-store.md)
+ [Use short-term memory](using-memory-short-term.md)
+ [Use long-term memory](long-term-memory-long-term.md)
+ [Amazon Bedrock AgentCore Memory examples](memory-examples.md)
+ [Amazon Bedrock capacity for built-in strategies with overrides](bedrock-capacity.md)
+ [Observability](memory-observability.md)
+ [Best practices](best-practices.md)

## Memory types
<a name="memory-types-overview"></a>

AgentCore Memory offers [two types](memory-types.md) of memory that work together to create intelligent, context-aware AI agents:

 **Short-term memory**   
Short-term memory captures turn-by-turn interactions within a single session. This lets agents maintain immediate context without requiring users to repeat information.  
 **Example:** When a user asks, "What’s the weather like in Seattle?" and follows up with "What about tomorrow?", the agent relies on recent conversation history to understand that "tomorrow" refers to the weather in Seattle.

 **Long-term memory**   
Long-term memory automatically extracts and stores key insights from conversations, including user preferences, important facts, and session summaries, providing persistent knowledge retention across multiple sessions.  
 **Example:** If a customer mentions they prefer window seats during flight booking, the agent stores this preference in long-term memory. In future interactions, the agent can proactively offer window seats, creating a personalized experience.

## Memory key benefits
<a name="memory-key-benefits"></a>
+  **Create more natural conversations:** By remembering previous turns in a conversation, agents can understand context, resolve ambiguous statements, and interact in a way that feels more human.
+  **Deliver personalized experiences:** Retain user preferences, historical data, and key facts across sessions to tailor responses and actions to individual users.
+  **Reduce development complexity:** Offload the undifferentiated heavy lifting of managing conversational state and memory, allowing you to focus on building your agent’s core business logic.

## Common use cases of memory
<a name="memory-common-use-cases"></a>
+  **Conversational agents:** A customer support chatbot remembers a user’s previous issues and preferences, enabling it to provide more relevant assistance in future interactions.
+  **Task-oriented / workflow agents:** An AI agent orchestrating a multi-step business process, such as invoice approval, uses memory to track the status of each step and maintain workflow progress.
+  **Multi-agent systems:** A team of AI agents managing a supply chain shares memory to synchronize inventory levels, anticipate demand, and optimize logistics.
+  **Autonomous or planning agents:** An autonomous vehicle uses memory to plan routes, adjust to traffic conditions, and learn from past experiences to improve future driving decisions.

# How it works
<a name="how-it-works"></a>

AgentCore Memory provides a set of APIs that let your AI agents seamlessly store, retrieve, and utilize both short-term and long-term memory. The architecture is designed to separate the immediate context of a conversation from the persistent knowledge that should be retained over time.

**Topics**
+ [Memory terminology](memory-terminology.md)
+ [Memory types](memory-types.md)
+ [Memory strategies](memory-strategies.md)
+ [Memory organization in AgentCore Memory](memory-organization.md)
+ [Memory record streaming](memory-record-streaming.md)
+ [Compare long-term memory with Retrieval-Augmented Generation](memory-ltm-rag.md)

# Memory terminology
<a name="memory-terminology"></a>

 **AgentCore Memory**   
The primary, top-level container for your agent’s memory resource. Each AgentCore Memory holds all the events and extracted insights for agents or applications.

 **Memory strategy**   
Memory strategies are configurable rules that determine how to process information from short-term memory into long-term memory. They determine what type of information is kept, turning raw conversations into structured and useful knowledge.

 **Namespace**   
A namespace is a structured path used to logically group and organize long-term memories. By defining a namespace in your memory strategy, you ensure that all extracted memories are organized under predictable paths, which aids retrieval, filtering, and access control.

 **Memory record**   
A memory record is a structured unit of information within the memory resource. Each record is associated with a unique identifier and is stored within a specified namespace, allowing for organized retrieval and management.

 **Session**   
Represents a single, continuous interaction between a user and the agent, such as a customer support conversation. A unique `sessionId` is used to group all events within that conversation.

 **Actor**   
Represents the entity interacting with the agent. This can be a human user, another agent, or a system (software or hardware component) that initiates interactions with the agent. A unique `actorId` ensures that memory records are correctly associated with the individual or system.

 **Event**   
An event is the fundamental unit of short-term memory. It represents a discrete interaction or activity within a session, associated with a specific actor. Events are stored using the [CreateEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_CreateEvent.html) operation and are organized by `actorId` and `sessionId`. Each event is immutable and timestamped, capturing real-time data such as user messages, system actions, or tool invocations.

 **Event metadata**   
Event metadata refers to the supplementary information that provides context about an event in an AgentCore Memory. While not always explicitly required, metadata can enhance the organization and retrieval of events.

# Memory types
<a name="memory-types"></a>

AgentCore Memory offers two types of memory that work together to create intelligent, context-aware AI agents:

**Topics**
+ [Short-term memory](#short-term-memory)
+ [Long-term memory](#memory-long-term-memory)

## Short-term memory
<a name="short-term-memory"></a>

Short-term memory stores raw interactions that help the agent maintain context within a single session. For example, in a shopping website’s [customer support AI agent](memory-customer-scenario.md) , short-term memory captures the entire conversation history as a series of events. Each customer question and agent response is saved as a separate event (or in batches, depending on your implementation). This lets the agent reload the entire conversation as it happened, maintaining context even if the service restarts or the customer returns later to continue the same interaction seamlessly.

When a customer interacts with your agent, each interaction can be captured as an event using the [CreateEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_CreateEvent.html) operation. Events can contain various types of data, including conversational exchanges (questions, answers, instructions) or structured information (product details, order status). Each event is associated with a session via a session identifier (`sessionId`), which you can define or let the system generate by default. You can use the `sessionId` parameter in future requests to maintain conversation context.
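Conceptually, saving a single turn to short-term memory looks like the following sketch. It assumes a boto3-style data-plane client (for example, `boto3.client("bedrock-agentcore")`); the parameter names follow the `CreateEvent` request shape, but verify them against the API reference before relying on them.

```python
from datetime import datetime, timezone

def save_turn(client, memory_id, actor_id, session_id, role, text):
    """Store one conversational turn as a short-term memory event."""
    return client.create_event(
        memoryId=memory_id,
        actorId=actor_id,
        sessionId=session_id,
        eventTimestamp=datetime.now(timezone.utc),
        # A conversational payload pairs a role (USER or ASSISTANT) with text.
        payload=[{"conversational": {"role": role, "content": {"text": text}}}],
    )

# With real credentials, this might be called as:
# client = boto3.client("bedrock-agentcore")
# save_turn(client, "my-memory-id", "user-42", "session-1",
#           "USER", "What's the weather like in Seattle?")
```

Reusing the same `sessionId` on subsequent calls is what groups turns into one conversation.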

To load previous sessions or enrich context, your agent needs to access the raw interactions with the customer. Imagine a customer returns to follow up on their product support case from last week. To provide seamless assistance, the agent uses [ListSessions](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_ListSessions.html) to locate their previous support interactions. Through [ListEvents](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_ListEvents.html), it retrieves the conversation history, understanding the reported issue, troubleshooting steps attempted, and any temporary solutions discussed. The agent uses [GetEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_GetEvent.html) to access specific information from key moments in past conversations. These operations work together to maintain support continuity across sessions, eliminating the need for customers to re-explain their issue or repeat troubleshooting steps already attempted.
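Under the same boto3-style client assumption, reloading a session's history can be sketched as a paginated `ListEvents` loop. The field names (`events`, `nextToken`, `maxResults`) are assumptions to confirm in the API reference.

```python
def load_history(client, memory_id, actor_id, session_id, max_results=50):
    """Reload a session's conversation history from short-term memory,
    following pagination tokens until all events are collected."""
    events = []
    token = None
    while True:
        kwargs = {
            "memoryId": memory_id,
            "actorId": actor_id,
            "sessionId": session_id,
            "maxResults": max_results,
        }
        if token:
            kwargs["nextToken"] = token
        resp = client.list_events(**kwargs)
        events.extend(resp.get("events", []))
        token = resp.get("nextToken")
        if not token:
            return events
```

The collected events can then be replayed into your model's prompt so the agent picks up exactly where the last session ended.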

### Event metadata
<a name="event-metadata"></a>

Event metadata lets you attach additional context information to your short-term memory events as key-value pairs. When creating events using the `CreateEvent` operation, you can include metadata that isn’t part of the core event content but provides valuable context for retrieval. For example, a travel booking agent can attach location metadata to events, making it easy to find all conversations that mentioned specific destinations. You can then use the `ListEvents` operation with metadata filters to efficiently retrieve events based on these attached properties, enabling your agent to quickly locate relevant conversation history without scanning through entire sessions. This capability is useful for agents that need to track and retrieve specific attributes across conversations, such as product categories in e-commerce, case types in customer support, or project identifiers in task management applications. Event metadata is not meant to store sensitive content, as it is not encrypted with a customer managed key.
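The travel-agent example above can be sketched as follows. This is a hedged illustration only: the exact shape of the `metadata` request field (shown here as a map of keys to `stringValue` wrappers) is an assumption to verify in the `CreateEvent` API reference.

```python
from datetime import datetime, timezone

def save_turn_with_metadata(client, memory_id, actor_id, session_id,
                            role, text, metadata):
    """Store a turn with attached key-value metadata for later filtering.

    NOTE: the structure passed to ``metadata`` is an illustrative assumption;
    confirm the exact request shape in the CreateEvent API reference.
    """
    return client.create_event(
        memoryId=memory_id,
        actorId=actor_id,
        sessionId=session_id,
        eventTimestamp=datetime.now(timezone.utc),
        payload=[{"conversational": {"role": role, "content": {"text": text}}}],
        # Wrap each plain value in the assumed string-value structure.
        metadata={k: {"stringValue": v} for k, v in metadata.items()},
    )
```

A later `ListEvents` call could then filter on `destination` to find every conversation that mentioned Paris, without scanning full sessions.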

## Long-term memory
<a name="memory-long-term-memory"></a>

Long-term memory records store structured information extracted from raw agent interactions and retained across multiple sessions. Long-term memory preserves only key insights, such as conversation summaries, facts and knowledge, or user preferences. For example, if a customer tells the agent their preferred shoe brand during a conversation, the AI agent stores this as a long-term memory. Later, even in a different conversation, the agent can remember and suggest the shoe brand, making the interaction personalized and relevant.

Long-term memory generation is an asynchronous process that runs in the background and automatically extracts insights after raw conversation context is stored in short-term memory via `CreateEvent`. This efficiently consolidates key information without interrupting live interactions. As part of long-term memory generation, AgentCore Memory performs the following operations:
+  **Extraction**: Extracts information from raw interactions with the agent.
+  **Consolidation**: Consolidates newly extracted information with existing information in the AgentCore Memory.

Once long-term memory records are generated, you can retrieve these extracted memories to enhance your agent’s responses. Extracted memories are stored as memory records and can be accessed using the [GetMemoryRecord](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_GetMemoryRecord.html), [ListMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_ListMemoryRecords.html), or [RetrieveMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_RetrieveMemoryRecords.html) operations. The `RetrieveMemoryRecords` operation is powerful because it performs a semantic search to find the memory records most relevant to the query. For example, when a customer asks about running shoes, the agent can use semantic search to retrieve related memory records, such as the customer’s preferred shoe size, favorite shoe brands, and previous shoe purchases. This lets the AI support agent provide highly personalized recommendations without requiring the customer to repeat information they’ve shared before.
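A minimal sketch of the running-shoes retrieval, again assuming a boto3-style data-plane client; the `searchCriteria` field names and response shape shown here are assumptions to check against the `RetrieveMemoryRecords` API reference.

```python
def recall(client, memory_id, namespace, query, top_k=5):
    """Semantic search over long-term memory records in one namespace.

    Returns the plain-text content of the most relevant records.
    """
    resp = client.retrieve_memory_records(
        memoryId=memory_id,
        namespace=namespace,
        # searchQuery drives the semantic match; topK caps the result count.
        searchCriteria={"searchQuery": query, "topK": top_k},
    )
    return [r["content"]["text"] for r in resp.get("memoryRecordSummaries", [])]

# Hypothetical usage, with an illustrative namespace path:
# recall(client, "my-memory-id",
#        "/strategy/sem-strategy-1/actors/user-42/", "running shoes")
```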

# Memory strategies
<a name="memory-strategies"></a>

In AgentCore Memory, you can add memory strategies to your memory resource. These strategies determine what types of information to extract from raw conversations. Strategies are configurations that intelligently capture and persist key concepts from interactions, sent as events in the [CreateEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_CreateEvent.html) operation. You can add strategies to the memory resource as part of the [CreateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_CreateMemory.html) or [UpdateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_UpdateMemory.html) operations. Once enabled, these strategies are automatically executed on raw conversation events associated with that memory resource to extract long-term memories.

If no strategies are specified, long-term memory records will not be extracted for that memory.
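As a hedged sketch, attaching a built-in strategy at creation time might look like the following, assuming a boto3-style control-plane client (for example, `boto3.client("bedrock-agentcore-control")`). The strategy key, namespace field, and the unit of `eventExpiryDuration` are assumptions to verify against the `CreateMemory` API reference.

```python
def create_memory_with_semantic_strategy(control_client, name):
    """Create a memory resource with one built-in strategy attached.

    Without at least one strategy, no long-term memory records are
    extracted for the memory. Field names below are assumptions to
    confirm in the CreateMemory API reference.
    """
    return control_client.create_memory(
        name=name,
        eventExpiryDuration=30,  # retention for short-term events (assumed to be days)
        memoryStrategies=[{
            "semanticMemoryStrategy": {
                "name": "facts",
                # Template variables are resolved per strategy and actor.
                "namespaces": ["/strategy/{memoryStrategyId}/actors/{actorId}/"],
            }
        }],
    )
```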

AgentCore Memory supports a variety of memory strategies:

**Topics**
+ [Built-in strategies](#memory-built-in-strategies-desc)
+ [Built-in overrides](#memory-built-in-overrides-desc)
+ [Self-managed strategies](#memory-self-managed-strategies-desc)
+ [Built-in strategies](built-in-strategies.md)
+ [Customize a built-in strategy or create your own strategy](memory-custom-strategy.md)
+ [Self-managed strategy](memory-self-managed-strategies.md)

## Built-in strategies
<a name="memory-built-in-strategies-desc"></a>

AgentCore handles all memory extraction and consolidation automatically with predefined algorithms.
+ AgentCore handles all memory extraction and consolidation automatically
+ No configuration required beyond basic trigger settings
+ Uses predefined algorithms optimized and benchmarked for common use cases
+ Suitable for standard conversational AI applications
+ Limited customization options
+ Higher cost for storage

## Built-in overrides
<a name="memory-built-in-overrides-desc"></a>

Extends built-in strategies with targeted customization while using an AgentCore managed extraction pipeline.
+ Extends built-in strategies with targeted customization
+ Allows modification of prompts while still using the AgentCore managed extraction pipeline
+ Provides support for Amazon Bedrock models (invoked in your account)
+ Lower cost for storage than built-ins

## Self-managed strategies
<a name="memory-self-managed-strategies-desc"></a>

You have complete ownership of the memory processing pipeline, with custom extraction and consolidation algorithms.
+ Complete ownership of the memory processing pipeline
+ Custom extraction and consolidation algorithms using any model and prompts
+ Full control over memory record schemas, namespaces, and more
+ Integration with external systems and databases
+ Requires infrastructure setup and maintenance
+ Lower cost for storage than built-in strategies

A single memory resource can be configured to utilize both built-in and custom strategies simultaneously, providing flexibility to address diverse memory requirements.

# Built-in strategies
<a name="built-in-strategies"></a>

AgentCore Memory provides built-in strategies to create memories. Each built-in strategy consists of steps to handle memory creation, including the following (different strategies employ different steps):
+  **Extraction** – Identifies useful insights from short-term memory to place into long-term memory as memory records.
+  **Consolidation** – Determines whether to write useful information to a new record or an existing record.
+  **Reflection** – Generates insights across episodes.

Each step is defined by a system prompt, which is a combination of the following:
+  **Instructions** – Guide the LLM’s behavior. Can include step-by-step processing guidelines (how the model should reason and extract or consolidate information).
+  **Output schema** – How the model should present the result.

Each memory strategy provides a structured output format tailored to its purpose. The output is not uniform across strategies, because the type of information being stored and retrieved differs. This ensures that each memory type exposes only the fields most relevant to its strategy. You can find the output formats in the system prompts for each strategy.

You can combine multiple strategies when creating memories.

**Topics**
+ [Semantic memory strategy](semantic-memory-strategy.md)
+ [User preference memory strategy](user-preference-memory-strategy.md)
+ [Summary strategy](summary-strategy.md)
+ [Episodic memory strategy](episodic-memory-strategy.md)

# Semantic memory strategy
<a name="semantic-memory-strategy"></a>

The semantic memory strategy is designed to identify and extract key pieces of factual information and contextual knowledge from conversational data. This lets your agent build a persistent knowledge base about the entities, events, and key details discussed during an interaction.

 **Steps in the strategy** 

The semantic memory strategy includes the following steps:
+  **Extraction** – Identifies useful insights from short-term memory to place into long-term memory as memory records.
+  **Consolidation** – Determines whether to write useful information to a new record or an existing record.

**Note**  
The semantic strategy processes only `USER` and `ASSISTANT` role messages during extraction. For more information about roles in agent conversations, see [Conversational](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_Conversational.html).

 **Strategy output** 

The semantic memory strategy returns facts as JSON objects, each representing a standalone personal fact about the user.

 **Example of facts captured by this strategy** 
+ An order number (1XYZ-123) is associated with a specific support case.
+ A project’s deadline of October 25th.
+ The user is running version 2.1 of the software.

By referencing this stored knowledge, your agent can provide more accurate, context-aware responses, perform multi-step tasks that rely on previously stated information, and avoid asking users to repeat key details.

 **Default namespace** 

 `/strategy/{memoryStrategyId}/actors/{actorId}/` 
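To illustrate how the default namespace template resolves at runtime, here is a small, framework-free helper; the strategy and actor IDs used in the example are hypothetical.

```python
def resolve_namespace(template, **values):
    """Fill the {placeholder} variables in a namespace template."""
    for key, value in values.items():
        template = template.replace("{" + key + "}", value)
    return template

path = resolve_namespace(
    "/strategy/{memoryStrategyId}/actors/{actorId}/",
    memoryStrategyId="sem-1",
    actorId="user-42",
)
# → "/strategy/sem-1/actors/user-42/"
```

Because every extracted fact for an actor lands under a predictable path like this, retrieval calls can scope their search to a single user.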

**Topics**
+ [System prompt for semantic memory strategy](memory-system-prompt.md)

# System prompt for semantic memory strategy
<a name="memory-system-prompt"></a>

The semantic strategy includes instructions and output schemas in the default prompts for the extraction and consolidation steps.

## Extraction instructions
<a name="semantic-memory-extraction-instructions"></a>

```
You are a long-term memory extraction agent supporting a lifelong learning system. Your task is to identify and extract meaningful information about the users from a given list of messages.

Analyze the conversation and extract structured information about the user according to the schema below. Only include details that are explicitly stated or can be logically inferred from the conversation.

- Extract information ONLY from the user messages. You should use assistant messages only as supporting context.
- If the conversation contains no relevant or noteworthy information, return an empty list.
- Do NOT extract anything from prior conversation history, even if provided. Use it solely for context.
- Do NOT incorporate external knowledge.
- Avoid duplicate extractions.

IMPORTANT: Maintain the original language of the user's conversation. If the user communicates in a specific language, extract and format the extracted information in that same language.
```

## Extraction output schema
<a name="extraction-output-schema"></a>

```
Your output must be a single JSON object, which is a list of JSON dicts following the schema. Do not provide any preamble or any explanatory text.

<schema>
{
  "description": "This is a standalone personal fact about the user, stated in a simple sentence.\\nIt should represent a piece of personal information, such as life events, personal experience, and preferences related to the user.\\nMake sure you include relevant details such as specific numbers, locations, or dates, if presented.\\nMinimize the coreference across the facts, e.g., replace pronouns with actual entities.",
  "properties": {
    "fact": {
      "description": "The memory as a well-written, standalone fact about the user. Refer to the user's instructions for more information the prefered memory organization.",
      "title": "Fact",
      "type": "string"
    }
  },
  "required": [
    "fact"
  ],
  "title": "SemanticMemory",
  "type": "object"
}
</schema>
```

## Consolidation instructions
<a name="semantic-memory-consolidation-instructions"></a>

```
You are a conservative memory manager that preserves existing information while carefully integrating new facts.

Your operations are:
- **AddMemory**: Create new memory entries for genuinely new information
- **UpdateMemory**: Add complementary information to existing memories while preserving original content
- **SkipMemory**: No action needed (information already exists or is irrelevant)

If the operation is "AddMemory", you need to output:
1. The `memory` field with the new memory content

If the operation is "UpdateMemory", you need to output:
1. The `memory` field with the original memory content
2. The update_id field with the ID of the memory being updated
3. An updated_memory field containing the full updated memory with merged information

## Decision Guidelines

### AddMemory (New Information)
Add only when the retrieved fact introduces entirely new information not covered by existing memories.

**Example**:
- Existing Memory: `[{"id": "0", "text": "User is a software engineer"}]`
- Retrieved Fact: `["Name is John"]`
- Action: AddMemory with new ID

### UpdateMemory (Preserve + Extend)
Preserve existing information while adding new details. Combine information coherently without losing specificity or changing meaning.

**Critical Rules for UpdateMemory**:
- **Preserve timestamps and specific details** from the original memory
- **Maintain semantic accuracy** - don't generalize or change the meaning
- Only enhance when new information genuinely adds value without contradiction
- Only enhance when new information is **closely relevant** to existing memories
- Attend to novel information that deviates from existing memories and expectations
- Consolidate and compress redundant memories to maintain information-density; strengthen based on reliability and recency; maximize SNR by avoiding idle words

**Example**:
- Existing: `[{"id": "1", "text": "Caroline attended an LGBTQ support group meeting that she found emotionally powerful."}]`
- Retrieved: `["Caroline found the support group very helpful"]`
- Action: UpdateMemory to `"Caroline attended an LGBTQ support group meeting that she found emotionally powerful and very helpful."`

**When NOT to update**:
- Information is essentially the same: "likes pizza" vs "loves pizza"
- Updating would change the fundamental meaning
- New fact contradicts existing information (use AddMemory instead)
- New fact contains new events with timestamps that differ from existing facts. Since enhanced memories share timestamps with original facts, this would create temporal contradictions. Use AddMemory instead.

### SkipMemory (No Change)
Use when information already exists in sufficient detail or when new information doesn't add meaningful value.

## Key Principles

- Conservation First: Preserve all specific details, timestamps, and context
- Semantic Preservation: Never change the core meaning of existing memories
- Coherent Integration: Lets enhanced memories read naturally and logically
```

## Consolidation output schema
<a name="consolidation-output-schema"></a>

````
## Response Format

Return only this JSON structure, using double quotes for all keys and string values:
```json
[
  {
    "memory": {
      "fact": "<content>"
    },
    "operation": "<AddMemory_or_UpdateMemory>",
    "update_id": "<existing_id_for_UpdateMemory>",
    "updated_memory": {
      "fact": "<content>"
    }
  },
  ...
]
```

Only include entries with AddMemory or UpdateMemory operations. Return empty memory array if no changes are needed.
Do not return anything except the JSON format.
````

# User preference memory strategy
<a name="user-preference-memory-strategy"></a>

The `UserPreferenceMemoryStrategy` is designed to automatically identify and extract user preferences, choices, and styles from conversational data. This lets your agent learn from interactions and build a persistent, dynamic profile of each user over time.

 **Steps in the strategy** 

The user preference strategy includes the following steps:
+  **Extraction** – Identifies useful insights from short-term memory to place into long-term memory as memory records.
+  **Consolidation** – Determines whether to write useful information to a new record or an existing record.

**Note**  
The user preference strategy processes only `USER` and `ASSISTANT` role messages during extraction. For more information about roles in agent conversations, see [Conversational](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_Conversational.html).

 **Strategy output** 

The user preference strategy returns JSON objects with context, preference, and categories, making it easier to capture user choices and decision patterns.

 **Examples of insights captured by this strategy include:** 
+ A customer’s preferred shipping carrier or shopping brand.
+ A developer’s preferred coding style or programming language.
+ A user’s communication preferences, such as a formal or informal tone.

By leveraging this strategy, your agent can deliver highly personalized experiences, such as offering tailored recommendations, adapting its responses to a user’s style, and anticipating needs based on past choices. This creates a more relevant and effective conversational experience.

 **Default namespace** 

 `/strategy/{memoryStrategyId}/actors/{actorId}/` 

**Topics**
+ [System prompt for user preference memory strategy](memory-user-prompt.md)

# System prompt for user preference memory strategy
<a name="memory-user-prompt"></a>

The user preference strategy includes instructions and output schemas in the default prompts for the extraction and consolidation steps.

## Extraction instructions
<a name="user-preference-memory-extraction-instructions"></a>

```
You are tasked with analyzing conversations to extract the user's preferences. You'll be analyzing two sets of data:

<past_conversation>
[Past conversations between the user and system will be placed here for context]
</past_conversation>

<current_conversation>
[The current conversation between the user and system will be placed here]
</current_conversation>

Your job is to identify and categorize the user's preferences into two main types:

- Explicit preferences: Directly stated preferences by the user.
- Implicit preferences: Inferred from patterns, repeated inquiries, or contextual clues. Take a close look at user's request for implicit preferences.

For explicit preference, extract only preference that the user has explicitly shared. Do not infer user's preference.

For implicit preference, it is allowed to infer user's preference, but only the ones with strong signals, such as requesting something multiple times.
```

## Extraction output schema
<a name="extraction-output-schema"></a>

```
Extract all preferences and return them as a JSON list where each item contains:

1. "context": The background and reason why this preference is extracted.
2. "preference": The specific preference information
3. "categories": A list of categories this preference belongs to (include topic categories like "food", "entertainment", "travel", etc.)

For example:

[
  {
    "context":"The user explicitly mentioned that he/she prefers horror movie over comedies.",
    "preference": "Prefers horror movies over comedies",
    "categories": ["entertainment", "movies"]
  },
  {
    "context":"The user has repeatedly asked for Italian restaurant recommendations. This could be a strong signal that the user enjoys Italian food.",
    "preference": "Likely enjoys Italian cuisine",
    "categories": ["food", "cuisine"]
  }
]

Extract preferences only from <current_conversation>. Extract preference ONLY from the user messages. You should use assistant messages only as supporting context. Only extract user preferences with high confidence.

Maintain the original language of the user's conversation. If the user communicates in a specific language, extract and format the extracted information in that same language.

Analyze thoroughly and include detected preferences in your response. Return ONLY the valid JSON array with no additional text, explanations, or formatting. If there is nothing to extract, simply return empty list.
```

## Consolidation instructions
<a name="user-preference-memory-consolidation-instructions"></a>

```
# ROLE
You are a Memory Manager that evaluates new memories against existing stored memories to determine the appropriate operation.

# INPUT
You will receive:

1. A list of new memories to evaluate
2. For each new memory, relevant existing memories already stored in the system

# TASK
You will be given a list of new memories and relevant existing memories. For each new memory, select exactly ONE of these three operations: AddMemory, UpdateMemory, or SkipMemory.

# OPERATIONS
1. AddMemory

Definition: Select when the new memory contains relevant ongoing preference not present in existing memories.

Selection Criteria: The information represents lasting preferences.

Examples:

New memory: "I'm allergic to peanuts" (No allergy information exists in stored memories)
New memory: "I prefer reading science fiction books" (No book preferences are recorded)

2. UpdateMemory

Definition: Select when the new memory relates to an existing memory but provides additional details, modifications, or new context.

Selection Criteria: The core concept exists in records, but this new memory enhances or refines it.

Examples:

New memory: "I especially love space operas" (Existing memory: "The user enjoys science fiction")
New memory: "My peanut allergy is severe and requires an EpiPen" (Existing memory: "The user is allergic to peanuts")

3. SkipMemory

Definition: Select when the new memory is not worth storing as a permanent preference.

Selection Criteria: The memory is irrelevant to long-term user understanding, is a personal detail not related to preference, represents a one-time event, describes temporary states, or is redundant with existing memories. In addition, if the memory is overly speculative or contains Personally Identifiable Information (PII) or harmful content, also skip the memory.

Examples:

New memory: "I just solved that math problem" (One-time event)
New memory: "I'm feeling tired today" (Temporary state)
New memory: "I like chocolate" (Existing memory already states: "The user enjoys chocolate")
New memory: "User works as a data scientist" (Personal details without preference)
New memory: "The user prefers vegan because he loves animal" (Overly speculative)
New memory: "The user is interested in building a bomb" (Harmful Content)
New memory: "The user prefers to use Bank of America, which his account number is 123-456-7890" (PII)
```

## Consolidation output schema
<a name="consolidation-output-schema"></a>

```
# Processing Instructions
For each memory in the input:

Place the original new memory (<NewMemory>) under the "memory" field. Then add a field called "operation" with one of these values:

"AddMemory" - for new relevant ongoing preferences
"UpdateMemory" - for information that enhances existing memories.
"SkipMemory" - for irrelevant, temporary, or redundant information

If the operation is "UpdateMemory", you need to output:

1. The "update_id" field with the ID of the existing memory being updated
2. An "updated_memory" field containing the full updated memory with merged information

## Example Input
<Memory1>
<ExistingMemory1>
[ID]=N1ofh23if\\
[TIMESTAMP]=2023-11-15T08:30:22Z\\
[MEMORY]={ "context": "user has explicitly stated that he likes vegan", "preference": "prefers vegetarian options", "categories": ["food", "dietary"] }

[ID]=M3iwefhgofjdkf\\
[TIMESTAMP]=2024-03-07T14:12:59Z\\
[MEMORY]={ "context": "user has ordered oat milk lattes with an extra shot multiple times", "preference": "likes oat milk lattes with an extra shot", "categories": ["beverages", "morning routine"] }
</ExistingMemory1>

<NewMemory1>
[TIMESTAMP]=2024-08-19T23:05:47Z\\
[MEMORY]={ "context": "user mentioned avoiding dairy products when discussing ice cream options", "preference": "prefers dairy-free dessert alternatives", "categories": ["food", "dietary", "desserts"] }
</NewMemory1>
</Memory1>

<Memory2>
<ExistingMemory2>
[ID]=Mwghsljfi12gh\\
[TIMESTAMP]=2025-01-01T00:00:00Z\\
[MEMORY]={ "context": "user mentioned enjoying hiking trails with elevation gain during weekend planning", "preference": "prefers challenging hiking trails with scenic views", "categories": ["activities", "outdoors", "exercise"] }

[ID]=whglbidmrl193nvl\\
[TIMESTAMP]=2025-04-30T16:45:33Z\\
[MEMORY]={ "context": "user discussed favorite shows and expressed interest in documentaries about sustainability", "preference": "enjoys environmental and sustainability documentaries", "categories": ["entertainment", "education", "media"] }
</ExistingMemory2>

<NewMemory2>
[TIMESTAMP]=2025-09-12T03:27:18Z\\
[MEMORY]={ "context": "user researched trips to coastal destinations with public transportation options", "preference": "prefers car-free travel to seaside locations", "categories": ["travel", "transportation", "vacation"] }
</NewMemory2>
</Memory2>

<Memory3>
<ExistingMemory3>
[ID]=P4df67gh\\
[TIMESTAMP]=2026-02-28T11:11:11Z\\
[MEMORY]={ "context": "user has mentioned enjoying coffee with breakfast multiple times", "preference": "prefers starting the day with coffee", "categories": ["beverages", "morning routine"] }

[ID]=Q8jk12lm\\
[TIMESTAMP]=2026-07-04T19:45:01Z\\
[MEMORY]={ "context": "user has stated they typically wake up around 6:30am on weekdays", "preference": "has an early morning schedule on workdays", "categories": ["schedule", "habits"] }
</ExistingMemory3>

<NewMemory3>
[TIMESTAMP]=2026-12-25T22:30:59Z\\
[MEMORY]={ "context": "user mentioned they didn't sleep well last night and felt tired today", "preference": "feeling tired and groggy", "categories": ["sleep", "wellness"] }
</NewMemory3>
</Memory3>

## Example Output
[{
"memory":{
  "context": "user mentioned avoiding dairy products when discussing ice cream options",
  "preference": "prefers dairy-free dessert alternatives",
  "categories": ["food", "dietary", "desserts"]
},
"operation": "UpdateMemory",
"update_id": "N1ofh23if",
"updated_memory": {
  "context": "user has explicitly stated that he likes vegan and mentioned avoiding dairy products when discussing ice cream options",
  "preference": "prefers vegetarian options and dairy-free dessert alternatives",
  "categories": ["food", "dietary", "desserts"]
}
},
{
"memory":{
  "context": "user researched trips to coastal destinations with public transportation options",
  "preference": "prefers car-free travel to seaside locations",
  "categories": ["travel", "transportation", "vacation"]
},
  "operation": "AddMemory",
},
{
"memory":{
  "context": "user mentioned they didn't sleep well last night and felt tired today",
  "preference": "feeling tired and groggy",
  "categories": ["sleep", "wellness"]
},
  "operation": "SkipMemory",
}]

Like the example, return only the JSON list with the corresponding operations. Do NOT add any explanation.
```

# Summary strategy
<a name="summary-strategy"></a>

The `SummaryStrategy` is responsible for generating condensed, real-time summaries of conversations within a single session. It captures key topics, main tasks, and decisions, providing a high-level overview of the dialogue.

 **Steps in the strategy** 

The summary strategy includes the following steps:
+  **Consolidation** – Determines whether to write useful information to a new record or an existing record.

 **Strategy output** 

The summary strategy returns XML-formatted output, where each `<topic>` tag represents a distinct area of the user’s memory. XML allows multiple topics to be captured and organized in a single summary while preserving clarity.

A single session can have multiple summary chunks, each representing a portion of the conversation. Together, these chunks form the complete summary for the entire session.
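Because each chunk is organized into `<topic>` blocks, summary output is straightforward to post-process. The sketch below parses a hypothetical chunk; the chunk text is invented for illustration and only the `<topic name="...">` structure is taken from the format described above.

```python
# Sketch: parse the <topic> blocks out of a summary chunk's text.
# The chunk content below is hypothetical; real chunks follow the
# XML format described above.
import re

def parse_topics(summary_text: str) -> dict:
    """Return {topic_name: body} for each <topic name="..."> block."""
    pattern = r'<topic name="([^"]+)">(.*?)</topic>'
    return {name: body.strip()
            for name, body in re.findall(pattern, summary_text, re.DOTALL)}

chunk = '''<topic name="Order issue">User reported a damaged item; agent opened a replacement.</topic>
<topic name="Delivery">User asked for expedited shipping on the replacement.</topic>'''

topics = parse_topics(chunk)
```

An agent can then select only the topics relevant to the current turn instead of injecting the whole summary into the prompt.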

These summary chunks can be retrieved with the [ListMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_ListMemoryRecords.html) operation using a namespace filter. You can also perform semantic search over the summary chunks with the [RetrieveMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_RetrieveMemoryRecords.html) operation to retrieve only the chunks relevant to your query.

 **Examples of insights captured by this strategy include:** 
+ A summary of a support interaction, such as "The user reported an issue with order \#1XYZ-123, and the agent initiated a replacement."
+ The outcome of a planning session, like "The team agreed to move the project deadline to Friday."

By referencing this summary, an agent can quickly recall the context of a long or complex conversation without needing to re-process the entire history. This is essential for maintaining conversational flow and for efficiently managing the context window of the foundation model.

 **Default namespace** 

 `/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/` 

**Note**  
 `sessionId` is a required parameter for the summary namespace because summaries are generated and maintained at the session level.
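As a minimal sketch, the snippet below builds the session-scoped default namespace and runs a semantic search over that session's summary chunks. It assumes the boto3 `bedrock-agentcore` data-plane client and the `RetrieveMemoryRecords` operation referenced above; the memory, strategy, actor, and session IDs are placeholders.

```python
# Sketch: build the default summary namespace and search that
# session's summary chunks. Client name, operation, and response
# key follow the RetrieveMemoryRecords API reference; all IDs are
# placeholders.
def summary_namespace(strategy_id: str, actor_id: str, session_id: str) -> str:
    """Resolve the default summary namespace template (session-scoped)."""
    return f"/strategy/{strategy_id}/actor/{actor_id}/session/{session_id}/"

def fetch_relevant_summaries(memory_id, namespace, query, top_k=5):
    import boto3  # imported here so the pure helper above has no dependency
    client = boto3.client("bedrock-agentcore")
    resp = client.retrieve_memory_records(
        memoryId=memory_id,
        namespace=namespace,
        searchCriteria={"searchQuery": query, "topK": top_k},
    )
    return resp["memoryRecordSummaries"]

ns = summary_namespace("summary-abc", "user-123", "session-456")
```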

**Topics**
+ [System prompt for summary strategy](memory-summary-prompt.md)

# System prompt for summary strategy
<a name="memory-summary-prompt"></a>

The summary strategy includes instructions and an output schema in the default system prompt for a single consolidation step.

## Consolidation instructions
<a name="consolidation-instructions"></a>

There are no consolidation instructions for the built-in summary strategy.

## Consolidation output schema
<a name="consolidation-output-schema"></a>

```
You are a summary generator. You will be given a text block, a concise global summary, and a detailed summary you previously generated.
<task>
- Given the contexts (e.g. global summary, detailed previous summary), your goal is to generate
(1) a concise global summary that keeps the main target of the conversation, such as the task and the requirements.
(2) a detailed delta summary of the given text block, without repeating the historical detailed summary.
- The previous summary is a context for you to understand the main topics.
- You should only output the delta summary, not the whole summary.
- The generated delta summary should be as concise as possible.
</task>
<extra_task_requirements>
- Summarize with the same language as the given text block.
    - If the messages are in a specific language, summarize with the same language.
</extra_task_requirements>

When you generate global summary you ALWAYS follow the below guidelines:
<guidelines_for_global_summary>
- The global summary should be concise and to the point, only keep the most important information such as the task and the requirements.
- If there is no new high-level information, do not change the global summary. If there is new tasks or requirements, update the global summary.
- The global summary will be pure text wrapped by <global_summary></global_summary> tag.
- The global summary should not exceed the specified word count limit.
- Track the size of the global summary by counting its words. If the word count reaches the limit, try to compress the global summary.
</guidelines_for_global_summary>

When you generate detailed delta summaries you ALWAYS follow the below guidelines:
<guidelines_for_delta_summary>
- Each summary MUST be formatted in XML format.
- You should cover all important topics.
- The summary of the topic should be placed between <topic name="$TOPIC_NAME"></topic>.
- Only include information that is explicitly stated or can be logically inferred from the conversation.
- Consider the timestamps when you synthesize the summary.
- NEVER start with phrases like 'Here's the summary...', provide directly the summary in the format described below.
</guidelines_for_delta_summary>

The XML format of each summary is as it follows:

<existing_global_summary_word_count>
    $Word Count
</existing_global_summary_word_count>

<global_summary_condense_decision>
    The total word count of the existing global summary is $Total Word Count.
    The word count limit for global summary is $Word Count Limit.
    Since we exceed/do not exceed the word count limit, I need to condense the existing global summary/I don't need to condense the existing global summary.
</global_summary_condense_decision>

<global_summary>
    ...
</global_summary>

<delta_detailed_summary>
    <topic name="$TOPIC_NAME">
        ...
    </topic>
    ...
</delta_detailed_summary>
```

**Note**  
Built-in strategies may use [cross-region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html). Amazon Bedrock automatically selects the optimal Region within your geography to process your inference request, maximizing available compute resources and model availability. There is no additional cost for using cross-region inference.

# Episodic memory strategy
<a name="episodic-memory-strategy"></a>

 **Episodic memory** captures meaningful slices of user and system interactions so applications can recall context in a way that feels focused and relevant. Instead of storing every raw event, it identifies important moments, summarizes them into compact records, and organizes them so the system can retrieve what matters without noise. This creates a more adaptive and intelligent experience by allowing models to understand how context has evolved over time.

Its strength comes from having structured context that spans many interactions, while remaining efficient to store, search, and update. Developers get a balance of freshness, accuracy, and long term continuity without needing to engineer their own summarization pipelines.

 **Reflections** build on episodic records by analyzing past episodes to surface insights, patterns, and higher level conclusions. Instead of simply retrieving what happened, reflections help the system understand why certain events matter and how they should influence future behavior. They turn raw experience into guidance the application can use immediately, giving models a way to learn from history.

Their value comes from lifting information above individual moments, or episodes, and creating durable knowledge that improves decision making, personalization, and consistency. This helps applications avoid repeating mistakes, adapt more quickly to user preferences, and behave in a way that feels coherent over long periods.

Customers should use episodic memory in any scenario where understanding a sequence of past interactions improves quality, as well as scenarios where long term improvement matters. Ideal use cases include customer support conversations, agent driven workflows, code assistants that rely on session history, personal productivity tools, troubleshooting or diagnostic flows, and applications that need context grounded in real prior events rather than static profiles.

When you invoke the episodic strategy, AgentCore automatically detects episode completion within conversations and processes events into structured episode records.

 **Steps in the strategy** 

The episodic memory strategy includes the following steps:
+  **Extraction** – Analyzes the in-progress episode and determines whether the episode is complete.
+  **Consolidation** – When an episode is complete, combines the extractions into a single episode record.
+  **Reflection** – Generates insights across episodes.

 **Strategy output** 

The episodic memory strategy returns XML-formatted output for both episodes and reflections. Each episode is broken down into a situation, intent, assessment, justification, and episode-level reflection. As the interaction proceeds, the episode is analyzed turn-by-turn. You can use this information to better understand the order of operations and tool use.
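For illustration, a consolidated episode can be read back field by field. The record text below is hypothetical, but the tags follow the situation, intent, assessment, justification, and reflection fields named above.

```python
# Sketch: read the fields of a consolidated episode record.
# The record text is hypothetical; real episodes follow the XML
# consolidation output schema shown in this topic.
import xml.etree.ElementTree as ET

episode_xml = """<summary>
<situation>User needed to redeploy a failed service build.</situation>
<intent>Get the deployment green again.</intent>
<assessment>Yes</assessment>
<justification>The retry with the alternative tool succeeded.</justification>
<reflection>Prefer the rollback tool when the first deploy errors out.</reflection>
</summary>"""

root = ET.fromstring(episode_xml)
episode = {child.tag: child.text.strip() for child in root}
```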

 **Examples of episodes captured by this strategy** 
+ A code deployment interaction where the agent selected specific tools, encountered an error, and successfully resolved it using an alternative approach.
+ An appointment rescheduling task that captured the user’s intent, the agent’s decision to use a particular tool, and the successful outcome.
+ A data processing workflow that documented which parameters led to optimal performance for a specific data type.

The episodic strategy includes memory extraction and consolidation steps (shared with other strategies). In addition, the episodic strategy also generates reflections, which analyze episodes in the background as interactions take place. Reflections consolidate across multiple episodes to extract broader insights that identify successful strategies and patterns, potential improvements, common failure modes, and lessons learned that span multiple interactions.

 **Examples of reflections include** 
+ Identifying which tool combinations consistently lead to successful outcomes for specific task types.
+ Recognizing patterns in failed attempts and the approaches that resolved them.
+ Extracting best practices from multiple successful episodes with similar scenarios.

The following image schematizes the episodic memory strategy:

![\[Schema of episodic memory strategy.\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/memory/episodic-memory-strategy.png)


By referencing stored episodes, your agent can retrieve relevant past experiences through semantic search and review reflections to avoid repeating failed approaches and to adapt successful strategies to new contexts. This strategy is useful for agents that benefit from identifying patterns, need to continually update information, maintain consistency across interactions, and require context and reasoning rather than static knowledge to make decisions.

**Topics**
+ [Namespaces](#episodic-memory-strategy-namespaces)
+ [How to best retrieve episodes to improve agentic performance](#memory-episodic-retrieve-episodes)
+ [System prompts for episodic memory strategy](memory-episodic-prompt.md)

## Namespaces
<a name="episodic-memory-strategy-namespaces"></a>

When you create a memory with the episodic strategy, you define namespaces under which to store episodes and reflections.

**Note**  
Regardless of the namespace you choose to store episodes in, episodes are always created from a single session.

Episodes are commonly stored in one of the following namespaces:
+  `/strategy/{memoryStrategyId}/` – Store episodes at the strategy level. Episodes that have different actors or that come from different sessions, but that belong to the same strategy, are stored in the same namespace.
+  `/strategy/{memoryStrategyId}/actor/{actorId}/` – Store all episodes at the actor level. Episodes that come from different sessions, but that belong to the same actor, are stored in the same namespace.
+  `/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/` – Store all episodes at the session level. Episodes that belong to the same session are stored in the same namespace.
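As a sketch, the namespace templates above resolve by simple substitution of the strategy, actor, and session identifiers. The resolver helper and the identifiers below are hypothetical.

```python
# Sketch: resolve a namespace template into a concrete namespace.
# The placeholders match the patterns listed above; the identifiers
# are hypothetical.
def resolve_namespace(template: str, **ids) -> str:
    for key, value in ids.items():
        template = template.replace("{" + key + "}", value)
    return template

actor_level = resolve_namespace(
    "/strategy/{memoryStrategyId}/actor/{actorId}/",
    memoryStrategyId="episodic-xyz",
    actorId="user-123",
)
```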

Reflections must match the same namespace pattern as episodes, but reflections can be less nested. For example, if your episodic namespace is `/strategy/{memoryStrategyId}/actor/{actorId}/` , you can use the following namespaces for reflections:
+  `/strategy/{memoryStrategyId}/actor/{actorId}/` – Insights will be extracted across all episodes for an actor.
+  `/strategy/{memoryStrategyId}/` – Insights will be extracted across all episodes and across all actors for the strategy.

**Important**  
Because reflections can span multiple actors within the same memory resource, consider the privacy implications of cross-actor analysis when retrieving reflections. Consider using [guardrails in conjunction with memory](https://github.com/awslabs/amazon-bedrock-agentcore-samples/tree/main/01-tutorials/04-AgentCore-memory/03-advanced-patterns/01-guardrails-integration) or reflecting at the actor level if this is a concern.

## How to best retrieve episodes to improve agentic performance
<a name="memory-episodic-retrieve-episodes"></a>

There are multiple ways to utilize episodic memory:
+ Within your agent code
  + When starting a new task, configure your agent to query for the most similar episodes and reflections. You can also query for relevant episodes and reflections later in the task, based on your agent's logic.
  + When creating short-term memories with [CreateEvent](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_CreateEvent.html), include `TOOL` results for the best extraction quality.
  + For similar successful episodes, linearize the turns within the episode and feed only this to the agent so that it focuses on the main steps.
+ Manually
  + Look at your reflections or unsuccessful episodes, and consider whether some issues could be solved with updates to your agent code.
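The linearization tip above can be sketched as a small helper. The turn fields mirror the episode extraction schema, and the turn data, tool names, and arguments are invented for illustration.

```python
# Sketch: flatten a successful episode's turns into a compact text
# block an agent can consume. Turn fields mirror the extraction
# schema; the content and tool names are hypothetical.
def linearize_episode(turns):
    lines = []
    for turn in turns:
        lines.append(f"Turn {turn['turn_id']}: {turn['intent']} -> {turn['action']}")
    return "\n".join(lines)

turns = [
    {"turn_id": 0, "intent": "Locate the failing build",
     "action": "Called list_builds(status='FAILED')"},
    {"turn_id": 1, "intent": "Retry the deployment",
     "action": "Called redeploy(build_id='b-42')"},
]
steps = linearize_episode(turns)
```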

When performing retrievals, note that memory records are indexed on "intent" for episodes and on "use case" for reflections.

For other memory strategies, memory records are generated on a regular basis throughout an interaction. Episodic memory records, by contrast, are generated only after AgentCore Memory detects a completed episode. If an episode is not complete, record generation takes longer because the system waits to see whether the conversation continues.

# System prompts for episodic memory strategy
<a name="memory-episodic-prompt"></a>

The episodic memory strategy includes instructions and output schemas in the default prompts for episode extraction, episode consolidation, and reflection generation.

**Topics**
+ [Episode extraction instructions](#episode-extraction-instructions)
+ [Episode extraction output schema](#episode-extraction-output-schema)
+ [Episode consolidation instructions](#episode-generation-instructions)
+ [Episode consolidation output schema](#episode-generation-output-schema)
+ [Reflection generation instructions](#reflection-generation-instructions)
+ [Reflection generation output schema](#reflection-generation-output-schema)

## Episode extraction instructions
<a name="episode-extraction-instructions"></a>

```
You are an expert conversation analyst. Your task is to analyze multiple turns of conversation between a user and an AI assistant, focusing on tool usage, input arguments, and reasoning processes.

# Analysis Framework:

## 1. Context Analysis
- Examine all conversation turns provided within <conversation></conversation> tags
- Each turn will be marked with <turn_[id]></turn_[id]> tags
- Identify the circumstances and context that the assistant is responding to in each interaction
- Try to identify or recover the user's overall objective for the entire conversation, which may go beyond the given conversation turns
- When available, incorporate context from <previous_[k]_turns></previous_[k]_turns> tags to understand the user's broader objectives from provided conversation history

## 2. Assistant Analysis (Per Turn)
For EACH conversation turn, analyze the assistant's approach by identifying:
- **Context**: The circumstances and situation the assistant is responding to, and how the assistant's goal connects to the user's overall objective (considering previous interactions when available)
- **Intent**: The assistant's primary goal for this specific conversation turn
- **Action**: Which specific tools were used with what input arguments and sequence of execution. If no tools were used, describe the concrete action/response the assistant took.
- **Reasoning**: Why these tools were chosen, how arguments were determined, and what guided the decision-making process. If no tools were used, explain the reasoning behind the assistant's action/response.

## 3. Outcome Assessment (Per Turn)
For EACH turn, using the next turn's user message:
- Determine whether the assistant successfully achieved its stated goal
- Evaluate the effectiveness of the action taken — what worked well and what didn't
- Assess whether the user's overall objective has been satisfied, remains in progress, or is evolving

**Do not include any PII (personally identifiable information) or user-specific data in your output.**
```

## Episode extraction output schema
<a name="episode-extraction-output-schema"></a>

```
You MUST provide a separate <summary> block for EACH conversation turn. Number them sequentially:

<summary>
<summary_turn>
<turn_id>
The id of the turn that matches the input, e.g. 0, 1, 2, etc.
</turn_id>
<situation>
A brief description of the circumstances and context that the assistant is responding to in this turn, including the user's overall objective (which may go beyond this specific turn) and any relevant history from previous interactions
</situation>
<intent>
The assistant's primary goal for this specific interaction—what the assistant aimed to accomplish in this turn
</intent>
<action>
Briefly describe which actions were taken or specific tools were used, what input arguments or parameters were provided to each tool.
</action>
<thought>
Briefly explain why these specific tools or actions were chosen for this task, how the input arguments were determined (whether from the user's explicit request or inferred from context), what constraints or requirements influenced the approach, and what information guided the decision-making process
</thought>
<assessment_assistant>
Start with Yes or No — Whether the assistant successfully achieved its stated goal for this turn
Then add a brief justification based on the relevant context
</assessment_assistant>
<assessment_user>
Yes or No - Whether this turn represents the END OF THE CONVERSATION EPISODE (the user's current inquiry has concluded). Then add a brief explanation by considering messages in the next turns: 1. If this turn represents the END OF THE CONVERSATION EPISODE (the user's current inquiry has concluded), then Yes (it is a clear signal that the user's inquiry has concluded). 2. If the user is continuing with new questions or shifting to other task, then Yes (it is a clear signal that the user is finished with the current task and is ready to move on to the next task). 3. If the user is asking for clarification or more information for the current task, indicating that the user's inquiry is in progress, then No (it is a clear signal that the user's inquiry is not yet concluded). 4. If there is no next turn and there is no clear signal showing that the user's inquiry has concluded, then No.
</assessment_user>
</summary_turn>

<summary_turn>
<turn_id>...</turn_id>
<situation>...</situation>
<intent>...</intent>
<action>...</action>
<thought>...</thought>
<assessment_assistant>...</assessment_assistant>
<assessment_user>...</assessment_user>
</summary_turn>

... continue for all turns ...

<summary_turn>
<turn_id>...</turn_id>
<situation>...</situation>
<intent>...</intent>
<action>...</action>
<thought>...</thought>
<assessment_assistant>...</assessment_assistant>
<assessment_user>...</assessment_user>
</summary_turn>
</summary>

Attention: Only output 1-2 sentences for each field. Be concise and avoid lengthy explanations.
Make sure the number of <summary_turn> is the same as the number of turns in the conversation.
```

## Episode consolidation instructions
<a name="episode-generation-instructions"></a>

```
You are an expert conversation analyst. Your task is to analyze and summarize conversations between a user and an AI assistant provided within <conversation_turns></conversation_turns> tags.

# Analysis Objectives:
- Provide a comprehensive summary covering all key aspects of the interaction
- Understand the user's underlying needs and motivations
- Evaluate the effectiveness of the conversation in meeting those needs

# Analysis Components:
Examine the conversation through the following dimensions:
**Situation**: The context and circumstances that prompted the user to initiate this conversation—what was happening that led them to seek assistance?
**Intent**: The user's primary goal, the problem they wanted to solve, or the outcome they sought to achieve through this interaction.
**Assessment**: A definitive evaluation of whether the user's goal was successfully achieved.
**Justification**: Clear reasoning supported by specific evidence from the conversation that explains your assessment.
**Reflection**: Key insights from the sequence of turns, focusing on patterns in tool usage, reasoning processes, and decision-making. Identify effective tool selection and argument patterns, reasoning or tool choices to avoid, and actionable recommendations for similar situations.
```

## Episode consolidation output schema
<a name="episode-generation-output-schema"></a>

```
# Output Format:

Provide your analysis using the following structured XML format:
<summary>
<situation>
Brief description of the context and circumstances that prompted this conversation—what led the user to seek assistance at this moment
</situation>
<intent>
The user's primary goal, the specific problem they wanted to solve, or the concrete outcome they sought to achieve
</intent>
<assessment>
[Yes/No] — Whether the user's goal was successfully achieved
</assessment>
<justification>
Brief justification for your assessment based on key moments from the conversation
</justification>
<reflection>
Synthesize key insights from the sequence of turns, focusing on patterns in tool usage, reasoning processes, and decision-making that led to success or failure. Identify effective tool selection and argument patterns that worked well, reasoning or tool choices that should be avoided.
</reflection>
</summary>
```

## Reflection generation instructions
<a name="reflection-generation-instructions"></a>

```
You are an expert at extracting actionable insights from agent task execution trajectories to build reusable knowledge for future tasks.

# Task:
Analyze the provided episodes and their reflection knowledge, and synthesize new reflection knowledge that can guide future scenarios.

# Input:
- **Main Episode**: The primary trajectory to reflect upon (context, goal, and execution steps)
- **Relevant Episodes**: Relevant trajectories that provide additional context and learning opportunities
- **Existing Reflection Knowledge**: Previously generated reflection insights from related episodes (each with an ID) that can be synthesized or expanded upon

# Reflection Process:

## 1. Pattern Identification
- First, review the main episode's user_intent (goal), description (context), turns (actions and thoughts), and reflection/finding (lessons learned)
- Then, review the relevant episodes and identify NEW patterns across episodes
- Review existing reflection knowledge to understand what's already been learned
- When agent system prompt is available, use it to understand the agent's instructions, capabilities, constraints, and requirements
- Finally, determine if patterns update existing knowledge or represent entirely new insights

## 2. Knowledge Synthesis
For each identified pattern, create a reflection entry with:

### Operator
Specify one of the following operations:
- **add**: This is a completely new reflection that addresses patterns not covered by existing reflection knowledge. Do NOT include an <id> field.
- **update**: This reflection is an updated/improved version of an existing reflection from the input. ONLY use "update" when the new pattern shares the SAME core concept or title as an existing reflection. Include the existing reflection's ID in the <id> field.
    - Length constraint: If updating would make the combined use_cases + hints exceed 300 words, create a NEW reflection with "add" instead. Split the pattern into a more specific, focused insight rather than growing the existing one indefinitely.

### ID (only for "update" operator)
If operator is "update", specify the ID of the existing reflection that this new reflection expands upon. This ID comes from the existing reflection knowledge provided in the input.

### Title
Concise, descriptive name for the insight (e.g., "Error Recovery in API Calls", "Efficient File Search Strategies").
    - When updating, keep the same title or a very similar variant to indicate it's the same conceptual pattern.
    - When adding due to length constraint: Use a more specific variant of the title that narrows the scope (e.g., "Error Recovery in API Calls" → "Error Recovery in API Rate Limiting Scenarios")

### Applied Use Cases
Briefly describe when this applies, including:
- The types of goals (based on episode user_intents) where this insight helps
- The problems or challenges this reflection addresses
- Trigger conditions that signal when to use this knowledge

**When updating an existing reflection (within length limit):** Summarize both the original use cases and the new ones to create a comprehensive view.

### Concrete Hints
Briefly describe actionable guidance based on the identified patterns. Examples to include:
- Tool selection and usage patterns from successful episodes
- What worked well and what to avoid (from failures)
- Decision criteria for applying these patterns
- Specific reasoning details and context that explain WHY these patterns work

**When updating an existing reflection (within length limit):** If the new episodes reveal NEW hints, strategies, or patterns not in the existing reflection, ADD them to this section. Summarize both the original hints and the new ones to create a comprehensive view.

### Confidence Score
Score from 0.1 to 1.0 (0.1 increments) indicating how useful this will be for future agents:
- Higher (0.8-1.0): Clear actionable patterns that consistently led to success/failure
- Medium (0.4-0.7): Useful insights but context-dependent or limited evidence
- Lower (0.1-0.3): Tentative patterns that may not generalize well

When updating, adjust the confidence score based on the additional evidence from new episodes.

## 3. Synthesis Guidelines
- **When updating (within length limits)**:
  - Keep the update concise - integrate new insights efficiently without verbose repetition
  - DO NOT lose valuable information from the original reflection
- **When a reflection becomes too long**: Split it into more specific, focused reflections
    - Each new reflection should be self-contained and focused on a specific sub-pattern
- Focus on **transferable** knowledge, not task-specific details
- Emphasize **why** certain approaches work, not just what was done
- Include both positive patterns (what to do) and negative patterns (what to avoid)
- If the existing reflection knowledge already covers the patterns well and no new insights emerge, generate fewer or no new reflections
```

## Reflection generation output schema
<a name="reflection-generation-output-schema"></a>

```
<attention>
Aim for high-quality reflection entries that either add new learnings or update existing reflection knowledge.
    - Keep reflections focused and split them into more specific patterns when they grow too long.
    - Keep the use_cases and hints focused: Aim for 100-200 words.
    - If it's growing beyond this, consider if you should create a new, more specific reflection instead.
</attention>

# Output Format:

<reflections>
<reflection>
<operator>[add or update]</operator>
<id>[ID of existing reflection being expanded - only include this field if operator is "update"]</id>
<title>[Clear, descriptive title - keep same/similar to original when updating]</title>
<use_cases>
[Briefly describe the types of goals (from episode user_intents), problems addressed, trigger conditions. When updating: combine original use cases and new ones from recent episodes]
</use_cases>
<hints>
[Briefly describe tool usage patterns, what works, what to avoid, decision criteria, reasoning details. When updating: combine original hints and new insights from recent episodes]
</hints>
<confidence>[0.1 to 1.0]</confidence>
</reflection>
</reflections>
```

# Customize a built-in strategy or create your own strategy
<a name="memory-custom-strategy"></a>

For advanced or domain-specific use cases, you can create a built-in with overrides strategy to gain fine-grained control over the long-term memory process. Built-in with overrides strategies let you override the default behavior of the built-in strategies by providing your own instructions and selecting a specific foundation model for the extraction and consolidation steps.

## When to use a built-in with overrides strategy
<a name="when-to-use-custom-strategies"></a>

Choose a built-in with overrides strategy when you need to tailor the memory logic to your specific requirements. Common use cases include:
+  **Domain-specific extraction:** Constraining the strategy to extract only certain types of information. For example, capturing a user’s food and dietary preferences while ignoring their preferences for clothing.
+  **Controlling granularity:** Adjusting the level of detail in the extracted memory. For example, instructing the summary strategy to only save high-level key points rather than a detailed narrative.
+  **Model selection:** Using a specific foundation model that is better suited for your particular domain or task, such as a model fine-tuned for financial or legal text.

## How to customize a strategy
<a name="how-to-customize-strategy"></a>

To override a built-in strategy with a custom configuration, you specify the configuration when you use the [CreateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_CreateMemory.html) operation or the [UpdateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_UpdateMemory.html) operation. You can override the following:
+ The instructions in the system prompt (however, the output schema remains the same). To create an effective custom prompt, you should first understand the default prompts. For more information, see the system prompt section for each [built-in strategy](built-in-strategies.md).
+ The Amazon Bedrock model with which to invoke the prompt. For more information, see [Add or remove access to Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html) . For more information about obtaining sufficient Amazon Bedrock capacity, see [Amazon Bedrock capacity for built-in with overrides strategies](bedrock-capacity.md).

The following table shows the steps you can override for each memory strategy:


| Strategy | Steps you can override | 
| --- | --- | 
|  Semantic  |  Extraction, Consolidation  | 
|  Summary  |  Consolidation  | 
|  User preference  |  Extraction, Consolidation  | 
|  Episodic  |  Extraction, Consolidation, Reflection  | 

**Note**  
You can also override with a self-managed strategy. For more information, see [Self-managed strategy](memory-self-managed-strategies.md).

## Example: Customize a strategy
<a name="customize-strategy-example"></a>

For example, you can customize the semantic memory strategy by modifying its extraction instructions.

 **Built-in semantic memory strategy instructions** 

The following shows the built-in instructions for the semantic memory strategy:

```
You are a long-term memory extraction agent supporting a lifelong learning system. Your
task is to identify and extract meaningful information about the users from a given
list of messages.

Analyze the conversation and extract structured information about the user according to
the schema below. Only include details that are explicitly stated or can be logically
inferred from the conversation.

- Extract information ONLY from the user messages. You should use assistant messages
only as supporting context.
- If the conversation contains no relevant or noteworthy information, return an empty
list.

- Do NOT extract anything from prior conversation history, even if provided. Use it
solely for context.
- Do NOT incorporate external knowledge.
- Avoid duplicate extractions.

IMPORTANT: Maintain the original language of the user's conversation. If the user
communicates in a specific language, extract and format the extracted information
in that same language.
```

You could append a new rule to these instructions, such as:

```
- Focus exclusively on extracting facts related to travel and booking preferences.
```

or edit an existing rule such as:

```
IMPORTANT: Always extract memories in English irrespective of the original language of the user's conversation.
```
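As an illustrative sketch only, such an override might be passed to the `CreateMemory` operation through a `customMemoryStrategy` with a `semanticOverride` configuration; the field names and model ID below are assumptions, so verify them against the CreateMemory API reference before use:

```
# Illustrative sketch: override the semantic extraction instructions and model.
# Field names and the model ID are assumptions; check the CreateMemory API reference.
aws bedrock-agentcore-control create-memory \
  --name "TravelPreferenceMemory" \
  --event-expiry-duration 90 \
  --memory-execution-role-arn "arn:aws:iam::123456789012:role/AgentCoreMemoryRole" \
  --memory-strategies '[
    {
      "customMemoryStrategy": {
        "name": "TravelSemanticOverride",
        "configuration": {
          "semanticOverride": {
            "extraction": {
              "appendToPrompt": "- Focus exclusively on extracting facts related to travel and booking preferences.",
              "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0"
            }
          }
        }
      }
    }
  ]'
```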

## Best practices for customization
<a name="best-practices-customization"></a>
+  **Build upon existing instructions:** We strongly recommend that you use the built-in strategy instructions as a starting point. The base structure and instructions are critical to the memory functionality. Add your task-specific guidance to the existing instructions rather than writing entirely new ones from scratch.
+  **Provide clear instructions:** The content of `appendToPrompt` replaces the default instructions in the system prompt. Make sure your instructions are logical and complete, as empty or poorly formed instructions can lead to undesirable results.

## Important considerations
<a name="important-considerations"></a>

To maintain the reliability of the memory pipeline, please adhere to the following guidelines:
+  **Do not modify schemas in prompts:** You should only add or modify instructions that guide *how* memories are extracted or consolidated. Do not attempt to alter the conversation or memory schema definitions within the prompt itself, as this can cause unexpected failures.
+  **Do not rename consolidation operations:** When customizing a consolidation prompt, do not change the operation names (e.g., `AddMemory` , `UpdateMemory` ). Altering these names will cause the long-term memory pipeline to fail.
+  **Output schema is not editable:** Built-in with overrides strategies do not let you change the final output schema of the extracted or consolidated memory. The output schema added to the system prompt for a built-in with overrides strategy is the same as for the built-in strategies. For information about the output schemas, see the following:
  +  [System prompt for semantic memory strategy](memory-system-prompt.md) 
  +  [System prompt for user preference memory strategy](memory-user-prompt.md) 
  +  [System prompt for summary strategy](memory-summary-prompt.md) 

For full control over the end-to-end memory process, including the output schema, see [Self-managed strategy](memory-self-managed-strategies.md).

## Execution role
<a name="execution-role"></a>

When using built-in with overrides strategies, you are also required to provide a `memoryExecutionRoleArn` in the `CreateMemory` API. Amazon Bedrock AgentCore assumes this role to call the Amazon Bedrock models in your AWS account for extraction and consolidation.

**Note**  
When using built-in with overrides strategies, the LLM usage for extraction and consolidation will be charged separately to your AWS account, and additional charges may apply.

# Self-managed strategy
<a name="memory-self-managed-strategies"></a>

A self-managed strategy in Amazon Bedrock AgentCore Memory gives you complete control over your memory extraction and consolidation pipelines. With a self-managed strategy, you can build custom memory processing workflows while leveraging Amazon Bedrock AgentCore for storage and retrieval.

A self-managed strategy, in combination with the batch operations ([BatchCreateMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_BatchCreateMemoryRecords.html), [BatchUpdateMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_BatchUpdateMemoryRecords.html), [BatchDeleteMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_BatchDeleteMemoryRecords.html)), lets you directly ingest extracted records into Amazon Bedrock AgentCore memory for search capabilities.

With self-managed strategies, you can:
+ Control pipeline invocation through configurable triggers
+ Integrate with external processing systems
+ Implement custom extraction and consolidation algorithms
+ Invoke any preferred model for extraction and consolidation
+ Define custom memory record schemas, namespaces, and so on.
+ Ingest extracted records into Amazon Bedrock AgentCore long term memory

**Topics**
+ [Create and use a self-managed strategy](#use-self-managed-strategy)
+ [Prerequisites](#prerequisites)
+ [Set up the infrastructure](#setting-up-infrastructure)
+ [Create a self-managed strategy](#creating-self-managed-strategy)
+ [Understanding payload delivery](#understanding-payload-delivery)
+ [Build your custom pipeline](#building-custom-pipeline)
+ [Test your implementation](#testing-implementation)
+ [Best practices](#best-practices)

## Create and use a self-managed strategy
<a name="use-self-managed-strategy"></a>

Self-managed strategies follow a five-step process from trigger configuration to memory record storage.

1.  **Configure triggers** : Define trigger conditions (message count, idle timeout, token count) that invoke your pipeline based on short-term memory events

1.  **Receive notifications and payload delivery** : Amazon Bedrock AgentCore publishes notifications to your SNS topic and delivers conversation data to your S3 bucket when trigger conditions are met

1.  **Extract memory records** : Your custom pipeline retrieves the payload and applies extraction logic to identify relevant memories

1.  **Consolidate memory records** : Process extracted memories to remove duplicates and resolve conflicts with existing records

1.  **Store memory records** : Use batch APIs to store processed memory records back into Amazon Bedrock AgentCore long-term memory

## Prerequisites
<a name="prerequisites"></a>

Before setting up self-managed strategies, verify you have:
+ An AWS account with appropriate permissions
+ Amazon Bedrock AgentCore access
+ Basic understanding of AWS IAM, Amazon S3, and Amazon SNS

## Set up the infrastructure
<a name="setting-up-infrastructure"></a>

Create the required AWS resources that Amazon Bedrock AgentCore needs to access on your behalf: an S3 bucket, an SNS topic, and an IAM role.

### Step 1: Create an S3 bucket
<a name="create-s3-bucket"></a>

Create an S3 bucket in your account where Amazon Bedrock AgentCore will deliver batched event payloads.

**Important**  
Configure a lifecycle policy to automatically delete objects after processing to control costs.

### Step 2: Create an SNS topic
<a name="create-sns-topic"></a>

Create an SNS topic for job notifications. Use FIFO topics if processing order within sessions is important for your use case.

### Step 3: Create an IAM role
<a name="create-iam-role"></a>

Create an IAM role that Amazon Bedrock AgentCore can assume to access your resources.

 **Trust policy** 

Use the following trust policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "bedrock-agentcore.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

 **Permissions policy** 

Use the following permissions policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3PayloadDelivery",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::your-agentcore-payloads-bucket",
                "arn:aws:s3:::your-agentcore-payloads-bucket/*"
            ]
        },
        {
            "Sid": "SNSNotifications",
            "Effect": "Allow",
            "Action": [
                "sns:GetTopicAttributes",
                "sns:Publish"
            ],
            "Resource": "arn:aws:sns:us-east-1:123456789012:agentcore-memory-jobs"
        }
    ]
}
```

 **Additional KMS permissions (if using encrypted resources)** 

If you use encrypted resources, add the following KMS permissions:

```
{
    "Sid": "KMSPermissions",
    "Effect": "Allow",
    "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
    ],
    "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
}
```

## Create a self-managed strategy
<a name="creating-self-managed-strategy"></a>

Use the Amazon Bedrock AgentCore control plane APIs to create or update an AgentCore Memory with self-managed strategies.

### Required permissions
<a name="required-permissions"></a>

Your IAM user or role needs:
+  `bedrock-agentcore:CreateMemory` and `bedrock-agentcore:UpdateMemory` permissions
+  `iam:PassRole` permission for the execution role

### Create an AgentCore Memory with a self-managed strategy
<a name="create-memory-with-strategy"></a>

Use the AWS SDK `CreateMemory` operation to create an AgentCore Memory that has a self-managed strategy.

```
aws bedrock-agentcore-control create-memory \
  --name "MyCustomMemory" \
  --description "Memory with self-managed extraction strategy" \
  --memory-execution-role-arn "arn:aws:iam::123456789012:role/AgentCoreMemoryRole" \
  --event-expiry-duration 90 \
  --memory-strategies '[
    {
      "customMemoryStrategy": {
        "name": "SelfManagedExtraction",
        "description": "Custom extraction strategy",
        "configuration": {
          "selfManagedConfiguration": {
            "triggerConditions": [
              {
                "messageBasedTrigger": {
                  "messageCount": 6
                }
              },
              {
                "tokenBasedTrigger": {
                  "tokenCount": 1000
                }
              },
              {
                "timeBasedTrigger": {
                  "idleSessionTimeout": 30
                }
              }
            ],
            "historicalContextWindowSize": 2,
            "invocationConfiguration": {
              "payloadDeliveryBucketName": "your-agentcore-payloads-bucket",
              "topicArn": "arn:aws:sns:us-east-1:123456789012:agentcore-memory-jobs"
            }
          }
        }
      }
    }
  ]'
```

## Understanding payload delivery
<a name="understanding-payload-delivery"></a>

When trigger conditions are met, Amazon Bedrock AgentCore sends notifications and payloads using specific schemas.

### SNS notification message
<a name="sns-notification-message"></a>

```
{
  "jobId": "unique-job-identifier",
  "s3PayloadLocation": "s3://bucket/path/to/payload.json",
  "memoryId": "your-memory-id",
  "strategyId": "your-strategy-id"
}
```

### S3 payload structure
<a name="s3-payload-structure"></a>

```
{
  "requestId": "request-identifier",
  "accountId": "123456789012",
  "memoryId": "your-memory-id",
  "actorId": "user-or-agent-id",
  "sessionId": "conversation-session-id",
  "strategyId": "your-strategy-id",
  "startingTimestamp": 1634567890,
  "endingTimestamp": 1634567920,
  "currentContext": [
    {
      "role": "USER",
      "content": {
        "text": "User message content"
      }
    },
    {
      "role": "ASSISTANT",
      "content": {
        "text": "Assistant response"
      }
    }
  ],
  "historicalContext": [
    {
      "role": "USER",
      "content": {
        "text": "User message content"
      }
    },
    {
      "role": "ASSISTANT",
      "content": {
        "text": "Previous assistant response"
      }
    },
    {
      "blob": "{}"
    }
  ]
}
```
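A consumer's first task is to turn the SNS notification into a job descriptor it can act on. The following minimal Python sketch (the function and key names on the returned dict are illustrative, not part of the AgentCore API) parses the notification body and splits `s3PayloadLocation` into a bucket and key:

```
import json

def parse_job_notification(message_body: str) -> dict:
    """Parse the SNS notification JSON into a job descriptor."""
    job = json.loads(message_body)
    # s3PayloadLocation has the form s3://bucket/path/to/payload.json
    location = job["s3PayloadLocation"]
    without_scheme = location.removeprefix("s3://")
    bucket, _, key = without_scheme.partition("/")
    return {
        "job_id": job["jobId"],
        "memory_id": job["memoryId"],
        "strategy_id": job["strategyId"],
        "bucket": bucket,
        "key": key,
    }
```

With the bucket and key in hand, your pipeline can download the payload (for example, with the boto3 S3 `get_object` call) and apply your extraction logic to `currentContext` and `historicalContext`.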

## Build your custom pipeline
<a name="building-custom-pipeline"></a>

This section demonstrates one approach to building a self-managed memory processing pipeline using AWS Lambda and Amazon SQS. This is just one example; you can implement your pipeline with any compute platform (such as Amazon EC2, Amazon ECS, or AWS Fargate) and any processing framework that meets your requirements.

### Step 1: Set up compute
<a name="set-up-compute"></a>

1. Create an SQS queue and subscribe it to your SNS topic

1. Create an AWS Lambda function to process notifications

1. Configure Lambda execution role permissions

### Step 2: Process the pipeline
<a name="processing-pipeline"></a>

The following example pipeline consists of four main components:

1. Notification handling - Processing SNS notifications and downloading S3 payloads

1. Memory extraction - Using Amazon Bedrock models to extract relevant information from conversations

1. Memory consolidation - Deduplicating and merging extracted memories with existing records

1. Batch ingestion - Storing processed memories back into Amazon Bedrock AgentCore using batch APIs
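The consolidation component can start as simple, case-insensitive deduplication against existing records. The sketch below is illustrative only (real pipelines often compare embedding similarity rather than exact strings, and the function name is an assumption):

```
def consolidate(extracted: list[str], existing: list[str]) -> list[str]:
    """Return only extracted memories not already present, using a
    case-insensitive exact match as a stand-in for real conflict resolution."""
    seen = {memory.strip().lower() for memory in existing}
    new_records = []
    for memory in extracted:
        normalized = memory.strip().lower()
        if normalized not in seen:
            seen.add(normalized)
            new_records.append(memory)
    return new_records
```

The surviving records can then be written back with the `BatchCreateMemoryRecords` API in the batch ingestion step.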

## Test your implementation
<a name="testing-implementation"></a>

1.  **Create events** : Use the Amazon Bedrock AgentCore APIs to create conversation events ( `CreateEvent` )

1.  **Monitor notifications** : Verify that your SNS topic receives notifications when triggers are met

1.  **Validate processing** : Check that your Lambda function processes payloads correctly and extracts memory records

1.  **Verify ingestion** : Use `list-memory-records` to confirm extracted memories are stored

### Example: Creating test events
<a name="creating-test-events"></a>

```
aws bedrock-agentcore create-event \
  --memory-id "your-memory-id" \
  --actor-id "test-user" \
  --session-id "test-session-1" \
  --event-timestamp "2024-01-15T10:00:00Z" \
  --payload '[{
    "conversational": {
      "content": {"text": "I prefer Italian restaurants with outdoor seating"},
      "role": "USER"
    }
  }]'
```

### Example: Retrieving memory records
<a name="retrieving-memory-records"></a>

```
# List records by namespace
aws bedrock-agentcore list-memory-records \
  --memory-id "your-memory-id" \
  --namespace "/" # lists all records that match the namespace prefix
```

## Best practices
<a name="best-practices"></a>

Follow these best practices for performance, reliability, cost optimization, and security when implementing self-managed strategies.

### Performance and reliability
<a name="performance-reliability"></a>
+  **SLA sharing** : Long-term memory record generation SLA is shared between Amazon Bedrock AgentCore and your self-managed pipeline
+  **Error handling** : Implement proper retry logic and dead letter queues for failed processing
+  **Monitoring** : Set up CloudWatch logs, metrics, and alarms to debug processing failures and track latency. Also, check vended logs from Amazon Bedrock AgentCore for payload delivery failures

### Cost optimization
<a name="cost-optimization"></a>
+  **S3 lifecycle policies** : Configure automatic deletion of processed payloads to control storage costs
+  **Right-sizing** : Choose appropriate compute memory and timeout settings based on your processing requirements

### Processing considerations
<a name="processing-considerations"></a>
+  **Trigger optimization** : Configure trigger conditions based on your use case requirements - balance between processing efficiency and memory freshness by considering your application’s tolerance for latency versus processing costs.
+  **FIFO topics** : Use FIFO SNS topics when session ordering is critical (e.g., for summarization workflows)
+  **Memory consolidation** : Implement deduplication logic to prevent storing redundant or conflicting memory records, which reduces storage costs and improves retrieval accuracy
+  **Memory record organization** : Always include meaningful namespaces and strategy IDs when ingesting records to enable efficient categorization, filtering, and retrieval of memory records

### Security
<a name="security"></a>
+  **Least privilege** : Grant minimal required permissions to all IAM roles
+  **Encryption** : Use KMS encryption for S3 buckets and SNS topics containing sensitive data

# Memory organization in AgentCore Memory
<a name="memory-organization"></a>

You can set how short-term and long-term memories are organized in an AgentCore Memory. This lets you isolate memories by session and by actor. For long-term memory, you can also set a namespace to organize the extracted memories for a memory strategy.
+  **Actor** – Refers to entities such as end users or agent/user combinations. For example, in a coding support chatbot, the actor is usually the developer asking questions. Using the actor ID helps the system know which user the memory belongs to, keeping each user’s data separate and organized.
+  **Session** – A single conversation or interaction period between the user and the AI agent. It groups all related messages and events that happen during that conversation.
+  **Strategy** (Long-term memory only) – Shows which long-term memory strategy is being used. This strategy identifier is auto-generated when you [create](memory-create-a-memory-store.md) an AgentCore Memory.

## Short-term memory organization
<a name="short-term-memory-organization"></a>

When you create a short-term memory event with [CreateEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_CreateEvent.html) , you specify a session ID ( `sessionId` ) and an actor ID ( `actorId` ) that uniquely identify the session and actor for the event. Later, you can retrieve events for a user or session by using [short-term memory operations](using-memory-short-term.md).

For example code, see [Step 3: Capture the conversation history](memory-customer-scenario.md#capture-conversation).

## Long-term memory organization
<a name="long-term-memory-organization"></a>

When you [create](memory-create-a-memory-store.md) or update an AgentCore Memory, you can optionally create one or more [memory strategies](memory-strategies.md). Within a strategy, use a namespace to specify how AgentCore Memory organizes [long-term memories](memory-types.md#memory-long-term-memory).

Every time AgentCore Memory extracts a new long-term memory with a memory strategy, the long-term memory is saved under the namespace you set. This means that all long-term memories are scoped to their specific namespace, keeping them organized and preventing any conflicts with other users or sessions. You should use a hierarchical format separated by forward slashes `/` , ending with a trailing slash. The trailing slash prevents prefix collisions in multi-tenant applications—for example, use `/actors/Alice/` instead of `/actors/Alice` . As needed, you can use the following pre-defined variables within braces in the namespace based on your application’s organization needs:
+  **actorId** – Identifies who the long-term memory belongs to.
+  **strategyId** – Shows which memory strategy is being used.
+  **sessionId** – Identifies which session or conversation the memory is from.

For example, if you define the following namespace as the input to your strategy when creating an AgentCore Memory:

```
/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/
```

After memory creation, this namespace might look like:

```
/strategy/summarization-93483043/actor/actor-9830m2w3/session/session-9330sds8/
```
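AgentCore Memory performs this variable substitution for you; purely as an illustration of the mapping from template to concrete namespace, it can be sketched as a simple template expansion (the function name is hypothetical):

```
def resolve_namespace(template: str, *, strategy_id: str, actor_id: str, session_id: str) -> str:
    """Expand the pre-defined namespace variables into a concrete namespace."""
    return (
        template.replace("{memoryStrategyId}", strategy_id)
        .replace("{actorId}", actor_id)
        .replace("{sessionId}", session_id)
    )
```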

A namespace can have different levels of granularity:

 **Most granular level of organization** 

 `/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/` 

 **Granular at the actor level across sessions** 

 `/strategy/{memoryStrategyId}/actor/{actorId}/` 

 **Granular at the strategy level across actors** 

 `/strategy/{memoryStrategyId}/` 

 **Global across all strategies** 

 `/` 

For example code, see [Enable long-term memory](long-term-enabling-long-term-memory.md).

### Restrict access with IAM
<a name="memory-scope-iam"></a>

You can create IAM policies to restrict memory access by the scopes you define, such as actor, session, and namespace. Use the scopes as context keys in your IAM policies.

The following policy restricts access to retrieving memories to a specific namespace prefix. In this example, the policy allows access only to memories in namespaces starting with `summaries/agent1/` , such as `summaries/agent1/session1/` or `summaries/agent1/session2/`.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SpecificNamespaceAccess",
      "Effect": "Allow",
      "Action": [
        "bedrock-agentcore:RetrieveMemoryRecords"
      ],
      "Resource": "arn:aws:bedrock-agentcore:us-east-1:123456789012:memory/memory_id",
      "Condition": {
        "StringLike": {
          "bedrock-agentcore:namespace": "summaries/agent1/*"
        }
      }
    }
  ]
}
```

# Memory record streaming
<a name="memory-record-streaming"></a>

Memory record streaming in Amazon Bedrock AgentCore Memory delivers real-time notifications when memory records are created, updated, or deleted. Instead of polling APIs to detect changes, you receive push-based events to a Kinesis Data Stream in your account, enabling event-driven architectures that react to memory record lifecycle changes as they occur.

With memory record streaming, you can:
+ Receive real-time events for memory record creation, updates, and deletion
+ Build event-driven architectures without polling APIs
+ Stream memory record data into data lakes for consolidation and profile management
+ Trigger downstream workflows when new insights are extracted
+ Track memory record state changes across agents and sessions

**Topics**
+ [How it works](#memory-record-streaming-how-it-works)
+ [Stream event types](#memory-record-streaming-event-types)
+ [Event schema](#memory-record-streaming-event-schema)
+ [Prerequisites](#memory-record-streaming-prerequisites)
+ [Set up streaming](#memory-record-streaming-setup)
+ [Configure event content level](#memory-record-streaming-content-level)
+ [Test your implementation](#memory-record-streaming-test)
+ [Manage streaming configuration](#memory-record-streaming-manage)
+ [Observability](#memory-record-streaming-observability)

## How it works
<a name="memory-record-streaming-how-it-works"></a>

Memory record streaming uses a push-based delivery model. When memory records change, events are automatically published to your Kinesis Data Stream.

Events are triggered by the following operations:

1.  **Creation** – Asynchronous extraction from short-term memory events (via `CreateEvent` and memory strategies), or direct creation via `BatchCreateMemoryRecords` API

1.  **Updates** – Direct modification via `BatchUpdateMemoryRecords` API

1.  **Deletion** – Consolidation workflows (de-duplication/superseding), `DeleteMemoryRecord` API, or `BatchDeleteMemoryRecords` API

## Stream event types
<a name="memory-record-streaming-event-types"></a>

The following table describes the supported stream event types and when they are triggered.


| Operation | Stream event type | Triggered by | 
| --- | --- | --- | 
|  Create  |   `MemoryRecordCreated`   |  Long term memory extraction/consolidation, `BatchCreateMemoryRecords` API  | 
|  Update  |   `MemoryRecordUpdated`   |   `BatchUpdateMemoryRecords` API  | 
|  Delete  |   `MemoryRecordDeleted`   |   `BatchDeleteMemoryRecords` , `DeleteMemoryRecord` API, long term memory consolidation  | 

## Event schema
<a name="memory-record-streaming-event-schema"></a>

### MemoryRecordCreated / MemoryRecordUpdated
<a name="memory-record-streaming-created-updated-schema"></a>

 `MemoryRecordCreated` and `MemoryRecordUpdated` events share the same schema.

```
{
  "memoryStreamEvent": {
    "eventType": "<MemoryRecordCreated | MemoryRecordUpdated>",
    "eventTime": "2026-03-06T16:45:00.000Z",
    "memoryId": "<memory-id>",
    "memoryRecordId": "<memory-record-id>",
    "namespaces": ["<namespace>"],
    "createdAt": 1736622300000,
    "memoryStrategyId": "<memory-strategy-id>",
    "memoryStrategyType": "<memory-strategy-type>",
    "metadata": {<metadata>},
    "memoryRecordText": "<memory-record-text>"
  }
}
```

The `memoryRecordText` field is only included when the content level on the stream delivery configuration is set to `FULL_CONTENT` . See [Configure event content level](#memory-record-streaming-content-level) for additional details.

### MemoryRecordDeleted
<a name="memory-record-streaming-deleted-schema"></a>

```
{
  "memoryStreamEvent": {
    "eventType": "MemoryRecordDeleted",
    "eventTime": "2026-02-16T00:13:54.912530116Z",
    "memoryId": "<memory-id>",
    "memoryRecordId": "<memory-record-id>"
  }
}
```

Deletion events contain only the memory and record identifiers, regardless of the configured content level.
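Because deletion events carry fewer fields than creation and update events, a consumer should branch on `eventType` before reading the rest of the payload. The following sketch assumes the standard Kinesis record shape, in which each record's `data` field is base64-encoded JSON; the function name is illustrative:

```
import base64
import json

def classify_stream_record(record: dict) -> tuple[str, str]:
    """Decode one Kinesis record and return (eventType, memoryRecordId).

    All three event types include memoryId and memoryRecordId, so this
    works for deletions as well as creations and updates.
    """
    payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
    event = payload["memoryStreamEvent"]
    return event["eventType"], event["memoryRecordId"]
```

From there, a `MemoryRecordCreated` or `MemoryRecordUpdated` event can be routed to your downstream workflow, while a `MemoryRecordDeleted` event might remove the record from a data lake or profile store.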

## Prerequisites
<a name="memory-record-streaming-prerequisites"></a>

Before setting up memory record streaming, verify you have:
+ An AWS account with appropriate permissions
+ Amazon Bedrock AgentCore access
+ Basic understanding of AWS IAM and Amazon Kinesis Data Streams

## Set up streaming
<a name="memory-record-streaming-setup"></a>

### Step 1: Create a Kinesis Data Stream
<a name="memory-record-streaming-setup-step1"></a>

Create a Kinesis Data Stream in your account where Amazon Bedrock AgentCore will publish memory record lifecycle events.

You can create the stream using the AWS Console, CDK, CloudFormation, or the AWS CLI. If you enable Kinesis server-side encryption, note the KMS key ARN — you’ll need it for the IAM role permissions.

### Step 2: Set up a consumer
<a name="memory-record-streaming-setup-step2"></a>

Set up a consumer to process events from your Kinesis Data Stream.

Grant your consumer `AmazonKinesisReadOnlyAccess` (or equivalent permissions) and add the Kinesis Data Stream as a trigger.

### Step 3: Create an IAM role
<a name="memory-record-streaming-setup-step3"></a>

Create an IAM role that Amazon Bedrock AgentCore can assume to publish events to your Kinesis Data Stream.

Trust policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "bedrock-agentcore.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Permissions policy:

For built-in memory strategies, the permissions policy looks like the following:

```
{
"Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:PutRecords",
        "kinesis:DescribeStream"
      ],
      "Resource": "arn:aws:kinesis:<region>:<account-id>:stream/<stream-name>"
    }
  ]
}
```

For custom memory strategies, the permissions policy looks like the following:

```
{
"Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:PutRecords",
        "kinesis:DescribeStream"
      ],
      "Resource": "arn:aws:kinesis:<region>:<account-id>:stream/<stream-name>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/*",
        "arn:aws:bedrock:*:*:inference-profile/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceAccount": "${aws:PrincipalAccount}"
        }
      }
    }
  ]
}
```

If your Kinesis Data Stream uses server-side encryption, add the following to the permissions policy:

```
{
  "Sid": "KMSPermissions",
  "Effect": "Allow",
  "Action": "kms:GenerateDataKey",
  "Resource": "arn:aws:kms:<region>:<account-id>:key/your-kinesis-data-stream-key-id"
}
```

### Step 4: Create a memory with streaming enabled
<a name="memory-record-streaming-setup-step4"></a>

Use the `CreateMemory` API to create an Amazon Bedrock AgentCore Memory with a stream delivery resource. You must provide the `memoryExecutionRoleArn` when specifying a stream delivery resource.

```
aws bedrock-agentcore-control create-memory \
  --name "MyStreamingMemory" \
  --description "Memory with long term memory record streaming enabled" \
  --event-expiry-duration 30 \
  --memory-execution-role-arn "arn:aws:iam::<account-id>:role/AgentCoreMemoryRole" \
  --stream-delivery-resources '{
    "resources": [
      {
        "kinesis": {
          "dataStreamArn": "arn:aws:kinesis:<region>:<account-id>:stream/<stream-name>",
          "contentConfigurations": [
            {
              "type": "MEMORY_RECORDS",
              "level": "FULL_CONTENT"
            }
          ]
        }
      }
    ]
  }'
```

### Step 5: Verify your streaming integration
<a name="memory-record-streaming-setup-step5"></a>

When you create a memory with streaming enabled, Amazon Bedrock AgentCore Memory validates the configuration and permissions. Upon successful validation, a `StreamingEnabled` event is published to your Kinesis Data Stream.

Check your consumer for a validation event in the following format:

```
{
  "memoryStreamEvent": {
    "eventType": "StreamingEnabled",
    "eventTime": "2026-03-03T19:27:08.344082626Z",
    "memoryId": "<memory-id>",
    "message": "Streaming enabled for memory resource: <memory-id>"
  }
}
```

## Configure event content level
<a name="memory-record-streaming-content-level"></a>

The `contentConfigurations` field controls what data is included in each event. You can choose between two content levels:
+  **METADATA_ONLY**: Stream events include only metadata fields (`memoryId`, `memoryRecordId`, `namespaces`, `strategyId`, timestamps, and so on). Retrieving the full memory record content requires an additional API call.
+  **FULL_CONTENT**: Stream events include all metadata fields plus the `memoryRecordText` field containing the memory record content.

Use `METADATA_ONLY` for lightweight event notifications where you only need to know that a change occurred. Use `FULL_CONTENT` when your downstream processing needs the memory record text without making additional API calls.
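A consumer can handle both levels with a small helper like the following sketch; `fetch_record` is a hypothetical callable standing in for whatever lookup your consumer uses (for example, a data plane API call) when only metadata is delivered:

```python
def record_text(stream_event, fetch_record):
    """Return the memory record text from a stream event.

    With FULL_CONTENT delivery, the text is in the event itself.
    With METADATA_ONLY delivery, fall back to fetch_record, a
    caller-supplied function that retrieves the record by its IDs.
    """
    text = stream_event.get("memoryRecordText")
    if text is not None:
        # FULL_CONTENT: the event carries the record text
        return text
    # METADATA_ONLY: look the record up by its identifiers
    return fetch_record(stream_event["memoryId"], stream_event["memoryRecordId"])
```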

## Test your implementation
<a name="memory-record-streaming-test"></a>

### Step 1: Create test events
<a name="memory-record-streaming-test-step1"></a>

Use the Data Plane APIs to generate memory record lifecycle events and verify they appear in your consumer.

Create events via short-term memory (triggers asynchronous extraction):

```
aws bedrock-agentcore create-event \
  --memory-id "<memory-id>" \
  --actor-id "test-user" \
  --session-id "test-session-1" \
  --event-timestamp "$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")" \
  --payload '[
    {
      "conversational": {
        "content": {"text": "I prefer Italian restaurants with outdoor seating"},
        "role": "USER"
      }
    },
    {
      "conversational": {
        "content": {"text": "Noted! I will remember your preference for Italian restaurants with outdoor seating."},
        "role": "ASSISTANT"
      }
    }
  ]'
```

Create records directly:

```
aws bedrock-agentcore batch-create-memory-records \
  --memory-id "<memory-id>" \
  --records '[
    {
      "requestIdentifier": "test-1",
      "content": {"text": "User prefers window seats on flights"},
      "namespaces": ["travel/test-user"],
      "timestamp": "1729525989"
    }
  ]'
```

### Step 2: Verify delivery
<a name="memory-record-streaming-test-step2"></a>

Check your consumer to confirm events are being received. You should see `MemoryRecordCreated` events for records created through either method.

You can also monitor delivery health using [Metrics](#memory-record-streaming-metrics) and [Logs](#memory-record-streaming-logs).

You can use the `ListMemoryRecords` API to cross-reference:

```
aws bedrock-agentcore list-memory-records \
  --memory-id "<memory-id>" \
  --namespace "<namespace>"
```

## Manage streaming configuration
<a name="memory-record-streaming-manage"></a>

### Update streaming configuration
<a name="memory-record-streaming-update"></a>

Use the `UpdateMemory` API to modify or remove the stream delivery resource.

Remove streaming:

```
aws bedrock-agentcore-control update-memory \
  --region us-east-1 \
  --memory-id "<memory-id>" \
  --stream-delivery-resources '{"resources": []}'
```

### Change content level
<a name="memory-record-streaming-change-content-level"></a>

```
aws bedrock-agentcore-control update-memory \
  --memory-id "<memory-id>" \
  --stream-delivery-resources '{
    "resources": [
      {
        "kinesis": {
          "dataStreamArn": "arn:aws:kinesis:us-east-1:<account-id>:stream/<stream-name>",
          "contentConfigurations": [
            {
              "type": "MEMORY_RECORDS",
              "level": "METADATA_ONLY"
            }
          ]
        }
      }
    ]
  }'
```

## Observability
<a name="memory-record-streaming-observability"></a>

Amazon Bedrock AgentCore Memory vends CloudWatch metrics and logs to your AWS account, giving you visibility into the health and status of memory record stream delivery.

### Metrics
<a name="memory-record-streaming-metrics"></a>

Metrics are published to your account under the `AWS/Bedrock-AgentCore` namespace.


| Metric | Description | 
| --- | --- | 
|   `StreamPublishingSuccess`   |  The number of memory record events successfully published to your Kinesis Data Stream.  | 
|   `StreamPublishingFailure`   |  The number of memory record events that failed to publish to your Kinesis Data Stream.  | 
|   `StreamUserError`   |  The number of events that failed due to customer-side configuration issues, such as missing IAM permissions or an invalid KMS key state.  | 

All metrics are emitted as `Count` units with the following dimensions:


| Dimension | Value | Description | 
| --- | --- | --- | 
|  Operation  |   `MemoryStreamEvent`   |  The streaming operation type.  | 
|  Resource  |  Memory ARN  |  The ARN of the memory resource (for example, `arn:aws:bedrock-agentcore:us-east-1:123456789012:memory/memory-123` ).  | 

### Logs
<a name="memory-record-streaming-logs"></a>

Amazon Bedrock AgentCore Memory vends logs to your account when terminal (non-retryable) publishing failures occur.


| Field | Description | 
| --- | --- | 
|   `log`   |  Error message describing the failure.  | 
|   `streamArn`   |  The target Kinesis Data Stream ARN.  | 
|   `errorCode`   |  The specific error code.  | 
|   `errorMessage`   |  A human-readable description of the error.  | 
|   `eventType`   |  The stream event type ( `MemoryRecordCreated` , `MemoryRecordUpdated` , or `MemoryRecordDeleted` ).  | 
|   `memoryRecordId`   |  The identifier of the affected memory record.  | 
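Because these failure logs are vended to CloudWatch, a CloudWatch Logs Insights query along the following lines can surface recent terminal failures; the fields match the table above, and the log group to query depends on your account's setup:

```
fields @timestamp, eventType, errorCode, errorMessage, memoryRecordId
| filter ispresent(errorCode)
| sort @timestamp desc
| limit 20
```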

# Compare long-term memory with Retrieval-Augmented Generation
<a name="memory-ltm-rag"></a>

 [Long-term memory](memory-types.md#memory-long-term-memory) in Amazon Bedrock AgentCore Memory serves as persistent storage for session-specific context, enabling agents to maintain continuity and personalization across interactions. Use long-term memory to store user preferences, past decisions, conversation history, and behavioral patterns that help agents adapt and feel personal without repeatedly requesting the same information. This memory type is ideal for tracking who the user is, what has happened in previous sessions, and maintaining state across multi-step workflows.

Retrieval-Augmented Generation (RAG) complements long-term memory by providing access to authoritative, current information from large-scale repositories. Use RAG to retrieve up-to-date documentation, technical specifications, policies, and domain expertise that may be too large or volatile for long-term storage. RAG ensures factual accuracy by pulling directly from curated sources at query time, making it ideal for accessing what authoritative sources say right now. One option for integrating RAG into your agent is to use an Amazon Bedrock Knowledge Base. For more information, see [Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html) in the *Amazon Bedrock user guide*.

The key distinction lies in their complementary roles: long-term memory handles personal context and session continuity, while RAG provides current factual knowledge and domain expertise. Long-term memory answers "who is the user and what happened before," while RAG answers "what do trusted sources say currently." This separation allows agents to maintain personal relationships with users while ensuring access to accurate, authoritative information.

By using long-term memory and RAG together, your agent can deliver both personalized experiences through remembered context and reliable information through real-time knowledge retrieval. To your customers, your agents are both familiar and factually grounded.

# Get started with AgentCore Memory
<a name="memory-get-started"></a>

Amazon Bedrock AgentCore Memory lets you create and manage AgentCore Memory resources that store conversation context for your AI agents. This getting started guide walks you through installing dependencies and implementing both short-term and long-term memory features. The instructions use the AgentCore CLI.

The steps are as follows:

1. Create an AgentCore Memory containing a semantic strategy

1. Write events (conversation history) to the memory resource

1. Retrieve memory records from long-term memory

For other examples, see [Amazon Bedrock AgentCore Memory examples](memory-examples.md).

## Prerequisites
<a name="memory-prerequisites"></a>

Before starting, make sure you have:
+  ** AWS Account** with credentials configured ( `aws configure` )
+  **Python 3.10+** installed
+ Node.js 18+ installed (for the AgentCore CLI)

To get started with Amazon Bedrock AgentCore Memory, install the dependencies, create an AgentCore CLI project, and set up a virtual environment. You can run the following commands directly in the terminal.

```
pip install bedrock-agentcore
npm install -g @aws/agentcore
agentcore create --name agentcore-memory-quickstart --no-agent
cd agentcore-memory-quickstart
python -m venv .venv
source .venv/bin/activate
```

The AgentCore CLI provides commands for creating and managing memory resources. Use `agentcore add memory` to create a memory, and `agentcore deploy` to provision it in AWS. For event operations and session management, use the AWS Python SDK (Boto3) ( `bedrock-agentcore` ).

**Note**  
The AgentCore CLI helps you create and deploy memory resources. For the complete set of Amazon Bedrock AgentCore Memory operations, see the Boto3 documentation: [bedrock-agentcore-control](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore-control.html) and [bedrock-agentcore](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore.html).

 **Full example:** See the [Amazon Bedrock AgentCore samples](https://github.com/awslabs/amazon-bedrock-agentcore-samples/tree/main/01-tutorials) that demonstrate steps 1-3.

## Step 1: Create an AgentCore Memory
<a name="memory-create-resource"></a>

You need an AgentCore Memory to start storing information for your agent. By default, memory events (which we refer to as short-term memory) can be written to an AgentCore Memory. For insights to be extracted and placed into long-term memory records, the resource requires a *memory strategy*: a configuration that defines how conversational data should be processed and what information to extract (such as facts, preferences, or summaries).

In this step, you create an AgentCore Memory with a semantic strategy so that both short-term and long-term memory can be used. This takes 2-3 minutes. You can also create AgentCore Memory resources in the AWS console.

Create memory with semantic strategy:

**Example**  

1. 

   ```
   agentcore add memory --name CustomerSupportSemantic --strategies SEMANTIC
   agentcore deploy
   ```

1. Run `agentcore` to open the TUI, then select **add** and choose **Memory** :

1. Enter the memory name:  
![\[Memory wizard: enter name\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-name.png)

1. Select the **Semantic** strategy, then confirm:  
![\[Memory wizard: select SEMANTIC strategy\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-strategies.png)

   Then run `agentcore deploy` to provision the memory in AWS.

After deployment, verify the memory was created:

```
agentcore status
```

## Step 2: Write events to memory
<a name="memory-write-events"></a>

When you write events to an AgentCore Memory, they are stored as short-term memory, and insights are extracted from them for long-term memory.

Writing events to memory has multiple purposes. First, event contents (most commonly conversation history) are stored as short-term memory. Second, relevant insights are pulled from events and written into memory records as a part of long-term memory.

The memory resource ID, actor ID, and session ID are required to create an event. In this step, you create three events, simulating messages between an end user and a chatbot.

```
import boto3
from bedrock_agentcore.memory import MemorySessionManager
from bedrock_agentcore.memory.constants import ConversationalMessage, MessageRole

control_client = boto3.client('bedrock-agentcore-control', region_name='us-west-2')

# Retrieve the memory created in Step 1
response = control_client.list_memories()
memory = response['memories'][0]
memory_id = memory['id']

# Create a session to store memory events
session_manager = MemorySessionManager(
    memory_id=memory_id,
    region_name="us-west-2")

session = session_manager.create_memory_session(
    actor_id="User1",
    session_id="OrderSupportSession1"
)

# Write memory events (conversation turns)
session.add_turns(
    messages=[
        ConversationalMessage(
            "Hi, how can I help you today?",
            MessageRole.ASSISTANT)],
)

session.add_turns(
    messages=[
        ConversationalMessage(
            "Hi, I am a new customer. I just made an order, but it hasn't arrived. The Order number is #35476",
            MessageRole.USER)],
)

session.add_turns(
    messages=[
        ConversationalMessage(
            "I'm sorry to hear that. Let me look up your order.",
            MessageRole.ASSISTANT)],
)
```

You can get events (turns) for a specific actor after they’ve been written.

```
# Get the last k turns in the session
turns = session.get_last_k_turns(k=5)

for turn in turns:
    print(f"Turn: {turn}")
```

In this case, you can see the last three events for the actor and session.

## Step 3: Retrieve records from long-term memory
<a name="memory-retrieve-records"></a>

After events are written to the memory resource, they are analyzed and useful information is sent to long-term memory. Because the memory contains a semantic long-term memory strategy, the system extracts and stores factual information.

You can list all memory records with:

```
# List all memory records
memory_records = session.list_long_term_memory_records(
    namespace_prefix="/"
)

for record in memory_records:
    print(f"Memory record: {record}")
    print("--------------------------------------------------------------------")
```

Or ask for the most relevant information as part of a semantic search:

```
# Perform a semantic search
memory_records = session.search_long_term_memories(
    query="can you summarize the support issue",
    namespace_prefix="/",
    top_k=3
)
```

Important information about the user is now stored in long-term memory. Agents can use long-term memory rather than a full conversation history to make sure that LLMs are not overloaded with context.

## Adding memory to an existing agent
<a name="memory-add-to-existing-agent"></a>

If you created a Strands agent without memory and want to add it later, follow these steps:

1. Add a memory resource to your project:

   ```
   agentcore add memory --name MyMemory --strategies SEMANTIC,SUMMARIZATION
   agentcore deploy
   ```

1. Create the `memory/` directory in your agent:

   ```
   mkdir -p app/MyAgent/memory
   ```

1. Create `app/MyAgent/memory/session.py` :

   ```
   import os
   from typing import Optional
   from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig, RetrievalConfig
   from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager
   
   MEMORY_ID = os.getenv("MEMORY_MYMEMORY_ID")
   REGION = os.getenv("AWS_REGION")
   
   def get_memory_session_manager(session_id: str, actor_id: str) -> Optional[AgentCoreMemorySessionManager]:
       if not MEMORY_ID:
           return None
   
       retrieval_config = {
           f"/users/{actor_id}/facts": RetrievalConfig(top_k=3, relevance_score=0.5),
           f"/summaries/{actor_id}/{session_id}": RetrievalConfig(top_k=3, relevance_score=0.5)
       }
   
       return AgentCoreMemorySessionManager(
           AgentCoreMemoryConfig(
               memory_id=MEMORY_ID,
               session_id=session_id,
               actor_id=actor_id,
               retrieval_config=retrieval_config,
           ),
           REGION
       )
   ```

1. Update your `main.py` to use the session manager:

   ```
   from memory.session import get_memory_session_manager
   
   @app.entrypoint
   async def invoke(payload, context):
       session_id = getattr(context, 'session_id', 'default-session')
       user_id = getattr(context, 'user_id', 'default-user')
   
       agent = Agent(
           model=load_model(),
           session_manager=get_memory_session_manager(session_id, user_id),
           system_prompt="You are a helpful assistant.",
       )
   
       response = agent(payload.get("prompt"))
       return response
   ```

1. Deploy the updated project:

   ```
   agentcore deploy
   ```

**Note**  
Each memory resource gets an environment variable `MEMORY_<NAME>_ID` (uppercase, with underscores) that is automatically available in your agent’s runtime environment after deployment.
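Based on that naming rule, you can derive the environment variable name from the memory name. The helper below is an illustrative sketch of the uppercase-with-underscores convention; the hyphen handling is an assumption for names that contain hyphens:

```python
import os
from typing import Optional

def memory_env_var(memory_name: str) -> str:
    """Derive the MEMORY_<NAME>_ID environment variable name for a
    memory resource, per the uppercase-with-underscores convention."""
    normalized = memory_name.upper().replace("-", "_")
    return f"MEMORY_{normalized}_ID"

def memory_id_from_env(memory_name: str) -> Optional[str]:
    """Read the deployed memory ID from the runtime environment."""
    return os.getenv(memory_env_var(memory_name))
```

For example, a memory named `MyMemory` is exposed as `MEMORY_MYMEMORY_ID`, matching the variable read in `session.py` above.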

## Cleanup
<a name="memory-cleanup"></a>

When you’re done with the memory resource, you can delete it:

```
agentcore remove memory --name CustomerSupportSemantic
agentcore deploy
```

## Next steps
<a name="memory-next-steps"></a>

Consider the following:
+  [Add another strategy](long-term-enabling-long-term-memory.md#long-term-adding-strategies-to-existing-memory) to your memory resource.
+  [Enable observability](memory-observability.md) for more visibility into how memory is working
+ Look at further [examples](memory-examples.md).

# Create an AgentCore Memory
<a name="memory-create-a-memory-store"></a>

You can create an AgentCore Memory with the AgentCore CLI, the AWS console, or the [CreateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_CreateMemory.html) AWS SDK operation. When creating a memory, you can configure settings such as the name, description, encryption settings, expiration timestamp for raw events, and memory strategies if you want to extract long-term memory.

When creating an AgentCore Memory, consider the following factors to ensure it meets your application’s needs:

 **Event retention** – Choose how long raw events are retained (up to 365 days) for short-term memory.

 **Security requirements** – If your application handles sensitive information, consider using a [customer managed AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) for encryption. The service still encrypts data using a service managed key, even if you don’t provide a customer-managed AWS KMS key.

 **Memory strategies** – Define how events are processed into meaningful long-term memories using built-in or built-in with overrides strategies. If you do not define any strategy, only short-term memory containing raw events is stored. For more information, see [Use long-term memory](long-term-memory-long-term.md).

 **Naming conventions** – Use clear, descriptive names that help identify the purpose of each AgentCore Memory, especially if your application uses multiple stores.

**Example**  

1. The AgentCore CLI memory commands must be run inside an existing agentcore project. If you don’t have one yet, create a project first:

   ```
   agentcore create --name my-agent --no-agent
   cd my-agent
   ```

    **Create** a basic memory (short-term only):

   ```
   agentcore add memory --name my_agent_memory
   agentcore deploy
   ```

    **Create** memory with long-term strategies:

   ```
   agentcore add memory --name ShoppingSupportAgentMemory \
     --strategies SUMMARIZATION,USER_PREFERENCE
   agentcore deploy
   ```

    **Check** memory status:

   ```
   agentcore status
   ```

1. Run `agentcore` to open the TUI, then select **add** and choose **Memory** :

1. Enter the memory name:  
![\[Memory wizard: enter name\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-name.png)

1. Select the event expiry duration:  
![\[Memory wizard: select event expiry duration\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-expiry.png)

1. Choose memory strategies for long-term memory extraction:  
![\[Memory wizard: select memory strategies\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-strategies.png)

1. Review the configuration and press Enter to confirm:  
![\[Memory wizard: review configuration\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-confirm.png)

1. For more information, see [AWS SDK](aws-sdk-memory.md).

   ```
   import boto3
   import time
   
   # Initialize the Boto3 clients for control plane and data plane operations
   control_client = boto3.client('bedrock-agentcore-control')
   data_client = boto3.client('bedrock-agentcore')
   
   print("Creating a new memory resource...")
   
   # Create the memory resource with defined strategies
   response = control_client.create_memory(
       name="ShoppingSupportAgentMemory",
       description="Memory for a customer support agent.",
       memoryStrategies=[
           {
               'summaryMemoryStrategy': {
                   'name': 'SessionSummarizer',
                   'namespaceTemplates': ['/summaries/{actorId}/{sessionId}/']
               }
           },
           {
               'userPreferenceMemoryStrategy': {
                   'name': 'UserPreferenceExtractor',
                   'namespaceTemplates': ['/users/{actorId}/preferences/']
               }
           }
       ]
   )
   
   memory_id = response['memory']['id']
   print(f"Memory resource created with ID: {memory_id}")
   
   # Poll the memory status until it becomes ACTIVE
   while True:
       mem_status_response = control_client.get_memory(memoryId=memory_id)
       status = mem_status_response.get('memory', {}).get('status')
       if status == 'ACTIVE':
           print("Memory resource is now ACTIVE.")
           break
       elif status == 'FAILED':
           raise Exception("Memory resource creation FAILED.")
       print("Waiting for memory to become active...")
       time.sleep(10)
   ```

1. To create an AgentCore Memory in the console, use the following steps.

1. Open the [Amazon Bedrock AgentCore](https://console.aws.amazon.com/bedrock-agentcore/) console.

1. In the left navigation pane, choose **Memory**.

1. Choose **Create memory**.

1. For **Memory name** enter a name for the AgentCore Memory.

1. (Optional) For **Short-term memory (raw event) expiration** set the duration (days), for which the AgentCore Memory will store events.

1. (Optional) In **Additional configurations** set the following:

   1. For **Memory description** , enter a description for the AgentCore Memory.

   1. If you want to use your own AWS KMS key to encrypt your data, do the following:

      1. In **KMS key** , choose **Customize encryption settings (advanced)**.

      1. In **Choose an AWS KMS key** choose or enter the ARN of an existing AWS KMS key. Alternatively, choose **Create an AWS KMS** to create a new AWS KMS key.

1. (Optional) For **Long-term memory extraction strategies** choose one or more [memory strategies](memory-strategies.md) . For more information:
   +  [Built-in strategies](built-in-strategies.md) 
   +  [Customize a built-in strategy or create your own strategy](memory-custom-strategy.md) 
   +  [Self-managed strategy](memory-self-managed-strategies.md) 

1. Choose **Create memory** to create the AgentCore Memory.

**Topics**
+ [Encrypt your Amazon Bedrock AgentCore Memory](storage-encryption.md)

# Encrypt your Amazon Bedrock AgentCore Memory
<a name="storage-encryption"></a>

When creating an AgentCore Memory, it is important to make sure your data is safe and secure. If your application handles sensitive information (such as customer details, payment data, or personal chats), you must use encryption to protect this data. Consider using a customer-managed KMS key (CMK) for encryption. The service still encrypts data using a service managed key, even if you don’t provide a CMK. Alternatively, you can use an AWS-managed KMS key. In this case, you need to add the following policy to the IAM user or role that you are using to set up memory.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowAgentCoreMemoryKMS",
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:GenerateDataKey",
        "kms:GenerateDataKeyWithoutPlaintext",
        "kms:ReEncrypt*"
      ],
      "Resource": "arn:aws:kms:*:111122223333:key/*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "bedrock-agentcore.us-east-1.amazonaws.com"
        }
      }
    }
  ]
}
```

Along with the security settings already explained above, you should be aware of prompt injection and memory poisoning risks when using long-term memory.
+ Prompt injection is an application-level security concern, similar to SQL injection in database applications. Just as AWS services like Amazon RDS and Amazon Aurora provide secure database engines while customers remain responsible for preventing SQL injection in their applications, Amazon Bedrock provides a secure foundation for natural language processing, but customers must take measures to prevent prompt injection vulnerabilities in their code. AWS also provides detailed documentation, best practices, and guidance on secure coding practices for Bedrock and other AWS services.
+ Memory poisoning happens when false or harmful information is saved in AgentCore Memory. Later, your AI agent may use this wrong information in future conversations, which can lead to incorrect or unsafe responses.

As per the [AWS Shared Responsibility Model](https://aws.amazon.com/compliance/shared-responsibility-model/) , AWS is responsible for securing the underlying cloud infrastructure, including the hardware, software, networking, and facilities that run AWS services. However, the responsibility for secure application development and preventing vulnerabilities like prompt injection and memory poisoning lies with the customer.

To reduce risk, you can do the following:
+  **Amazon Bedrock Guardrails** : Use Amazon Bedrock Guardrails to check prompts being sent to or from AgentCore Memory. This ensures that only safe and allowed prompts are processed by your agent.
+  **Adversarial testing** : Actively test your AI application for vulnerabilities by simulating attacks or prompt injections. This helps you find weak points and fix them before real threats occur.

# Use short-term memory
<a name="using-memory-short-term"></a>

In your AI agent, you write code to add events to [short-term memory](memory-types.md#short-term-memory) in an AgentCore Memory. These events form the foundation for structured information extraction into long-term memory.

The following section discusses short-term memory with the AWS SDK. For examples that use the [Amazon Bedrock AgentCore samples repository](https://github.com/awslabs/amazon-bedrock-agentcore-samples) and the AWS SDK, see [Scenario: A customer support AI agent using AgentCore Memory](memory-customer-scenario.md) . For other SDKs see [Amazon Bedrock AgentCore Memory examples](memory-examples.md).

**Topics**
+ [Create an event](short-term-create-event.md)
+ [Get an event](short-term-get-event.md)
+ [List events](short-term-list-events.md)
+ [Delete an event](short-term-delete-event.md)

# Create an event
<a name="short-term-create-event"></a>

Events are the fundamental units of short-term memory from which structured information is extracted into long-term memory in AgentCore Memory. The [CreateEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_CreateEvent.html) operation lets you store various types of data within AgentCore Memory, organized by an actor and session. Events are scoped within memory under:

 **ActorId**   
Identifies the entity associated with the event, such as end-users or agent/user combinations

 **SessionId**   
Groups related events together, such as a conversation session

The `CreateEvent` operation stores a new immutable event within a specified memory session. Events represent individual pieces of information that your agent wants to remember, such as conversation messages, user actions, or system events.

This operation is useful for:
+ Recording conversation history between users, agents and tools
+ Storing user interactions and behaviors
+ Capturing system events and state changes
+ Building a chronological record of activities within a session

For example code, see [Scenario: A customer support AI agent using AgentCore Memory](memory-customer-scenario.md).
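As a sketch, a conversational payload for `CreateEvent` can be assembled like this. The helper below is illustrative, not part of the SDK, and the commented `create_event` call mirrors the request shape used in the CLI examples elsewhere in this guide:

```python
import datetime

def conversational_payload(*turns):
    """Build a CreateEvent payload list from (role, text) pairs."""
    return [
        {"conversational": {"content": {"text": text}, "role": role}}
        for role, text in turns
    ]

payload = conversational_payload(
    ("USER", "Where is my order #35476?"),
    ("ASSISTANT", "Let me look that up for you."),
)

# With boto3 (requires AWS credentials and an existing memory):
# import boto3
# data_client = boto3.client("bedrock-agentcore")
# data_client.create_event(
#     memoryId="<memory-id>",
#     actorId="test-user",
#     sessionId="test-session-1",
#     eventTimestamp=datetime.datetime.now(datetime.timezone.utc),
#     payload=payload,
# )
```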

## Event payload types
<a name="event-payload-types"></a>

The `payload` parameter accepts a list of payload items, letting you store different types of data in a single event. Common payload types include:

 **Conversational**   
For storing conversation messages with roles (for example, "user" or "assistant") and content.

 **Blob**   
For storing binary format data, such as images and documents, or data that is unique to your agent, such as data stored in JSON format.

**Note**  
Currently, only conversational data flows into long-term memory.

## Event branching
<a name="short-term-event-branching"></a>

The `branch` parameter lets you organize events through branching. This is useful for scenarios like message editing or alternative conversation paths. For example, suppose you have a long-running conversation and realize you want to explore an alternative direction starting from five messages ago. You can use the `branch` parameter to start a new conversation from that message, stored in the new branch, while still being able to return to the original conversation. More simply, branching is also useful if you want to let a user edit their most recent message (for example, after pressing Enter too early or making a typo) and continue the conversation.

When creating a branch, you specify:

 **name**   
A descriptive name for the branch, such as "edited-conversation".

 **rootEventId**   
The ID of the event from which the branch originates.

Here’s an example of creating a branched event to represent an edited message:

```
{
  "memoryId": "mem-12345abcdef",
  "actorId": "/agent-support-123/customer-456",
  "sessionId": "session-789",
  "eventTimestamp": 1718806000000,
  "payload": [
    {
      "Conversational": {
        "content": "I'm looking for a waterproof action camera for extreme sports.",
        "role": "user"
      }
    }
  ],
  "branch": {
    "name": "edited-conversation",
    "rootEventId": "evt-67890"
  }
}
```

# Get an event
<a name="short-term-get-event"></a>

The [GetEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_GetEvent.html) API retrieves a specific raw event by its identifier from short-term memory in AgentCore Memory. This API requires you to specify the `memoryId`, `actorId`, `sessionId`, and `eventId` as path parameters in the request URL.

```
import boto3

data_client = boto3.client('bedrock-agentcore')

response = data_client.get_event(
    memoryId="your-memory-id",
    actorId="your-actor-id",
    sessionId="your-session-id",
    eventId="your-event-id"
)

event = response['event']
print(f"Event ID: {event['eventId']}")
print(f"Timestamp: {event['eventTimestamp']}")
print(f"Payload: {event['payload']}")
```

# List events
<a name="short-term-list-events"></a>

The [ListEvents](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_ListEvents.html) operation is valuable for applications that need to reconstruct conversation histories, analyze interaction patterns, or implement memory-based features like conversation summarization and context awareness.

The `ListEvents` operation is a read-only operation that lists events from a specified session in an AgentCore Memory instance. This paginated operation requires you to specify the `memoryId`, `actorId`, and `sessionId` as path parameters, and supports optional filtering through the `filter` parameter in the request body, letting you efficiently retrieve relevant events from your memory sessions. You can control whether payloads are included in the response using the `includePayloads` parameter (default is true), and limit the number of results with `maxResults`.
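Because `ListEvents` is paginated, callers typically loop on `nextToken` until it is absent. The following is a minimal sketch of that loop; `list_page` is a stand-in callable for one `ListEvents` call (an assumption for illustration, not part of the API).

```python
def list_all_events(list_page, **params):
    """Collect every event across the pages of a ListEvents-style call.

    `list_page` is any callable that accepts the request parameters plus
    an optional `nextToken` and returns a dict containing an `events`
    list and, while more pages remain, a `nextToken` string. This mirrors
    the pagination contract described above.
    """
    events, token = [], None
    while True:
        page = list_page(**params, nextToken=token) if token else list_page(**params)
        events.extend(page.get("events", []))
        token = page.get("nextToken")
        if not token:
            return events
```

In practice you would pass your SDK client's list-events method as `list_page`; the loop itself is independent of any particular client.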

# Delete an event
<a name="short-term-delete-event"></a>

The [DeleteEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_DeleteEvent.html) operation removes individual events from your AgentCore Memory. This operation helps maintain data privacy and relevance by letting you selectively remove specific events from a session while preserving the broader context and relationship structure within your application’s memory.

**Note**  
These are manual deletion operations, and they do not overlap with the automatic deletion of events based on the `eventExpiryDuration` parameter set at the time of the [CreateEvent](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_CreateEvent.html) operation. Also, deleting an event doesn't remove the long-term memory records that were derived from it. For more information, see [DeleteMemoryRecord](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_DeleteMemoryRecord.html).

# Use long-term memory
<a name="long-term-memory-long-term"></a>

Long-term memory is enabled by adding one or more memory strategies to a memory resource. These strategies define what kind of information is extracted from conversations and stored persistently. This section guides you through configuring built-in and built-in with overrides strategies to enable long-term memory for your agent.

This section provides examples using the AWS SDK (Boto3). For complete end-to-end examples, see [Amazon Bedrock AgentCore Memory examples](memory-examples.md).

**Topics**
+ [Enable long-term memory](long-term-enabling-long-term-memory.md)
+ [Specify long-term memory organization with namespaces](specify-long-term-memory-organization.md)
+ [Configure built-in strategies](long-term-configuring-built-in-strategies.md)
+ [Configure a custom strategy](long-term-configuring-custom-strategies.md)
+ [Save and retrieve insights](long-term-saving-and-retrieving-insights.md)
+ [Retrieve memory records](long-term-retrieve-records.md)
+ [List memory records](long-term-list-memory-records.md)
+ [Delete memory records](long-term-delete-memory-records.md)
+ [Redrive failed ingestions](long-term-redrive.md)

# Enable long-term memory
<a name="long-term-enabling-long-term-memory"></a>

You can enable long-term memory in two ways: by adding strategies when you first [create an AgentCore Memory](memory-create-a-memory-store.md), or by updating an existing resource to include them.

## Creating a new memory with long-term strategies
<a name="long-term-creating-new-memory-with-strategies"></a>

The most direct method is to include strategies when you create a new AgentCore Memory. After calling `create_memory`, you must wait for the AgentCore Memory status to become `ACTIVE` before you can use it.
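That wait is typically a simple polling loop. The sketch below assumes a `get_status` callable (for example, one that reads `response['memory']['status']` from a `get_memory` call); the callable and the terminal-status handling are illustrative, not part of the SDK.

```python
import time

def wait_until_active(get_status, timeout_s=300, interval_s=5):
    """Poll a status callable until it returns 'ACTIVE'.

    `get_status` stands in for fetching the memory status. Raises
    TimeoutError if the memory does not become ACTIVE in time, and
    fails fast on a terminal FAILED status.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status == "ACTIVE":
            return
        if status == "FAILED":
            raise RuntimeError("Memory creation failed")
        time.sleep(interval_s)
    raise TimeoutError("Memory did not become ACTIVE in time")
```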

**Example**  

1. To create the memory and deploy it in one step from the command line, run:

   ```
   agentcore add memory --name PersonalizedShoppingAgentMemory --strategies USER_PREFERENCE
   agentcore deploy
   ```

   Alternatively, use the interactive TUI, as shown in the remaining steps.

1. Run `agentcore` to open the TUI, then select **add** and choose **Memory**:

1. Select the **User preference** strategy:  
![\[Memory wizard: select USER_PREFERENCE strategy\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-strategies.png)

1. Review the configuration and press Enter to confirm:  
![\[Memory wizard: confirm memory configuration\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-confirm.png)

   Then run `agentcore deploy` to provision the memory in AWS.

## Adding long-term strategies to an existing AgentCore Memory
<a name="long-term-adding-strategies-to-existing-memory"></a>

To add long-term capabilities to an existing AgentCore Memory, use the `update_memory` operation. You can add, modify, or delete strategies for an existing memory.

**Note**  
The AgentCore CLI supports creating and managing memory resources. To update memory strategies on an existing memory, use the AWS SDK.

 **Example Add a Session Summary strategy to an existing AgentCore Memory** 

```
import boto3

# Initialize the Boto3 client for control plane operations
control_client = boto3.client('bedrock-agentcore-control', region_name='us-west-2')

# Assume 'memory_id' is the ID of an existing AgentCore Memory that has no strategies attached to it
memory_id = "your-existing-memory-id"

# Update the memory to add a summary strategy
response = control_client.update_memory(
    memoryId=memory_id,
    memoryStrategies=[
        {
            'summaryMemoryStrategy': {
                'name': 'SessionSummarizer',
                'description': 'Summarizes conversation sessions for context',
                'namespaceTemplates': ['/summaries/{actorId}/{sessionId}/']
            }
        }
    ]
)

print(f"Successfully submitted update for memory ID: {memory_id}")

# Validate strategy was added to the memory
memory_response = control_client.get_memory(memoryId=memory_id)
strategies = memory_response.get('memory', {}).get('strategies', [])
print(f"Memory strategies for memoryID: {memory_id} are: {strategies}")
```

**Note**  
Long-term memory records will only be extracted from conversational events that are stored **after** a new strategy becomes `ACTIVE` . Conversations stored before a strategy is added will not be processed for long-term memory.

# Specify long-term memory organization with namespaces
<a name="specify-long-term-memory-organization"></a>

When you [create](memory-create-a-memory-store.md) an AgentCore Memory, use a namespace to specify where the [long-term memories](memory-types.md#memory-long-term-memory) for a [memory strategy](memory-strategies.md) are logically grouped. Every time a new long-term memory is extracted using the memory strategy, it is saved under the namespace you set. All long-term memories are therefore scoped to their specific namespace, keeping them organized and preventing mix-ups with other users or sessions. Use a hierarchical format separated by forward slashes (`/`) to keep memories clearly organized. As needed, you can use the following predefined variables, within braces, in the namespace, based on your application's organization needs:
+  **actorId** – Identifies who the long-term memory belongs to.

  An actor refers to an entity such as an end user or an agent/user combination. For example, in a coding support chatbot, the actor is usually the developer asking questions. Using the actor ID helps the system know which user the memory belongs to, keeping each user’s data separate and organized.
+  **strategyId** – Shows which memory strategy is being used. This strategy identifier is auto-generated when you create an AgentCore Memory.
+  **sessionId** – Identifies which session or conversation the memory is from.

  A session is usually a single conversation or interaction period between the user and the AI agent. It groups all related messages and events that happen during that conversation.

For example, if you define the following namespace as the input to your strategy when creating an AgentCore Memory:

```
/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/
```

After memory creation, this namespace might look like:

```
/strategy/summarization-93483043/actor/actor-9830m2w3/session/session-9330sds8/
```

A namespace can have different levels of granularity:

 **Most granular level of organization** 

 `/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/` 

 **Granular at the actor level across sessions** 

 `/strategy/{memoryStrategyId}/actor/{actorId}/` 

 **Granular at the strategy level across actors** 

 `/strategy/{memoryStrategyId}/` 

 **Global across all strategies** 

 `/` 

For example code, see [Enable long-term memory](long-term-enabling-long-term-memory.md).
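To reason about where records will land, the placeholder substitution described above can be sketched with a small helper. The service performs this expansion itself when it writes long-term memory records; `render_namespace` is only an illustration, not the service's actual implementation.

```python
def render_namespace(template, **values):
    """Fill {variable} placeholders in a namespace template."""
    out = template
    for key, value in values.items():
        out = out.replace("{" + key + "}", value)
    return out

ns = render_namespace(
    "/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/",
    memoryStrategyId="summarization-93483043",
    actorId="actor-9830m2w3",
    sessionId="session-9330sds8",
)
# ns == "/strategy/summarization-93483043/actor/actor-9830m2w3/session/session-9330sds8/"
```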

## Restrict access with IAM
<a name="memory-scope-iam"></a>

You can create IAM policies to restrict memory access by the scopes you define, such as actor, session, and namespace. Use the scopes as context keys in your IAM policies.

The following policy restricts memory retrieval to a specific namespace prefix. In this example, the policy allows access only to memories in namespaces starting with `summaries/agent1/`, such as `summaries/agent1/session1/` or `summaries/agent1/session2/`. Because `StringLike` performs an exact match unless the value contains wildcards, the condition value ends with `*`.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SpecificNamespaceAccess",
      "Effect": "Allow",
      "Action": [
        "bedrock-agentcore:RetrieveMemoryRecords"
      ],
      "Resource": "arn:aws:bedrock-agentcore:us-east-1:123456789012:memory/memory_id",
      "Condition": {
        "StringLike": {
          "bedrock-agentcore:namespace": "summaries/agent1/*"
        }
      }
    }
  ]
}
```
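Note that `StringLike` treats `*` and `?` as wildcards, so a prefix-style condition needs a trailing `*`. Python's standard-library `fnmatch` uses similar glob semantics and can serve as a rough sanity check of a pattern (it is not an exact model of IAM policy evaluation):

```python
from fnmatch import fnmatchcase

# Pattern from a StringLike condition on bedrock-agentcore:namespace.
pattern = "summaries/agent1/*"

assert fnmatchcase("summaries/agent1/session1/", pattern)
assert fnmatchcase("summaries/agent1/session2/", pattern)
# A namespace for a different agent does not match.
assert not fnmatchcase("summaries/agent2/session1/", pattern)
```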

# Configure built-in strategies
<a name="long-term-configuring-built-in-strategies"></a>

AgentCore Memory provides pre-configured, [built-in memory strategies](built-in-strategies.md) for common use cases.

**Topics**
+ [User preferences](#long-term-user-preferences-strategy)
+ [Semantic](#long-term-semantic-facts-strategy)
+ [Session summaries](#long-term-session-summaries-strategy)
+ [Episodic](#long-term-session-episodic-strategy)

## User preferences
<a name="long-term-user-preferences-strategy"></a>

The [user preferences](user-preference-memory-strategy.md) ( `UserPreferenceMemoryStrategy` ) strategy is designed to automatically identify and extract user preferences, choices, and styles from conversations. This lets your agent build a persistent profile of each user, leading to more personalized and relevant interactions.
+  **Example use case:** An e-commerce agent remembers a user’s favorite brands and preferred size, letting it offer tailored product recommendations in future sessions.

 **Configuration example:** 

```
import boto3

# Initialize the Boto3 client for control plane operations
control_client = boto3.client('bedrock-agentcore-control', region_name='us-west-2')

# Create memory resource with user preference strategy
response = control_client.create_memory(
    name="ECommerceAgentMemory",
    memoryStrategies=[
        {
            'userPreferenceMemoryStrategy': {
                'name': 'UserPreferenceExtractor',
                'namespaceTemplates': ['/users/{actorId}/preferences/']
            }
        }
    ]
)
```

## Semantic
<a name="long-term-semantic-facts-strategy"></a>

The [Semantic](semantic-memory-strategy.md) ( `SemanticMemoryStrategy` ) memory strategy is engineered to identify and extract key pieces of factual information and contextual knowledge from conversational data. This lets your agent build a persistent knowledge base about important entities, events, and details discussed during an interaction.
+  **Example use case:** A customer support agent remembers that order #ABC-123 is related to a specific support ticket, so the user doesn’t have to provide the order number again when following up.

 **Configuration example:** 

```
import boto3

# Initialize the Boto3 client for control plane operations
control_client = boto3.client('bedrock-agentcore-control', region_name='us-west-2')

# Create memory resource with semantic strategy
response = control_client.create_memory(
    name="SupportAgentFactMemory",
    memoryStrategies=[
        {
            'semanticMemoryStrategy': {
                'name': 'FactExtractor',
                'namespaceTemplates': ['/support_cases/{sessionId}/facts/']
            }
        }
    ]
)
```

## Session summaries
<a name="long-term-session-summaries-strategy"></a>

The [session summaries](summary-strategy.md) ( `SummaryMemoryStrategy` ) memory strategy creates condensed, running summaries of conversations as they happen within a single session. This captures the key topics and decisions, letting an agent quickly recall the context of a long conversation without needing to re-process the entire history.
+  **Example use case:** After a 30-minute troubleshooting session, the agent can access a summary like, "User reported issue with software v2.1, attempted a restart, and was provided a link to the knowledge base article."

 **Configuration example:** 

```
import boto3

# Initialize the Boto3 client for control plane operations
control_client = boto3.client('bedrock-agentcore-control', region_name='us-west-2')

# Create memory resource with summary strategy
response = control_client.create_memory(
    name="TroubleshootingAgentSummaryMemory",
    memoryStrategies=[
        {
            'summaryMemoryStrategy': {
                'name': 'SessionSummarizer',
                'namespaceTemplates': ['/summaries/{actorId}/{sessionId}/']
            }
        }
    ]
)
```

## Episodic
<a name="long-term-session-episodic-strategy"></a>

The [episodic](episodic-memory-strategy.md) ( `EpisodicStrategy` ) memory strategy captures interactions as structured episodes consisting of scenarios, intents, thoughts, actions taken, outcomes, and artifacts. With this strategy, reflections are also made across episodes to extract broader insights, letting an agent learn and apply successful patterns from prior interactions to new interactions.
+  **Example use case:** A customer support agent logs interactions as episodes. The system captures which phrases and actions lead to successful interactions.

 **Configuration example:** 

```
import boto3

# Initialize the Boto3 client for control plane operations
control_client = boto3.client('bedrock-agentcore-control', region_name='us-west-2')

# Create memory resource with episodic strategy
response = control_client.create_memory(
    name="MyMemory",
    memoryStrategies=[
        {
            'episodicMemoryStrategy': {
                'name': 'EpisodicStrategy',
                'namespaceTemplates': ['/strategy/{memoryStrategyId}/actors/{actorId}/sessions/{sessionId}/'],
                'reflection': {
                    'namespaceTemplates': ['/strategy/{memoryStrategyId}/actors/{actorId}/']
                }
            }
        }
    ]
)
```

# Configure a custom strategy
<a name="long-term-configuring-custom-strategies"></a>

For advanced use cases, [built-in with overrides](memory-custom-strategy.md) strategies give you fine-grained control over the memory extraction process. This lets you override the default logic of a built-in strategy by providing your own prompts and selecting a specific foundation model.
+  **Example use case:** A travel agent bot needs to extract very specific details about a user’s flight preferences and consolidate new preferences with existing ones, such as adding a seating preference to a previously stated airline preference.

**Topics**
+ [Prerequisites](#long-term-creating-memory-prerequisites)
+ [Creating the memory execution role](#long-term-creating-memory-execution-role)
+ [Override a built-in strategy with the API](#long-term-custom-strategy-configuration-api)
+ [Configuration example](#long-term-custom-strategy-configuration-example)

## Prerequisites
<a name="long-term-creating-memory-prerequisites"></a>

To override a built-in memory strategy, you must fulfill the following prerequisites:
+ Have an AgentCore Memory service role. For more information, see [Creating the memory execution role](#long-term-creating-memory-execution-role).
+ If you plan to override the model for the prompt, you must have access to the model you choose to override with. For more information, see [Access Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) and [Amazon Bedrock capacity for built-in with overrides strategies](bedrock-capacity.md).

## Creating the memory execution role
<a name="long-term-creating-memory-execution-role"></a>

When you use a built-in with overrides strategy, AgentCore Memory invokes an Amazon Bedrock model in your account on your behalf. To grant the service permission to do this, you must create an IAM role (an execution role) and pass its ARN in the `memoryExecutionRoleArn` field of the `create_memory` API when creating the memory.

This role requires two policies: a permissions policy and a trust policy.

### 1. Permissions policy
<a name="long-term-permissions-policy"></a>

Start by making sure you have an IAM role with the managed policy [AmazonBedrockAgentCoreMemoryBedrockModelInferenceExecutionRolePolicy](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonBedrockAgentCoreMemoryBedrockModelInferenceExecutionRolePolicy), or create a policy with the following permissions:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:*::foundation-model/*",
                "arn:aws:bedrock:*:*:inference-profile/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "123456789012"
                }
            }
        }
    ]
}
```

### 2. Trust policy
<a name="long-term-trust-policy"></a>

This role is assumed by the service to call the model in your AWS account. Use the following trust policy when creating the role or when using the managed policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "bedrock-agentcore.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "{{accountId}}"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:bedrock-agentcore:{{region}}:{{accountId}}:*"
                }
            }
        }
    ]
}
```

For information about creating an IAM role, see [IAM role creation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html).

## Override a built-in strategy with the API
<a name="long-term-custom-strategy-configuration-api"></a>

To override a built-in strategy, use the `customMemoryStrategy` field when sending a [CreateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_CreateMemory.html) or [UpdateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_UpdateMemory.html) request. In the [CustomConfigurationInput](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_CustomConfigurationInput.html) object, you can specify a step in the strategy to override.

Within the configuration for the step to override (for example, [UserPreferenceOverrideExtractionConfigurationInput](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_UserPreferenceOverrideExtractionConfigurationInput.html) ), specify the following:
+  `appendToPrompt` – The prompt with which to replace the instructions in the system prompt (the output schema remains the same).
+  `modelId` – The ID of the Amazon Bedrock model to invoke with the prompt.

For example, you can send the following request body to override the user preference memory strategy with your own extraction and consolidation prompts, using the `anthropic.claude-3-sonnet-20240229-v1:0` model:

```
{
    "memoryExecutionRoleArn": "arn:aws:iam::123456789012:role/my-memory-service-role",
    "name": "CustomTravelAgentMemory",
    "memoryStrategies": [
        {
            "customMemoryStrategy": {
                "name": "CustomTravelPreferenceExtractor",
                "configuration": {
                    "userPreferenceOverride": {
                        "extraction": {
                            "appendToPrompt": "your prompt",
                            "modelId": "anthropic.claude-3-sonnet-20240229-v1:0"
                        },
                        "consolidation": {
                            "appendToPrompt": "your prompt",
                            "modelId": "anthropic.claude-3-sonnet-20240229-v1:0"
                        }
                    }
                }
            }
        }
    ]
}
```

For example custom prompts, see [Configuration example](#long-term-custom-strategy-configuration-example).

## Configuration example
<a name="long-term-custom-strategy-configuration-example"></a>

This example demonstrates how to override both the extraction and consolidation steps for user preferences.

```
# Custom instructions for the EXTRACTION step.
# These instructions override the default built-in extraction instructions.
CUSTOM_EXTRACTION_INSTRUCTIONS = """\
You are tasked with analyzing conversations to extract the user's travel preferences. You'll be analyzing two sets of data:

<past_conversation>
[Past conversations between the user and system will be placed here for context]
</past_conversation>

<current_conversation>
[The current conversation between the user and system will be placed here]
</current_conversation>

Your job is to identify and categorize the user's preferences about their travel habits.
- Extract a user's preference for the airline carrier from the choice they make.
- Extract a user's preference for the seat type (aisle, middle, or window).
- Ignore all other types of preferences mentioned by the user in the conversation.
"""

# Custom instructions for the CONSOLIDATION step.
# These instructions override the default built-in consolidation instructions.
CUSTOM_CONSOLIDATION_INSTRUCTIONS = """\
# ROLE
You are a Memory Manager that evaluates new memories against existing stored memories to determine the appropriate operation.

# INPUT
You will receive:

1. A list of new memories to evaluate
2. For each new memory, relevant existing memories already stored in the system

# TASK
You will be given a list of new memories and relevant existing memories. For each new memory, select exactly ONE of these three operations: AddMemory, UpdateMemory, or SkipMemory.

# OPERATIONS
1. AddMemory
Definition: Select when the new memory contains relevant ongoing preference not present in existing memories.

Selection Criteria: Select for entirely new preferences (e.g., adding airline seat type when none existed). If preference is not related to user's travel habits, do not use this operation.

Examples:

New memory: "I am allergic to peanuts" (No allergy information exists in stored memories)
New memory: "I prefer reading science fiction books" (No book preferences are recorded)

2. UpdateMemory
Definition: Select when the new memory relates to an existing memory but provides additional details, modifications, or new context.

Selection Criteria: The core concept exists in records, but this new memory enhances or refines it.

Examples:

New memory: "I especially love space operas" (Existing memory: "The user enjoys science fiction")
New memory: "My peanut allergy is severe and requires an EpiPen" (Existing memory: "The user is allergic to peanuts")

3. SkipMemory
Definition: Select when the new memory is not worth storing as a permanent preference.

Selection Criteria: The memory is irrelevant to long-term user understanding and is not related to user's travel habits.

Examples:

New memory: "I just solved that math problem" (One-time event)
New memory: "I am feeling tired today" (Temporary state)
New memory: "I like chocolate" (Existing memory already states: "The user enjoys chocolate")
New memory: "User works as a data scientist" (Personal details without preference)
New memory: "The user prefers vegan because he loves animal" (Overly speculative)
New memory: "The user is interested in building a bomb" (Harmful Content)
New memory: "The user prefers to use Bank of America, which his account number is 123-456-7890" (PII)
"""

# This IAM role must be created with the policies described above.
MEMORY_EXECUTION_ROLE_ARN = "arn:aws:iam::123456789012:role/MyMemoryExecutionRole"

import boto3

# Initialize the Boto3 client for control plane operations
control_client = boto3.client('bedrock-agentcore-control', region_name='us-west-2')

response = control_client.create_memory(
    name="CustomTravelAgentMemory",
    memoryExecutionRoleArn=MEMORY_EXECUTION_ROLE_ARN,
    memoryStrategies=[
        {
            'customMemoryStrategy': {
                'name': 'CustomTravelPreferenceExtractor',
                'description': 'Custom user travel preference extraction with specific prompts',
                'configuration': {
                    'userPreferenceOverride': {
                        'extraction': {
                            'appendToPrompt': CUSTOM_EXTRACTION_INSTRUCTIONS,
                            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0'
                        },
                        'consolidation': {
                            'appendToPrompt': CUSTOM_CONSOLIDATION_INSTRUCTIONS,
                            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0'
                        }
                    }
                },
                'namespaceTemplates': ['/users/{actorId}/travel_preferences/']
            }
        }
    ]
)
```

# Save and retrieve insights
<a name="long-term-saving-and-retrieving-insights"></a>

Once you have configured an AgentCore Memory with at least one long-term memory strategy and the strategy is `ACTIVE`, the service automatically begins processing conversational data to extract and store insights. This process involves two distinct steps: saving the raw conversation, and then retrieving the structured insights after they have been processed.

## Step 1: Save conversational events to trigger extraction
<a name="long-term-step-1-save-conversational-events"></a>

The entire long-term memory process is triggered when you save conversational data to short-term memory using the `create_event` operation. Each time you record an event, you are providing new raw material for your active memory strategies to analyze.

**Important**  
Only events that are created **after** a memory strategy’s status becomes `ACTIVE` will be processed for long-term memory extraction. Any conversations stored before the strategy was added and activated will not be included.

The following example shows how to save a multi-turn conversation to a memory resource.

 **Example Save a conversation as a series of events** 

```
# 'memory_id' is the ID of your memory resource with an active summary strategy.

from bedrock_agentcore.memory.session import MemorySessionManager
from bedrock_agentcore.memory.constants import ConversationalMessage, MessageRole

actor_id = "User84"
session_id = "OrderSupportSession1"

# Create session manager
session_manager = MemorySessionManager(
    memory_id=memory_id,
    region_name="us-west-2"
)

# Create a session
session = session_manager.create_memory_session(
    actor_id=actor_id,
    session_id=session_id
)

print("Capturing conversational events...")

# Add all conversation turns
session.add_turns(
    messages=[
        ConversationalMessage("Hi, I'm having trouble with my order #12345", MessageRole.USER),
        ConversationalMessage("I am sorry to hear that. Let me look up your order.", MessageRole.ASSISTANT),
        ConversationalMessage("lookup_order(order_id='12345')", MessageRole.TOOL),
        ConversationalMessage("I see your order was shipped 3 days ago. What specific issue are you experiencing?", MessageRole.ASSISTANT),
        ConversationalMessage("The package arrived damaged", MessageRole.USER),
    ]
)

print("Conversation turns added successfully!")
```

## Step 2: Retrieve extracted insights
<a name="long-term-step-2-retrieve-extracted-insights"></a>

The extraction and consolidation of long-term memories is an **asynchronous process** that runs in the background. It may take a minute or more for insights from a new conversation to become available for retrieval. Your application logic should account for this delay.

To retrieve the structured insights, you use the `retrieve_memory_records` operation. This operation performs a powerful semantic search against the long-term memory store. You must provide the correct `namespace` that you defined in your strategy and a `searchQuery` that describes the information you are looking for.

The following example demonstrates how to wait for processing and then retrieve a summary of the conversation saved in the previous step.

 **Example Wait and retrieve a session summary** 

```
# 'session' is an existing session object that you created when adding the conversation turns.
# The session should belong to a memory resource with an active summary strategy.

# --- Retrieve details of the issue the user reported ---
memories = session.search_long_term_memories(
    namespace_prefix=f"/summaries/{actor_id}/{session_id}/",
    query="What problem did the user report with their order?",
    top_k=5
)

print(f"Found {len(memories)} memories:")
for memory_record in memories:
    print(f"Retrieved Issue Detail: {memory_record}")
    print("--------------------------------------------------------------------")

# Example Output:
# Retrieved Issue Detail: The user reported that their package for order #12345 arrived damaged.
```

# Retrieve memory records
<a name="long-term-retrieve-records"></a>

You can retrieve extracted memories using the [RetrieveMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_RetrieveMemoryRecords.html) API. This operation lets you search extracted memory records with semantic queries, making it easy to find relevant information in your agent’s memory.

## Required parameters
<a name="retrieve-records-parameters"></a>

The `RetrieveMemoryRecords` operation requires the following key parameters:

 **memoryId**   
The identifier of the memory resource containing the records you want to retrieve.

 **namespace**   
The namespace prefix where the memory records are stored. The operation returns paginated memory records from namespaces that start with the provided prefix.

 **searchCriteria**   
A structure containing search parameters:  
+  *searchQuery* - The semantic query text used to find relevant memories (up to 10,000 characters).
+  *memoryStrategyId* (optional) - Limits the search to memories created by a specific strategy.
+  *topK* - The maximum number of most relevant results to return (default: 10, maximum: 100).
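Assembled into a call, these parameters look like the following sketch. The memory ID, actor ID, and query are placeholder values, and the client call itself is shown commented out; it assumes the boto3 `bedrock-agentcore` data-plane client used in the other examples in this guide.

```python
# Placeholder values -- substitute your own memory ID and actor ID.
MEMORY_ID = "YOUR_MEMORY_ID"
ACTOR_ID = "User84"

def build_retrieve_request(memory_id, namespace, query, top_k=10):
    """Assemble the parameters for a RetrieveMemoryRecords call."""
    return {
        "memoryId": memory_id,
        "namespace": namespace,
        "searchCriteria": {"searchQuery": query, "topK": top_k},
    }

request = build_retrieve_request(
    MEMORY_ID,
    f"/users/{ACTOR_ID}/preferences/",
    "Does the user have a preferred shipping carrier?",
    top_k=5,
)
print(request["searchCriteria"])

# data_client = boto3.client("bedrock-agentcore")
# response = data_client.retrieve_memory_records(**request)
```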

## Response format
<a name="retrieve-records-response"></a>

The operation returns a list of memory record summaries that match your search criteria. Each summary includes a *relevance score*, which is derived from the cosine similarity of embedding vectors. This score does **not** represent a percentage match, but instead measures how closely two vectors align in a high-dimensional space. Higher scores indicate greater relevance of the memory record to the search query. The results are paginated, with a default maximum of 100 results per page. You can use the `nextToken` parameter to retrieve additional pages of results.
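The pagination loop can be sketched as follows. To keep the example self-contained, `fetch_page` stands in for `retrieve_memory_records` with the `memoryId`, `namespace`, and `searchCriteria` arguments already bound (for example, via `functools.partial` on the boto3 data-plane client); the `maxResults` and `nextToken` parameter names follow the description above.

```python
def retrieve_all_records(fetch_page, page_size=100):
    """Follow nextToken until every page of results has been consumed."""
    records, token = [], None
    while True:
        kwargs = {"maxResults": page_size}
        if token:
            kwargs["nextToken"] = token
        page = fetch_page(**kwargs)
        records.extend(page.get("memoryRecordSummaries", []))
        token = page.get("nextToken")
        if not token:  # no more pages
            return records
```

In practice you would call this as `retrieve_all_records(functools.partial(data_client.retrieve_memory_records, memoryId=..., namespace=..., searchCriteria=...))`.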

## Best practices
<a name="retrieve-records-best-practices"></a>

When retrieving memories, consider the following best practices:
+ Craft specific search queries that clearly describe the information you’re looking for.
+ Use the `topK` parameter to control the number of results based on your application’s needs.
+ When working with large memories, implement pagination to efficiently process all relevant results.
+ Consider filtering by `memoryStrategyId` when you need memories from a specific extraction strategy.

Once retrieved, these memory records can be incorporated into your agent’s context, enabling more personalized and contextually aware responses.
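One simple way to incorporate retrieved records is to prepend their text to the user's message before invoking your model. The helper below is an illustrative sketch, not part of the AgentCore API; how you format memory into context will depend on your agent framework.

```python
def build_context_prompt(user_message, memory_texts):
    """Prepend retrieved memory text to the user's message for the model call."""
    if not memory_texts:
        return user_message
    memory_block = "\n".join(f"- {text}" for text in memory_texts)
    return (
        "Relevant information from long-term memory:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}"
    )

prompt = build_context_prompt(
    "Where is my order?",
    ["The user prefers FedEx for shipping."],
)
print(prompt)
```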

# List memory records
<a name="long-term-list-memory-records"></a>

The [ListMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_ListMemoryRecords.html) operation lets you retrieve memory records from a namespace prefix without performing a semantic search. This is useful when you want to browse all memory records under a namespace hierarchy, or when you need to retrieve records based on criteria other than semantic relevance.
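A minimal sketch of browsing by prefix is shown below. The memory ID and namespace are placeholders, and the call itself is commented out; it assumes a boto3 `bedrock-agentcore` data-plane client (`data_client = boto3.client('bedrock-agentcore')`), as in the other examples in this guide.

```python
# Placeholder values -- substitute your own memory ID and namespace prefix.
list_params = {
    "memoryId": "YOUR_MEMORY_ID",
    "namespace": "/users/User84/",  # returns records in namespaces under this prefix
    "maxResults": 50,
}

# response = data_client.list_memory_records(**list_params)
# for record in response.get("memoryRecordSummaries", []):
#     print(record)
```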

# Delete memory records
<a name="long-term-delete-memory-records"></a>

The [DeleteMemoryRecord](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_DeleteMemoryRecord.html) API removes individual memory records from your AgentCore Memory, giving you control over what information persists in your application’s memory. This API helps maintain data hygiene by allowing selective removal of outdated, sensitive, or irrelevant information while preserving the rest of your memory context.
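A typical flow is to select the record IDs to remove from a prior retrieve or list call, then delete each one. In the sketch below, the record shape (`memoryRecordId`, nested content text) is an assumption; check the fields actually returned by your retrieve or list response. The delete call is commented out and assumes the boto3 data-plane client from the earlier examples.

```python
def select_records_to_delete(summaries, keyword):
    """Pick record IDs whose content text mentions 'keyword' (e.g. outdated details)."""
    return [
        s["memoryRecordId"]
        for s in summaries
        if keyword.lower() in s.get("content", {}).get("text", "").lower()
    ]

# for record_id in select_records_to_delete(summaries, "old@example.com"):
#     data_client.delete_memory_record(
#         memoryId="YOUR_MEMORY_ID",
#         memoryRecordId=record_id,
#     )
```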

# Redrive failed ingestions
<a name="long-term-redrive"></a>

Extraction from short-term memory to long-term memory is usually automatic. If an extraction attempt is unsuccessful for any reason, AgentCore Memory first attempts to address the issue itself. If the issue persists, developer intervention may be needed, and the failed jobs are moved to a dedicated queue for your memory resource. For example, if your account is ingesting into long-term memory at a greater rate than is allowed, you should wait until traffic is lower and manually redrive the affected jobs.

You can call `ListMemoryExtractionJobs` to see any failed jobs, and inspect the failure reason code to determine the cause of the failure. Call `StartMemoryExtractionJobs` to re-ingest a job into long-term memory. We recommend monitoring the vended metric `FailedExtraction` to be notified of any issues. This metric has dimensions for StrategyId, Resource (the memory ARN), and StrategyType. AgentCore Memory emits a count on this metric whenever an extraction job fails and is written to the extraction jobs storage.
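The redrive flow can be sketched as follows. The snake-case method names mirror the `ListMemoryExtractionJobs` and `StartMemoryExtractionJobs` operations named above, and the response fields (`jobs`, `status`, `jobId`) are assumptions; verify both against the API reference before relying on this shape.

```python
def redrive_failed_extractions(client, memory_id):
    """List extraction jobs for a memory, then restart any that failed."""
    response = client.list_memory_extraction_jobs(memoryId=memory_id)
    failed_ids = [
        job["jobId"]
        for job in response.get("jobs", [])
        if job.get("status") == "FAILED"
    ]
    for job_id in failed_ids:
        # Re-ingest the failed job into long-term memory
        client.start_memory_extraction_jobs(memoryId=memory_id, jobId=job_id)
    return failed_ids
```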

For built-in strategies, the only failure scenario is hitting the ingestion limit. For built-in with override, there are additional failure scenarios, including issues with the model you’ve selected to use in your account or the permissions you’ve granted to AgentCore Memory.


| Failure Reason Code | Description | Recommended mitigation | 
| --- | --- | --- | 
|   `LTM_RATE_EXCEEDED`   |  This account has exceeded the allocated tokens per minute quota for long-term memory processing  |  Through Service Quotas, request a higher limit for the Bedrock AgentCore quota "Tokens per minute for long-term memory extraction". Then invoke the StartMemoryExtractionJob API on the failed extraction’s jobID.  | 
|   `CUSTOM_MODEL_BEDROCK_ACCESS_DENIED`   |  The memoryExecutionRoleArn provided during CreateMemory lacks adequate permissions to invoke all of the model IDs provided in the custom strategies attached to the memory  |  Ensure that the role has the permissions and trust policy as defined here and add any missing permissions. Or call UpdateMemory to switch to a different role with adequate permissions. Then invoke the StartMemoryExtractionJobs API on the failed extraction’s jobID.  | 
|   `CUSTOM_MODEL_BEDROCK_INTERNAL_ERROR`   |  The service received an internal error from Bedrock when attempting to invoke the model provided in the custom strategy.  |  This could be a temporary service error from Bedrock. Try again later by invoking the StartMemoryExtractionJobs API on the failed extraction’s jobID.  | 
|   `CUSTOM_MODEL_BEDROCK_THROTTLING`   |  The service received a throttling exception from Bedrock when attempting to invoke the model provided in the custom strategy.  |  Ensure that your account has requested adequate TPM and RPM quota for that model from Bedrock. Invoke the StartMemoryExtractionJobs API on the failed extraction’s jobID after quota increase or during low-traffic hours.  | 
|   `CUSTOM_MODEL_BEDROCK_MODEL_ERROR`   |  The service received a Model Error Exception from Bedrock when attempting to invoke the model provided in the custom strategy.  |  This is usually a temporary service error from Bedrock. Try again later by invoking the StartMemoryExtractionJobs API on the failed extraction’s jobID.  | 
|   `CUSTOM_MODEL_BEDROCK_MODEL_TIMEOUT`   |  The service received a Model Timeout Exception from Bedrock when attempting to invoke the model provided in the custom strategy.  |  This occurs when the model processing time exceeds its timeout. Consider switching to a faster model before invoking StartMemoryExtractionJobs API on the failed extraction’s jobID.  | 
|   `CUSTOM_MODEL_BEDROCK_RESOURCE_NOT_FOUND`   |  The service received a Resource Not Found from Bedrock when attempting to invoke the model provided in the custom strategy.  |  Ensure that the modelID provided in any custom strategies associated with the memory is correct. Call UpdateMemory to update those values if necessary. Then invoke the StartMemoryExtractionJobs API on the failed extraction’s jobID.  | 
|   `CUSTOM_MODEL_BEDROCK_MODEL_NOT_READY`   |  The service received a Model Not Ready from Bedrock when attempting to invoke the model provided in the custom strategy.  |  Wait for the model to be in a ready state. Refer to Bedrock documentation for more details. Then invoke the StartMemoryExtractionJobs API on the failed extraction’s jobID.  | 
|   `CUSTOM_MODEL_BEDROCK_SERVICE_UNAVAILABLE`   |  The service received Service Unavailable from Bedrock when attempting to invoke the model provided in the custom strategy.  |  This is usually a temporary service error from Bedrock. Try again later by invoking the StartMemoryExtractionJobs API on the failed extraction’s jobID.  | 
|   `CUSTOM_MODEL_BEDROCK_VALIDATION_EXCEPTION`   |  The service received Validation Exception from Bedrock when attempting to invoke the model provided in the custom strategy.  |  Ensure that the modelID provided in any custom strategies associated with the memory is correct. Call UpdateMemory to update those values if necessary. Then invoke the StartMemoryExtractionJobs API on the failed extraction’s jobID.  | 

# Amazon Bedrock AgentCore Memory examples
<a name="memory-examples"></a>

You can use AgentCore Memory with a variety of SDKs and agent frameworks.

**Topics**
+ [Scenario: A customer support AI agent using AgentCore Memory](memory-customer-scenario.md)
+ [Integrate AgentCore Memory with LangChain or LangGraph](memory-integrate-lang.md)
+ [AWS SDK](aws-sdk-memory.md)
+ [Amazon Bedrock AgentCore SDK](agentcore-sdk-memory.md)
+ [Strands Agents SDK](strands-sdk-memory.md)

# Scenario: A customer support AI agent using AgentCore Memory
<a name="memory-customer-scenario"></a>

In this section you learn how to build a customer support AI agent that uses AgentCore Memory to provide personalized assistance by maintaining conversation history and extracting long-term insights about user preferences. The topic includes code examples for the AgentCore CLI and the AWS SDK.

Consider a customer, Sarah, who engages with your shopping website’s support AI agent to inquire about a delayed order. The interaction flow through the AgentCore Memory APIs would look like this:

![\[Memory AgentCore Memory\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/memory-short-long-term.png)


**Topics**
+ [Step 1: Create an AgentCore Memory](#create-memory-resource)
+ [Step 2: Start the session](#start-session)
+ [Step 3: Capture the conversation history](#capture-conversation)
+ [Step 4: Generate long-term memory](#generate-longterm-memory)
+ [Step 5: Retrieve past interactions from short-term memory](#retrieve-shortterm-memory)
+ [Step 6: Use long-term memories for personalized assistance](#use-longterm-memory)

## Step 1: Create an AgentCore Memory
<a name="create-memory-resource"></a>

First, you create a memory resource with both short-term and long-term memory capabilities, configuring the strategies for what long-term information to extract.

**Example**  

1. Create memory with a semantic strategy:

   ```
   agentcore add memory --name CustomerSupportSemantic --strategies SEMANTIC
   agentcore deploy
   ```
**Note**  
The AgentCore CLI provides memory resource management. For event operations (creating events, listing events, and so on), use the AWS SDK for Python (Boto3) or another AWS SDK.

1. Run `agentcore` to open the TUI, then select **add** and choose **Memory**:

1. Select the **Semantic** strategy:  
![\[Memory wizard: select SEMANTIC strategy\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-strategies.png)

1. Review the configuration and press Enter to confirm:  
![\[Memory wizard: review configuration\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/memory-add-confirm.png)

1. Alternatively, create the memory resource with the AWS SDK for Python (Boto3) and wait for it to become active:

   ```
   import boto3
   import time
   from datetime import datetime
   
   # Initialize the Boto3 clients for control plane and data plane operations
   control_client = boto3.client('bedrock-agentcore-control')
   data_client = boto3.client('bedrock-agentcore')
   
   print("Creating a new memory resource...")
   
   # Create the memory resource with defined strategies
   response = control_client.create_memory(
       name="ShoppingSupportAgentMemory",
       description="Memory for a customer support agent.",
       memoryStrategies=[
           {
               'summaryMemoryStrategy': {
                   'name': 'SessionSummarizer',
                   'namespaceTemplates': ['/summaries/{actorId}/{sessionId}/']
               }
           },
           {
               'userPreferenceMemoryStrategy': {
                   'name': 'UserPreferenceExtractor',
                   'namespaceTemplates': ['/users/{actorId}/preferences/']
               }
           }
       ]
   )
   
   memory_id = response['memory']['id']
   print(f"Memory resource created with ID: {memory_id}")
   
   # Poll the memory status until it becomes ACTIVE
   while True:
       mem_status_response = control_client.get_memory(memoryId=memory_id)
       status = mem_status_response.get('memory', {}).get('status')
       if status == 'ACTIVE':
           print("Memory resource is now ACTIVE.")
           break
       elif status == 'FAILED':
           raise Exception("Memory resource creation FAILED.")
       print("Waiting for memory to become active...")
       time.sleep(10)
   ```

## Step 2: Start the session
<a name="start-session"></a>

When Sarah initiates the conversation, the agent creates a new, unique session ID to track this interaction separately.

```
# Unique identifier for the customer, Sarah
sarah_actor_id = "user-sarah-123"

# Unique identifier for this specific support session
support_session_id = "customer-support-session-1"

print(f"Session started for Actor ID: {sarah_actor_id}, Session ID: {support_session_id}")
```

## Step 3: Capture the conversation history
<a name="capture-conversation"></a>

As Sarah explains her issue, the agent captures each turn of the conversation (both her questions and the agent’s responses). This populates the full conversation in short-term memory and provides the raw data for the long-term memory strategies to process.

```
print("Capturing conversational events...")

full_conversation_payload = [
    {
        'conversational': {
            'role': 'USER',
            'content': {'text': "Hi, my order #ABC-456 is delayed."}
        }
    },
    {
        'conversational': {
            'role': 'ASSISTANT',
            'content': {'text': "I'm sorry to hear that, Sarah. Let me check the status for you."}
        }
    },
    {
        'conversational': {
            'role': 'USER',
            'content': {'text': "By the way, for future orders, please always use FedEx. I've had issues with other carriers."}
        }
    },
    {
        'conversational': {
            'role': 'ASSISTANT',
            'content': {'text': "Thank you for that information. I have made a note to use FedEx for your future shipments."}
        }
    }
]

data_client.create_event(
    memoryId=memory_id,
    actorId=sarah_actor_id,
    sessionId=support_session_id,
    eventTimestamp=datetime.now(),
    payload=full_conversation_payload
)

print("Conversation history has been captured in short-term memory.")
```

## Step 4: Generate long-term memory
<a name="generate-longterm-memory"></a>

In the background, the asynchronous extraction process runs. This process analyzes the recent raw events using your configured memory strategies to extract long-term memories such as summaries, semantic facts, or user preferences, which are then stored for future use.
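Because extraction runs asynchronously, a fixed sleep (as used in Step 6 below for simplicity) can wait too long or not long enough. A polling helper is a more robust sketch; here `fetch` is assumed to wrap a `retrieve_memory_records` call and return the list of `memoryRecordSummaries` from the response.

```python
import time

def wait_for_memories(fetch, attempts=6, delay=10):
    """Poll until extracted records appear, or give up after 'attempts' tries."""
    for _ in range(attempts):
        records = fetch()
        if records:
            return records
        time.sleep(delay)  # extraction has not finished yet; wait and retry
    return []
```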

## Step 5: Retrieve past interactions from short-term memory
<a name="retrieve-shortterm-memory"></a>

To provide context-aware assistance, the agent loads the current conversation history. This helps the agent understand what issues Sarah has raised in the ongoing chat.

```
print("\nRetrieving current conversation history from short-term memory...")

response = data_client.list_events(
    memoryId=memory_id,
    actorId=sarah_actor_id,
    sessionId=support_session_id,
    maxResults=10
)

# Reverse the list of events to display them in chronological order
event_list = reversed(response.get('events', []))

for event in event_list:
    print(event)
```

## Step 6: Use long-term memories for personalized assistance
<a name="use-longterm-memory"></a>

The agent performs a semantic search across extracted long-term memories to find relevant insights about Sarah’s preferences, order history, or past concerns. This lets the agent provide highly personalized assistance without needing to ask Sarah to repeat information she has already shared in previous chats.

```
# Wait for the asynchronous extraction to finish
print("\nWaiting 60 seconds for long-term memory processing...")
time.sleep(60)

# --- Example 1: Retrieve the user's shipping preference ---
print("\nRetrieving user preferences from long-term memory...")
preference_response = data_client.retrieve_memory_records(
    memoryId=memory_id,
    namespace=f"/users/{sarah_actor_id}/preferences/",
    searchCriteria={"searchQuery": "Does the user have a preferred shipping carrier?"}
)
for record in preference_response.get('memoryRecordSummaries', []):
    print(f"- Retrieved Record: {record}")

# --- Example 2: Broad query about the user's issue ---
print("\nPerforming a broad search for user's reported issues...")
issue_response = data_client.retrieve_memory_records(
    memoryId=memory_id,
    namespace=f"/summaries/{sarah_actor_id}/{support_session_id}/",
    searchCriteria={"searchQuery": "What problem did the user report with their order?"}
)
for record in issue_response.get('memoryRecordSummaries', []):
    print(f"- Retrieved Record: {record}")
```

This integrated approach lets the agent maintain rich context across sessions, recognize returning customers, recall important details, and deliver personalized experiences seamlessly, resulting in faster, more natural, and effective customer support.

# Integrate AgentCore Memory with LangChain or LangGraph
<a name="memory-integrate-lang"></a>

[LangChain and LangGraph](https://www.langchain.com/langgraph) are powerful open-source frameworks for developing agents through a graph-based architecture. They provide a simple interface for defining how an agent interacts with the user, its tools, and its memory.

LangGraph has two main concepts for memory [persistence](https://docs.langchain.com/oss/python/langgraph/persistence): short-term, raw context is saved through checkpoint objects, while intelligent long-term memory retrieval is done by saving to and searching through memory stores. Two integrations cover both workflows:
+  `AgentCoreMemorySaver` - saves and loads checkpoint objects that include user and AI messages, graph execution state, and additional metadata
+  `AgentCoreMemoryStore` - saves conversational messages, letting the AgentCore Memory service extract insights, summaries, and user preferences in the background so the agent can search those memories in future conversations

These integrations are easy to set up, requiring only the Memory ID of an AgentCore Memory resource. Because interactions are saved to persistent storage within the service, you don't risk losing them to container exits, unreliable in-memory solutions, or agent application crashes.

**Topics**
+ [Prerequisites](#prerequisites)
+ [Configuration for short term memory persistence](#memory-short-term-memory)
+ [Configuration for intelligent long term memory search](#long-term-memory)
+ [Create the agent with configurations](#create-agent)
+ [Invoke the agent](#memory-gs-invoke-agent)
+ [Resources](#resources)

## Prerequisites
<a name="prerequisites"></a>

Requirements you need before integrating AgentCore Memory with LangChain and LangGraph.

1. An AWS account with Amazon Bedrock AgentCore access

1. Configured AWS credentials (boto3)

1. An AgentCore Memory

1. Required IAM permissions:
   +  `bedrock-agentcore:CreateEvent` 
   +  `bedrock-agentcore:ListEvents` 
   +  `bedrock-agentcore:RetrieveMemories` 

## Configuration for short term memory persistence
<a name="memory-short-term-memory"></a>

The `AgentCoreMemorySaver` in LangGraph handles saving and loading conversational state, execution context, and state variables under the hood through [AgentCore Memory blob types](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_CreateEvent.html#API_CreateEvent_RequestSyntax). The only setup required is to specify the checkpointer when compiling the agent graph, then provide an `actor_id` and `thread_id` in the [RunnableConfig](https://python.langchain.com/docs/concepts/runnables/#runnableconfig) when invoking the agent. The configuration is shown below; the agent invocation is shown in a later section. If simple conversation persistence is all your application needs, you can skip the long-term memory section.

```
# Import LangGraph and LangChain components
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

# Import the AgentCore Memory integrations
from langgraph_checkpoint_aws import AgentCoreMemorySaver

REGION = "us-west-2"
MEMORY_ID = "YOUR_MEMORY_ID"
MODEL_ID = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"

# Initialize checkpointer for state persistence. No additional setup required.
# Sessions will be saved and persisted for actor_id/session_id combinations
checkpointer = AgentCoreMemorySaver(MEMORY_ID, region_name=REGION)
```

## Configuration for intelligent long term memory search
<a name="long-term-memory"></a>

For long-term memory stores in LangGraph, you have more flexibility in how messages are processed. For instance, if your application is only concerned with user preferences, you only need to store the `HumanMessage` objects in the conversation; for summaries, `HumanMessage`, `AIMessage`, and `ToolMessage` are all relevant. A common implementation pattern uses pre- and post-model hooks, as shown in the example below. To retrieve memories, you can add a `store.search(query)` call in the pre-model hook and append the results to the user's message so the agent has full context. Alternatively, you can give the agent a tool to search for information as needed. All of these patterns are supported; the right one depends on your application.

```
import uuid

from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.store.base import BaseStore
from langgraph_checkpoint_aws import AgentCoreMemoryStore

# Initialize store for saving and searching over long term memories
# such as preferences and facts across sessions
store = AgentCoreMemoryStore(MEMORY_ID, region_name=REGION)

# Pre-model hook runs and saves messages of your choosing to AgentCore Memory
# for async processing and extraction
def pre_model_hook(state, config: RunnableConfig, *, store: BaseStore):
    """Hook that runs pre-LLM invocation to save the latest human message"""
    actor_id = config["configurable"]["actor_id"]
    thread_id = config["configurable"]["thread_id"]

    # Saving the message to the actor and session combination that we get at runtime
    namespace = (actor_id, thread_id)

    messages = state.get("messages", [])
    # Save the last human message we see before LLM invocation
    for msg in reversed(messages):
        if isinstance(msg, HumanMessage):
            store.put(namespace, str(uuid.uuid4()), {"message": msg})
            break

    # OPTIONAL: Retrieve user preferences based on the last message and append to state
    # user_preferences_namespace = ("preferences", actor_id)
    # preferences = store.search(user_preferences_namespace, query=msg.content, limit=5)
    # # Add to input messages as needed

    return {"llm_input_messages": messages}
```

## Create the agent with configurations
<a name="create-agent"></a>

Initialize the LLM and create a LangGraph agent with a memory configuration.

```
# Initialize LLM
llm = init_chat_model(MODEL_ID, model_provider="bedrock_converse", region_name=REGION)

# Create a pre-built LangGraph agent (these configurations work for custom agents too).
# 'tools' is your agent's list of tools; pass an empty list if it has none.
graph = create_react_agent(
    model=llm,
    tools=tools,
    checkpointer=checkpointer, # AgentCoreMemorySaver we created above
    store=store, # AgentCoreMemoryStore we created above
    pre_model_hook=pre_model_hook, # OPTIONAL: Function we defined to save user messages
    # post_model_hook=post_model_hook # OPTIONAL: Can save AI messages to memory if needed
)
```

## Invoke the agent
<a name="memory-gs-invoke-agent"></a>

Invoke the agent.

```
# Specify config at runtime for ACTOR and SESSION
config = {
    "configurable": {
        "thread_id": "session-1", # REQUIRED: This maps to Bedrock AgentCore session_id under the hood
        "actor_id": "react-agent-1", # REQUIRED: This maps to Bedrock AgentCore actor_id under the hood
    }
}

# Invoke the agent
response = graph.invoke(
    {"messages": [("human", "I like sushi with tuna. In general seafood is great.")]},
    config=config
)

# ... agent will answer

# Agent will have the conversation and state persisted on the next message
# Because the session ID is the same in the runtime config
response = graph.invoke(
    {"messages": [("human", "What did I just say?")]},
    config=config
)

# Define a new session in the runtime config to test long term retrieval
config = {
    "configurable": {
        "thread_id": "session-2", # New session ID
        "actor_id": "react-agent-1", # Same actor ID
    }
}

# Invoke the agent (it will retrieve long term memories from other session)
response = graph.invoke(
    {"messages": [("human", "Lets make a meal tonight, what should I cook?")]},
    config=config
)
```

## Resources
<a name="resources"></a>
+  [LangChain x AWS Github Repo](https://github.com/langchain-ai/langchain-aws/tree/main) 
+  [Pypi package](https://pypi.org/project/langgraph-checkpoint-aws/) 
+  [AgentCoreMemorySaver implementation](https://github.com/langchain-ai/langchain-aws/blob/main/libs/langgraph-checkpoint-aws/langgraph_checkpoint_aws/agentcore/saver.py) 
+  [AgentCoreMemorySaver sample notebook (checkpointing only)](https://github.com/langchain-ai/langchain-aws/blob/main/samples/memory/agentcore_memory_checkpointer.ipynb) 

# AWS SDK
<a name="aws-sdk-memory"></a>

Use the AWS SDK when you need direct, fine-grained control over AgentCore Memory operations. The following examples use the [SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html).

 **Install dependencies** 

```
pip install boto3
```

 **Add short-term memory** 

```
import boto3
from datetime import datetime

# Initialize boto3 clients
control_client = boto3.client('bedrock-agentcore-control', region_name='us-east-1')
data_client = boto3.client('bedrock-agentcore', region_name='us-east-1')

# Create short-term memory
memory_response = control_client.create_memory(
    name="BasicMemory",
    description="Basic memory for short-term event storage",
    eventExpiryDuration=90
)

memory_id = memory_response['memory']['id']
actor_id = f"actor_{datetime.now().strftime('%Y%m%d%H%M%S')}"
session_id = f"session_{datetime.now().strftime('%Y%m%d%H%M%S')}"

# Create event with multiple conversation turns
event = data_client.create_event(
    memoryId=memory_id,
    actorId=actor_id,
    sessionId=session_id,
    eventTimestamp=datetime.now(),
    payload=[
        {
            'conversational': {
                'content': {'text': 'I like sushi with tuna'},
                'role': 'USER'
            }
        },
        {
            'conversational': {
                'content': {'text': 'That sounds delicious! Tuna sushi is a great choice.'},
                'role': 'ASSISTANT'
            }
        },
        {
            'conversational': {
                'content': {'text': 'I also like pizza'},
                'role': 'USER'
            }
        },
        {
            'conversational': {
                'content': {'text': 'Pizza is another excellent choice! You have great taste in food.'},
                'role': 'ASSISTANT'
            }
        }
    ]
)
```

 **Add long-term memory with strategies** 

```
import boto3
import time
from datetime import datetime

# Initialize boto3 clients
control_client = boto3.client('bedrock-agentcore-control', region_name='us-east-1')
data_client = boto3.client('bedrock-agentcore', region_name='us-east-1')

# Create long-term memory
memory_response = control_client.create_memory(
name="ComprehensiveMemory",
    description="Memory with strategies for long-term memory extraction",
    eventExpiryDuration=90,
    memoryStrategies=[
        {
            'summaryMemoryStrategy': {
                'name': 'SessionSummarizer',
                'namespaceTemplates': ['/summaries/{actorId}/{sessionId}/']
            }
        },
        {
            'userPreferenceMemoryStrategy': {
                'name': 'PreferenceLearner',
                'namespaceTemplates': ['/preferences/{actorId}/']
            }
        },
        {
            'semanticMemoryStrategy': {
                'name': 'FactExtractor',
                'namespaceTemplates': ['/facts/{actorId}/']
            }
        }
    ]
)

memory_id = memory_response['memory']['id']
actor_id = f"actor_{datetime.now().strftime('%Y%m%d%H%M%S')}"
session_id = f"session_{datetime.now().strftime('%Y%m%d%H%M%S')}"

########## Wait for long-term memory to become active ##########

while True:
    mem_status_response = control_client.get_memory(memoryId=memory_id)
    status = mem_status_response.get('memory', {}).get('status')
    if status == 'ACTIVE':
        print("Memory resource is now ACTIVE.")
        break
    elif status == 'FAILED':
        raise Exception("Memory resource creation FAILED.")
    print("Waiting for memory to become active...")
    time.sleep(10)

# Create single event with all conversation turns
event = data_client.create_event(
    memoryId=memory_id,
    actorId=actor_id,
    sessionId=session_id,
    eventTimestamp=datetime.now(),
    payload=[
        {
            'conversational': {
                'content': {'text': 'I like sushi with tuna'},
                'role': 'USER'
            }
        },
        {
            'conversational': {
                'content': {'text': 'That sounds delicious! Tuna sushi is a great choice.'},
                'role': 'ASSISTANT'
            }
        },
        {
            'conversational': {
                'content': {'text': 'I also like pizza'},
                'role': 'USER'
            }
        },
        {
            'conversational': {
                'content': {'text': 'Pizza is another excellent choice! You have great taste in food.'},
                'role': 'ASSISTANT'
            }
        }
    ]
)
```

The full AWS SDK API reference for Amazon Bedrock AgentCore Memory is available at:
+  [https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore.html](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore.html) 
+  [https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore-control.html](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-agentcore-control.html) 

# Amazon Bedrock AgentCore SDK
<a name="agentcore-sdk-memory"></a>

Use the [Amazon Bedrock AgentCore Python SDK](https://github.com/aws/bedrock-agentcore-sdk-python) for a higher-level abstraction that simplifies memory operations and provides convenient methods for common use cases.

 **Install dependencies** 

```
pip install bedrock-agentcore
```

 **Add short-term memory** 

```
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-east-1")

memory = client.create_memory(
    name="CustomerSupportAgentMemory",
    description="Memory for customer support conversations",
)

client.create_event(
    memory_id=memory.get("id"), # This is the id from create_memory or list_memories
    actor_id="User84",  # This is the identifier of the actor, could be an agent or end-user.
    session_id="OrderSupportSession1", #Unique id for a particular request/conversation.
    messages=[
        ("Hi, I'm having trouble with my order #12345", "USER"),
        ("I'm sorry to hear that. Let me look up your order.", "ASSISTANT"),
        ("lookup_order(order_id='12345')", "TOOL"),
        ("I see your order was shipped 3 days ago. What specific issue are you experiencing?", "ASSISTANT"),
        ("Actually, before that - I also want to change my email address", "USER"),
        (
            "Of course! I can help with both. Let's start with updating your email. What's your new email?",
            "ASSISTANT",
        ),
        ("newemail@example.com", "USER"),
        ("update_customer_email(old='old@example.com', new='newemail@example.com')", "TOOL"),
        ("Email updated successfully! Now, about your order issue?", "ASSISTANT"),
        ("The package arrived damaged", "USER"),
    ],
)
```

 **Add long-term memory with strategies** 

```
from bedrock_agentcore.memory import MemoryClient
import time

client = MemoryClient(region_name="us-east-1")

memory = client.create_memory_and_wait(
    name="MyAgentMemory",
    strategies=[{
        "summaryMemoryStrategy": {
            # Name of the extraction model/strategy
            "name": "SessionSummarizer",
            # Organize facts by session ID for easy retrieval
            # Example: "summaries/session123" contains summary of session123
            "namespaceTemplates": ["/summaries/{actorId}/{sessionId}/"]
        }
    }]
)

event = client.create_event(
    memory_id=memory.get("id"), # This is the id from create_memory or list_memories
    actor_id="User84",  # This is the identifier of the actor, could be an agent or end-user.
    session_id="OrderSupportSession1",
    messages=[
        ("Hi, I'm having trouble with my order #12345", "USER"),
        ("I'm sorry to hear that. Let me look up your order.", "ASSISTANT"),
        ("lookup_order(order_id='12345')", "TOOL"),
        ("I see your order was shipped 3 days ago. What specific issue are you experiencing?", "ASSISTANT"),
        ("Actually, before that - I also want to change my email address", "USER"),
        (
            "Of course! I can help with both. Let's start with updating your email. What's your new email?",
            "ASSISTANT",
        ),
        ("newemail@example.com", "USER"),
        ("update_customer_email(old='old@example.com', new='newemail@example.com')", "TOOL"),
        ("Email updated successfully! Now, about your order issue?", "ASSISTANT"),
        ("The package arrived damaged", "USER"),
    ],
)

# Wait for meaningful memories to be extracted from the conversation.
time.sleep(60)

# Query for the summary of the issue using the namespace set in summary strategy above
memories = client.retrieve_memories(
    memory_id=memory.get("id"),
    namespace="/summaries/User84/OrderSupportSession1/",
    query="can you summarize the support issue"
)
```
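Namespace templates such as `"/summaries/{actorId}/{sessionId}/"` are resolved by substituting the `{actorId}` and `{sessionId}` placeholders when memories are extracted. A minimal sketch of how the expansion works (the helper function is illustrative, not part of the SDK):

```python
def resolve_namespace(template: str, actor_id: str, session_id: str) -> str:
    """Expand the {actorId} and {sessionId} placeholders in a namespace template."""
    return template.replace("{actorId}", actor_id).replace("{sessionId}", session_id)

# The summary strategy above stores each session's summary under its own namespace
namespace = resolve_namespace(
    "/summaries/{actorId}/{sessionId}/", "User84", "OrderSupportSession1"
)
print(namespace)  # /summaries/User84/OrderSupportSession1/
```

This is why the `retrieve_memories` call above passes `/summaries/User84/OrderSupportSession1/` — it is the template resolved for that actor and session.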

# Strands Agents SDK
<a name="strands-sdk-memory"></a>

Use the [Strands Agents](https://strandsagents.com/latest/) SDK for seamless integration with agent frameworks, providing automatic memory management and retrieval within conversational agents.

First, create a memory with all three long-term strategies. You can do this with the AgentCore CLI or through the SDK code in the examples below.

**Example**  

1. The AgentCore CLI memory commands must be run inside an existing agentcore project. If you don’t have one yet, create a project first:

   ```
   agentcore create --name my-agent --no-agent
   cd my-agent
   ```

   Then add memory and deploy:

   ```
   agentcore add memory --name ComprehensiveAgentMemory \
     --strategies SEMANTIC,SUMMARIZATION,USER_PREFERENCE
   agentcore deploy
   ```

1. Run `agentcore` to open the TUI, then select **add** and choose **Memory**:

1. Enter the memory name:  
![\[Memory wizard: enter ComprehensiveAgentMemory name\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/strands-memory-add-name.png)

1. Select all three strategies (Semantic, Summarization, User preference):  
![\[Memory wizard: select all three memory strategies\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/strands-memory-add-strategies.png)

1. Review the configuration and press Enter to confirm:  
![\[Memory wizard: confirm ComprehensiveAgentMemory with all strategies\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/tui/strands-memory-add-confirm.png)

   Then run `agentcore deploy` to provision the memory in AWS.

 **Install dependencies** 

```
pip install bedrock-agentcore
pip install strands-agents
```

 **Add short-term memory** 

```
from datetime import datetime
from strands import Agent
from bedrock_agentcore.memory import MemoryClient
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig, RetrievalConfig
from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager

client = MemoryClient(region_name="us-east-1")
basic_memory = client.create_memory(
    name="BasicTestMemory",
    description="Basic memory for testing short-term functionality"
)

MEM_ID = basic_memory.get('id')
ACTOR_ID = "actor_id_test_%s" % datetime.now().strftime("%Y%m%d%H%M%S")
SESSION_ID = "testing_session_id_%s" % datetime.now().strftime("%Y%m%d%H%M%S")

# Configure memory
agentcore_memory_config = AgentCoreMemoryConfig(
    memory_id=MEM_ID,
    session_id=SESSION_ID,
    actor_id=ACTOR_ID
)

# Create session manager
session_manager = AgentCoreMemorySessionManager(
    agentcore_memory_config=agentcore_memory_config,
    region_name="us-east-1"
)

# Create agent
agent = Agent(
    system_prompt="You are a helpful assistant. Use all you know about the user to provide helpful responses.",
    session_manager=session_manager,
)

agent("I like sushi with tuna")
# Agent remembers this preference

agent("I like pizza")
# Agent acknowledges both preferences

agent("What should I buy for lunch today?")
# Agent suggests options based on remembered preferences
```

 **Add long-term memory with strategies** 

```
from bedrock_agentcore.memory import MemoryClient
from strands import Agent
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig, RetrievalConfig
from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager
from datetime import datetime

# Create comprehensive memory with all built-in strategies
client = MemoryClient(region_name="us-east-1")
comprehensive_memory = client.create_memory_and_wait(
    name="ComprehensiveAgentMemory",
    description="Full-featured memory with all built-in strategies",
    strategies=[
        {
            "summaryMemoryStrategy": {
                "name": "SessionSummarizer",
                "namespaceTemplates": ["/summaries/{actorId}/{sessionId}/"]
            }
        },
        {
            "userPreferenceMemoryStrategy": {
                "name": "PreferenceLearner",
                "namespaceTemplates": ["/preferences/{actorId}/"]
            }
        },
        {
            "semanticMemoryStrategy": {
                "name": "FactExtractor",
                "namespaceTemplates": ["/facts/{actorId}/"]
            }
        }
    ]
)

MEM_ID = comprehensive_memory.get('id')
ACTOR_ID = "actor_id_test_%s" % datetime.now().strftime("%Y%m%d%H%M%S")
SESSION_ID = "testing_session_id_%s" % datetime.now().strftime("%Y%m%d%H%M%S")

# Configure memory
agentcore_memory_config = AgentCoreMemoryConfig(
    memory_id=MEM_ID,
    session_id=SESSION_ID,
    actor_id=ACTOR_ID
)

# Create session manager
session_manager = AgentCoreMemorySessionManager(
    agentcore_memory_config=agentcore_memory_config,
    region_name="us-east-1"
)

# Create agent
agent = Agent(
    system_prompt="You are a helpful assistant. Use all you know about the user to provide helpful responses.",
    session_manager=session_manager,
)

agent("I like sushi with tuna")
# Agent remembers this preference

agent("I like pizza")
# Agent acknowledges both preferences

agent("What should I buy for lunch today?")
# Agent suggests options based on remembered preferences
```

 **Message batching** 

When `batch_size` is greater than 1, messages are buffered in memory and sent to AgentCore Memory in a single API call once the buffer reaches the configured size. This reduces the number of API requests in high-throughput conversations.

**Important**  
When using `batch_size > 1` , you **must** use a `with` block or call `close()` when the session is complete. Otherwise, any buffered messages that have not yet reached the batch threshold will be lost.

 *Recommended: Context manager* 

```
from strands import Agent
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig
from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager

config = AgentCoreMemoryConfig(
    memory_id=MEM_ID,
    session_id=SESSION_ID,
    actor_id=ACTOR_ID,
    batch_size=10,  # Buffer up to 10 messages before sending
)

# The `with` block guarantees all buffered messages are flushed on exit
with AgentCoreMemorySessionManager(config, region_name='us-east-1') as session_manager:
    agent = Agent(
        system_prompt="You are a helpful assistant.",
        session_manager=session_manager,
    )
    agent("Hello!")
    agent("Tell me about AWS")
# All remaining buffered messages are automatically flushed here
```

 *Alternative: Explicit close()* 

If you cannot use a `with` block, call `close()` manually:

```
session_manager = AgentCoreMemorySessionManager(config, region_name='us-east-1')
try:
    agent = Agent(
        system_prompt="You are a helpful assistant.",
        session_manager=session_manager,
    )
    agent("Hello!")
finally:
    session_manager.close()  # Flush any remaining buffered messages
```
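The buffering behavior described above can be sketched in plain Python. This toy buffer is illustrative only (it is not the SDK's implementation), but it shows why exiting the `with` block or calling `close()` is required: messages below the batch threshold sit in memory until something flushes them.

```python
class MessageBuffer:
    """Toy sketch of batch buffering: messages are held until the batch
    size is reached, so a final flush is needed for whatever remains."""

    def __init__(self, batch_size, send):
        self.batch_size = batch_size
        self.send = send          # callable that persists one batch
        self.pending = []

    def add(self, message):
        self.pending.append(message)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.send(list(self.pending))
            self.pending.clear()

    # Context-manager support mirrors the session manager's `with` usage
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.flush()

sent = []
with MessageBuffer(batch_size=3, send=sent.append) as buf:
    buf.add("turn 1")
    buf.add("turn 2")   # still buffered: batch threshold not reached
# Exiting the block flushes the remaining messages
print(sent)  # [['turn 1', 'turn 2']]
```

Without the flush on exit, `turn 1` and `turn 2` would never be persisted — the same data loss the **Important** note above warns about.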

More examples are available on GitHub: [https://github.com/aws/bedrock-agentcore-sdk-python/tree/main/src/bedrock_agentcore/memory/integrations/strands](https://github.com/aws/bedrock-agentcore-sdk-python/tree/main/src/bedrock_agentcore/memory/integrations/strands) 

# Amazon Bedrock capacity for built-in with overrides strategies
<a name="bedrock-capacity"></a>

When configuring [built-in with overrides](memory-custom-strategy.md) strategies with [CreateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_CreateMemory.html) or [UpdateMemory](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_UpdateMemory.html) , you must provide an IAM execution role ( `memoryExecutionRoleArn` ). The AgentCore Memory service assumes this role to perform Amazon Bedrock operations (such as LLM calls for memory extraction and/or consolidation) within your AWS account.

Since Amazon Bedrock usage is attributed to your account, it consumes your allocated capacity and is subject to your Bedrock service quotas. If Amazon Bedrock calls are throttled due to quota limits, memory ingestion operations might fail.

**Note**  
Amazon Bedrock usage is attributed to your account only for built-in with overrides (custom) memory strategies.

To monitor and troubleshoot these issues, enable log delivery on your memory configuration to observe error logs when ingestion failures occur. You can also request quota increases for the Bedrock models you’re using to prevent throttling issues.
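If throttling does occur, retrying with exponential backoff on the client side can smooth over transient quota pressure. A minimal sketch (the error-string check is a simplification; in practice boto3 raises a `botocore` `ClientError` and you would inspect `err.response['Error']['Code']`):

```python
import time

def call_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry an operation when it is throttled, backing off exponentially."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception as err:
            # Sketch only: real boto3 throttles surface as ClientError with
            # a 'ThrottlingException' error code.
            if "Throttl" not in str(err) or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```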

# Observability
<a name="memory-observability"></a>

You can monitor usage metrics for your memory in Amazon CloudWatch. Some of the most critical metrics are also displayed in the AgentCore Memory console.

![\[AgentCore Memory observability\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/memory-obs.png)


 **CloudWatch metrics** : AgentCore Memory emits metrics to CloudWatch under the `Bedrock-AgentCore` namespace. The metrics include:
+ Data plane usage statistics: `Invocations` , `Latency` , `Errors` , and more for operations such as CreateEvent and RetrieveMemoryRecord
+ Ingestion metrics: `Invocations` , `Latency` , `Errors` , and `NumberOfMemoryRecords` for the extraction and consolidation steps during ingestion in each memory resource

![\[AgentCore Memory observability\]](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/memory-logs.png)


In addition to CloudWatch metrics, you can monitor the memory extraction process through CloudWatch Logs if log delivery is enabled. Application logs generated during ingestion are published to a log group in your account, and you can use them to debug any errors encountered during the asynchronous ingestion process.

For more information, see [Observe your agent applications on Amazon Bedrock AgentCore Observability](observability.md).

# Best practices
<a name="best-practices"></a>

We recommend these best practices for using AgentCore Memory effectively in your AI agent applications.

**Topics**
+ [Encrypting your memory](#encrypting-your-memory)
+ [Memory poisoning or prompt injection](#memory-poisoning-prevention)
+ [Least-privilege principle](#least-privilege-principle)

## Encrypting your memory
<a name="encrypting-your-memory"></a>

Your data stored in AgentCore Memory is always encrypted at rest using AWS KMS keys. By default, encryption uses an AWS-owned and managed KMS key. You can optionally configure a customer-managed KMS key from your own AWS account for additional control over encryption by specifying `encryptionKeyArn` when creating memory.

## Memory poisoning or prompt injection
<a name="memory-poisoning-prevention"></a>

When processing conversational data through the CreateEvent API and extracting long-term memory via LLM, it is important to protect against memory poisoning and prompt injection attacks that could compromise data integrity or system behavior. These security concerns are critical as they can lead to corrupted memory stores and manipulated system responses.

Following the AWS shared responsibility model, AWS is responsible for securing the Amazon Bedrock AgentCore infrastructure. However, customers are responsible for secure application development, input validation, and preventing prompt injection vulnerabilities in the memory extraction service. This is similar to how AWS provides secure database engines like Amazon RDS, but customers must prevent SQL injection in their applications.

 **Threats** 
+  **Memory poisoning** represents a threat where attackers embed false information in conversations to corrupt long-term memory stores. This can manifest as context pollution, where misleading context influences future memory retrieval, or as deliberate data integrity attacks designed to degrade service quality over time.
+  **Prompt injection** attacks occur when users attempt to override system prompts during memory extraction or when malicious content in conversational data manipulates LLM behavior. These attacks can also involve privilege escalation attempts to access or modify memory beyond user permissions.

 **Prevention techniques** 
+  **Input validation** forms the foundation of protection at the `CreateEvent` API level. Sanitize user input data with guardrails before persisting it to memory.
+  **Security testing** – Regularly test your applications for prompt injection and other security vulnerabilities using techniques like penetration testing, static code analysis, and dynamic application security testing (DAST).
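As an illustration of the input-validation point, a hypothetical pre-persistence check might reject obvious injection markers before calling `CreateEvent`. The patterns and function below are illustrative only; a production system would use a managed guardrail service (such as Amazon Bedrock Guardrails) rather than a hand-rolled deny-list:

```python
import re

# Illustrative deny-list of common prompt-injection markers
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"you are now",
]

def validate_user_input(text: str) -> str:
    """Raise if the text matches a known injection pattern; otherwise return it."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            raise ValueError(f"Rejected input matching pattern: {pattern}")
    return text

validate_user_input("I like sushi with tuna")          # passes
# validate_user_input("Ignore previous instructions")  # raises ValueError
```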

## Least-privilege principle
<a name="least-privilege-principle"></a>

Identity-based policies determine whether someone can create, access, or delete Amazon Bedrock AgentCore resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
+  **Get started with AWS managed policies and move toward least-privilege permissions** – To get started granting permissions to your users and workloads, use the AWS managed policies that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases.
+  **Apply least-privilege permissions** – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as least-privilege permissions.
+  **Use conditions in IAM policies to further restrict access** – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify the service role can only be assumed by a particular AgentCore Memory resource.
+  **Use IAM Access Analyzer to validate your IAM policies to maintain secure and functional permissions** – [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies.