

# Amazon Bedrock AgentCore
<a name="AgentCore-Agents"></a>

CloudWatch provides curated observability views for Amazon Bedrock AgentCore. These views help you monitor the operational health of your managed agents, and you can use them together with prompt tracing to debug and monitor AI agent performance. To monitor your AI agents, enable **Observability** in Amazon Bedrock AgentCore. Amazon Bedrock AgentCore includes a collection of modular services such as agents, memory, tools, and gateway. You can use these services together or independently to build your AI applications.

For more information on the modular services, see [Amazon Bedrock AgentCore](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/what-is-bedrock-agentcore.html).

**Topics**
+ [Getting started](AgentCore-GettingStarted.md)
+ [Agents](Agents.md)
+ [Memory](Memory.md)
+ [Built-in tools](Built-in-tools.md)
+ [Gateways](Gateways.md)
+ [Identity](Identity.md)

# Getting started
<a name="AgentCore-GettingStarted"></a>

Amazon Bedrock AgentCore provides built-in metrics, logs, and traces to monitor the performance of your AgentCore modular services. You can view this data in Amazon CloudWatch. To access the full range of observability data from all AgentCore module services, instrument your code using the AWS Distro for OpenTelemetry (ADOT) SDK.

## Add observability to your agentic resources
<a name="add-observability-agentic-resources"></a>

Before you begin, enable CloudWatch Transaction Search. For more information, see [Enable Transaction Search](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Enable-TransactionSearch.html).

### Enable observability for AgentCore Runtime hosted agents
<a name="enable-observability-agentcore-runtime"></a>

You can host your agents on AgentCore Runtime, a secure, serverless runtime purpose-built for deploying and scaling dynamic AI agents and tools. AgentCore Runtime supports any open-source framework (including LangGraph, CrewAI, and Strands Agents), any protocol, and any model.

To enable observability for AgentCore Runtime hosted agents, see [Configure custom observability](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability-configure.html#observability-configure-custom).

For a step-by-step tutorial, see [Enabling observability for AgentCore Runtime hosted agents](https://aws.github.io/bedrock-agentcore-starter-toolkit/user-guide/observability/quickstart.html#enabling-observability-for-agentcore-runtime-hosted-agents).

### Enable observability for non-AgentCore hosted agents
<a name="enable-observability-non-agentcore"></a>

You can host your agents outside of AgentCore and bring your observability data into CloudWatch for end-to-end monitoring in one location.

To enable observability for non-AgentCore Runtime hosted agents, see [Configure third-party observability](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability-configure.html#observability-configure-3p).

For a step-by-step tutorial, see [Enabling observability for non-AgentCore hosted agents](https://aws.github.io/bedrock-agentcore-starter-toolkit/user-guide/observability/quickstart.html#enabling-observability-for-non-agentcore-hosted-agents).

### Enable observability for AgentCore memory, gateway, and built-in tool resources
<a name="enable-observability-agentcore-resources"></a>

You can gain visibility into the metrics and traces of AgentCore modular services. For more information, see [Configure CloudWatch observability](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability-configure.html#observability-configure-cloudwatch).

### Enable AgentCore Evaluations
<a name="enable-observability-agentcore-evaluations"></a>

You can gain visibility into AgentCore Evaluations. AgentCore Evaluations provides capabilities to monitor and assess the performance, quality, and reliability of your AI agents. To enable observability for AgentCore Evaluations, see [AgentCore evaluations](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/evaluations.html).

# View observability data in CloudWatch
<a name="view-observability-data-cloudwatch"></a>

After you enable observability for your agentic resources, you can view the collected data in CloudWatch.

## View the GenAI Observability dashboard
<a name="view-genai-observability-dashboard"></a>

1. Open the CloudWatch console.

1. Under the GenAI Observability dashboard, view data related to model invocations and agents on Amazon Bedrock AgentCore.

1. In the Amazon Bedrock AgentCore sub-menu, you can choose the following views:
   + **Agents View** – Lists all your agents, both on and off runtime. Choose an agent to view runtime metrics, sessions, traces, and evaluations specific to that agent
   + **Sessions View** – Navigate across all sessions associated with agents
   + **Traces View** – View traces and span information for agents. Choose a trace to explore the trace trajectory and timeline

## View logs
<a name="view-logs"></a>

1. Open the CloudWatch console.

1. In the navigation pane, expand **Logs** and choose **Log groups**.

1. Search for your agent's log group:
   + Standard logs (stdout/stderr) – `/aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>/[runtime-logs] <UUID>`
   + OTEL structured logs – `/aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>/runtime-logs`
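When you look up log groups with the CloudWatch Logs API, you can assemble the structured-log group name from the pattern above. A minimal sketch, assuming a hypothetical agent ID and endpoint name:

```python
def otel_log_group(agent_id: str, endpoint_name: str) -> str:
    """Build the OTEL structured-log group name for an AgentCore Runtime
    hosted agent, following the pattern shown above."""
    return f"/aws/bedrock-agentcore/runtimes/{agent_id}-{endpoint_name}/runtime-logs"

# Hypothetical identifiers for illustration only; substitute your own.
print(otel_log_group("my_agent-abc123", "DEFAULT"))
# With boto3 (not executed here), you could then confirm the group exists:
# boto3.client("logs").describe_log_groups(
#     logGroupNamePrefix=otel_log_group("my_agent-abc123", "DEFAULT"))
```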

## View traces and spans
<a name="view-traces-spans"></a>

1. Open the CloudWatch console.

1. In the navigation pane, choose **Transaction Search**.

1. Navigate to `/aws/spans/default`.

1. Filter by service name or other criteria.

1. Choose a trace to view the detailed execution graph.
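You can also query the span data in `aws/spans` with CloudWatch Logs Insights. The following is a sketch of a query that filters by service name; the field names and the service value are assumptions, so adjust them to match your instrumentation:

```
fields @timestamp, duration, status.code
| filter attributes.aws.local.service = "my-agent-service"
| sort @timestamp desc
| limit 20
```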

## View metrics
<a name="view-metrics"></a>

1. Open the CloudWatch console.

1. In the navigation pane, choose **Metrics**.

1. Navigate to the **bedrock-agentcore** namespace.

1. Explore the available metrics.
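You can also retrieve these metrics programmatically with the `GetMetricData` API. The following is a minimal sketch that builds the query structure; the metric name `Invocations` is an assumption, so browse the namespace in the console first to confirm which metrics your resources emit:

```python
import datetime

# Hypothetical metric name for illustration; check the bedrock-agentcore
# namespace in your account for the actual metrics and dimensions.
query = [{
    "Id": "invocations",
    "MetricStat": {
        "Metric": {
            "Namespace": "bedrock-agentcore",
            "MetricName": "Invocations",
        },
        "Period": 300,   # 5-minute buckets
        "Stat": "Sum",
    },
}]

# With boto3 (not executed here):
# cw = boto3.client("cloudwatch")
# resp = cw.get_metric_data(
#     MetricDataQueries=query,
#     StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
#     EndTime=datetime.datetime.utcnow(),
# )
print(query[0]["MetricStat"]["Metric"]["Namespace"])
```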

# Protect sensitive data
<a name="mask-sensitive-data"></a>

Amazon CloudWatch Logs uses data protection policies to identify sensitive data and define actions to protect that data. You use data identifiers to select the sensitive data of interest. Amazon CloudWatch Logs then detects the sensitive data using machine learning and pattern matching. You can define audit and masking operations to log sensitive data findings and mask sensitive data when viewing log events.

For more information, see [Protecting sensitive log data with masking](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-data-protection-policies.html).

You can configure data protection for Amazon Bedrock AgentCore at the **account level** or at the **log group level**. With account-level data protection, data protection rules are applied to all logs in your account. With log group-level data protection, data protection rules can be applied to specific log groups in your account. This gives you granular control over how PII data is masked in your account.

**To configure data protection at the account level**

1. Open the Amazon CloudWatch console.

1. In the navigation pane, choose **Settings**.

1. Choose the **Logs** tab.

1. Choose **Configure the Data protection account policy**.

1. Specify the data identifiers that are relevant to your data.
   + To use a predefined data identifier, in the **Managed data identifiers** drop-down, select the data identifiers that are relevant to your data.
   + To use a custom data identifier, choose **Add custom data identifier**, and then specify a name for the identifier and a Regex pattern for the data to protect.

1. (*Optional*) Choose a destination for the audit findings.
   + To send audit findings to a CloudWatch log, choose **Amazon CloudWatch Logs** and then select the destination log group.
   + To send audit findings to a Firehose stream, choose **Amazon Data Firehose** and then select the destination Firehose stream.
   + To send audit findings to an Amazon S3 bucket, choose **Amazon S3** and then select the destination Amazon S3 bucket.

1. Choose **Activate data protection**.

**To configure data protection at the log group level**

1. Open the Amazon CloudWatch console.

1. In the navigation pane, choose **Logs**, **Log Management**.

1. Choose the **Log groups** tab, select the log group you want to enable data protection on, and then choose **Create data protection policy**.

1. Specify the data identifiers that are relevant to your data.
   + To use a predefined data identifier, in the **Managed data identifiers** drop-down, select the data identifiers that are relevant to your data.
   + To use a custom data identifier, choose **Add custom data identifier**, and then specify a name for the identifier and a Regex pattern for the data to protect.

1. (*Optional*) Choose a destination for the audit findings.
   + To send audit findings to a CloudWatch log, choose **Amazon CloudWatch Logs** and then select the destination log group.
   + To send audit findings to a Firehose stream, choose **Amazon Data Firehose** and then select the destination Firehose stream.
   + To send audit findings to an Amazon S3 bucket, choose **Amazon S3** and then select the destination Amazon S3 bucket.

1. Choose **Activate data protection**.
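As an alternative to the console steps above, you can attach a policy programmatically. The following is a minimal sketch of a data protection policy document of the kind the CloudWatch Logs `PutDataProtectionPolicy` API accepts; the audit log group name is hypothetical, and you should confirm the data identifier ARNs against the CloudWatch Logs managed data identifier list:

```python
import json

# Sketch of a data protection policy document. The audit statement records
# findings, and the deidentify statement masks matched data when log events
# are viewed. The audit log group name below is hypothetical.
identifiers = ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"]

policy = {
    "Name": "agentcore-data-protection",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit-policy",
            "DataIdentifier": identifiers,
            "Operation": {
                "Audit": {
                    "FindingsDestination": {
                        "CloudWatchLogs": {"LogGroup": "data-protection-audit"}
                    }
                }
            },
        },
        {
            "Sid": "redact-policy",
            "DataIdentifier": identifiers,
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}
policy_json = json.dumps(policy)

# With boto3 (not executed here), attach it to a specific log group:
# boto3.client("logs").put_data_protection_policy(
#     logGroupIdentifier="<your log group>",
#     policyDocument=policy_json,
# )
```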

# Agents
<a name="Agents"></a>

You can use *Agents* to monitor agent performance, track their decision-making processes, analyze conversation flows, and troubleshoot issues through comprehensive metrics and tracing. This includes monitoring both Amazon Bedrock AgentCore managed agents and self-hosted or third-party agents that emit telemetry data to CloudWatch.

You can analyze various agents and their associated interactions under **Agent view**, **Sessions view**, and **Traces view**. For more information, see [Understand observability for agentic resources in AgentCore](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability-telemetry.html).

**Topics**
+ [Agent view](agent-view.md)
+ [Session view](session-view.md)
+ [Traces view](traces-view.md)

# Agent view
<a name="agent-view"></a>

The *Agent view* provides a curated dashboard for your account's agents. You can view data from agents hosted on AWS native services like AgentCore Runtime, Lambda, or Amazon EC2. The view also displays agents that emit telemetry to CloudWatch.

**Overview**

The metrics and dashboards show data from sampled agent spans. For information about agent spans, see [Spans](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability-telemetry.html#agent_spans).

The following Agent metrics are supported:
+ Agents/Endpoints – Number of agents and aliases instrumented and emitting spans
+ Sessions – Number of sessions created by instrumented agents emitting spans. A session is similar to a conversation and contains the broad context
+ Traces – Number of traces created by instrumented agents emitting spans. A trace is an individual request-response cycle within a session
+ Error rate – Percentage of errors in agent interactions
+ Throttle rate – Percentage of throttled agent interactions

Choose **View details** to see the Agent metrics in graphs.

![\[Agents view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/GenAI_AgentCoreGraphs.png)


**Runtime metrics**

The Runtime metrics and dashboards display data from the Runtime primitive. Using this primitive, you can host your agents on the Amazon Bedrock AgentCore runtime. For more information, see [Creating an AgentCore Runtime](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agents-runtime-create.html).

AgentCore Runtime supports these metrics:
+ Runtime Agents/Aliases – Tracks number of agents and aliases hosted on AgentCore Runtime
+ Runtime sessions – Tracks number of sessions created by agents running in AgentCore Runtime. A session is similar to a conversation and contains the broad context of the entire interaction flow. Useful for monitoring overall platform usage, capacity planning, and understanding user engagement patterns
+ Runtime invocations – Total number of requests made to the Data Plane API. Each API call counts as one invocation, regardless of the request payload size or response status
+ Runtime errors – The number of system and user errors. For system and user error definitions, see [AgentCore provided runtime metrics](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/observability-runtime-metrics.html)
+ Runtime throttles – The number of requests throttled by the service due to exceeding allowed TPS (Transactions Per Second). These requests return ThrottlingException with HTTP status code 429. Monitor this metric to determine if you need to review your service quotas or optimize request patterns

View metric changes over time in the default dashboard. Expand **View details** to display metric graphs.

![\[Runtime view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/GenAI_Runtime.png)


**Agents**

Agents are components that collect and send monitoring data from your applications. The Agents table displays all agents configured in your account. These agents can be hosted on AWS native services like AgentCore Runtime, Lambda, or Amazon EC2. The table also displays other agents that are instrumented to emit telemetry to CloudWatch.

You can use **Filter agents** to find a specific agent that you want to examine, or sort by the column names to find the required agent. Select the gear icon to show or hide additional columns.

![\[Runtime agents view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/GenAI_agents.png)


You can view the details of the Agent by expanding the agent name.

![\[Runtime agents overview\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/GenAI_agentsdetails_new.png)


**Agent details - Overview**

The Overview tab displays automatic dashboards for your agent metrics. These metrics come from sampled spans and Runtime metrics (when the agent uses AgentCore Runtime).

The **Evaluators** dashboard includes insights derived from spans with evaluations enabled.
+ Top deltas in evaluator scores — Shows the agent evaluators that experienced the most change since the last period based on the time period you selected.
+ Evaluation configuration metrics — Show the operational status metrics for the agent evaluators, including the number of times the evaluations were executed and the number of errors encountered.

To edit an evaluation configuration using the Amazon Bedrock AgentCore console, choose the link in the **Evaluator** or **Evaluation configuration** column. To review the evaluator results, choose a score in the **Avg. score** column. To view all evaluations for the agent, choose the **Evaluations** tab. For more information, see [Agent details - Evaluations](session-traces-evaluations.md).

The **Agent metrics** dashboard includes metrics derived from sampled spans:
+ Sessions and Traces – Count of sessions and traces for this agent
+ FM token usage – Total foundation model token consumption. You can filter the chart by a particular foundation model
+ System and client errors – Count of system errors during request processing. High levels of server-side errors can indicate potential infrastructure or service issues that require investigation. Client errors are errors resulting from invalid requests. High levels of client-side errors can indicate issues with request formatting or permissions
+ Errors and latency by span – The error rates and latency by a particular span. Note that a span can appear in many agents
+ Throttles – Number of requests throttled by the service due to exceeding allowed TPS (Transactions Per Second)
+ Inbound Auth:Authorization and access token calls – Number of incoming authentication requests processed by the agent, including authorization checks and access token validations from external clients or services
+ Outbound Auth:Usage distribution – Distribution pattern of outbound authentication methods used by the agent, showing the frequency and types of authentication mechanisms employed when accessing external services

The **Runtime metrics** dashboard includes metrics that AgentCore Runtime automatically generates:
+ Runtime sessions and invocations – Count of sessions and invocations that this particular agent has generated while being hosted on Runtime
+ Runtime latency – Latency of requests by agents hosted on Runtime
+ Runtime throttles – Number of requests throttled by the service due to exceeding allowed TPS (Transactions Per Second)

# Agent details - Sessions
<a name="session-sessions-view"></a>

An agent can have several sessions. View sessions in the **Sessions** tab. Use **Filter sessions** or sort the columns to find the required session.

Choose the **Session ID** to view the session summary metrics and the list of traces belonging to that session. Session metrics include:
+ Traces – Number of traces belonging to the session
+ Server errors – Count of system errors during request processing. High levels of server-side errors can indicate potential infrastructure or service issues that require investigation
+ Client errors – Errors resulting from invalid requests. High levels of client-side errors can indicate issues with request formatting or permissions
+ Throttles – Number of requests throttled relevant to this session due to exceeding allowed TPS (Transactions Per Second)
+ Session details – Metadata about the session, such as start time, end time, and session ID

To analyze the list of traces in a session, choose **Filter traces** to narrow the list, or sort the table columns to find the particular trace you want to investigate.

After you select a trace, the right pane displays the details of the trace. For each trace, you can see the trace summary, spans, and trace content details.

Under **Trace summary**, you can view the following metrics:

**Note**  
Summary page fields are consistent across **Agent view**, **Sessions view**, and **Traces view**.
+ Spans – Number of spans within a trace
+ Server errors – Count of system errors during request processing. High levels of server-side errors can indicate potential infrastructure or service issues that require investigation
+ Client errors – Errors resulting from invalid requests. High levels of client-side errors can indicate issues with request formatting or permissions
+ Throttles – Number of requests throttled relevant to this session due to exceeding allowed TPS (Transactions Per Second)
+ P95 span latency – The 95th-percentile latency across all invocations of this particular span. Note that a span can be used across many agents
+ Trace details – Metadata about the trace, such as start time, end time, and trace ID

![\[Span view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/GenAI_span.png)


Choose **Timeline** to view the duration of each span and to understand the span that took the longest and contributed to a slow response.

![\[Trajectory view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/GenAI_agenttrajectory.png)


Choose **Trajectory** to analyze the interconnected relationships of the spans and the subsequent calls made from those spans.

Under **Spans**, select an individual span event to review the span data in its original form. For granular troubleshooting, select the **Events** tab to examine model inputs and outputs.

# Agent details - Traces
<a name="session-traces-view"></a>

Each agent might have multiple traces. View trace details in the **Traces** tab. Choose **Filter traces** or sort the columns to find the required trace.

![\[Trace summary view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Trace-summary.png)


# Agent details - Evaluations
<a name="session-traces-evaluations"></a>

Evaluations provides continuous quality monitoring metrics for your AI agents. You can use the information provided by the dashboard to assess the performance, quality, and reliability of your AI agents. 

Instead of relying on simulated test cases, evaluations capture real user sessions and agent interactions, providing a comprehensive view of agent performance, from input to final output. With agent evaluations, you can define sampling rules to evaluate only a percentage of the sessions or traces, and then apply a variety of evaluators to assess and score an AI agent's operational performance. The resulting assessments and scores are displayed in the Evaluations dashboard, allowing you to monitor trends, identify potential quality issues, set alarms, and investigate and diagnose potential issues.

The Evaluations dashboard lists all of the evaluations that have been enabled and configured for the selected agent. For more information about configuring evaluations for an agent, see [AgentCore evaluations](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/evaluations.html). You can expand each evaluation to view the sessions, traces, and spans that were evaluated.

![\[Evaluations\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/evals_overview.png)


**Topics**
+ [Evaluations details](#session-traces-evaluations-details)
+ [Evaluations graphs](#session-traces-evaluations-graphs)
+ [Work with evaluation results](#session-traces-evaluations-raw-results)

## Evaluations details
<a name="session-traces-evaluations-details"></a>

For each evaluation, the dashboard includes the following sections:

------
#### [ Evaluation configuration metrics ]

Provides metrics for the overall evaluation configuration. An evaluator defines how to assess a specific aspect of an AI agent's performance. To view more details about an evaluator, choose its name in the **Evaluator** column. To view a bar chart and analyze trends for an evaluator, choose the value in the **Count** column.

![\[Evaluation configuration metrics\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/evals_01.png)


------
#### [ Session evaluations ]

Provides evaluation results for evaluators at the session level. A session represents a logical grouping of related interactions from a single user or workflow. A session can contain one or more traces. You can choose a session to filter down to the list of traces within that session in the **Trace evaluations** section.

![\[Session evaluations\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/evals_02.png)


------
#### [ Trace evaluations ]

Provides evaluation results for evaluators at the trace level. A trace is a complete record of a single agent execution or request. A trace can contain one or more spans. Choose a trace to view the trace details along with all the evaluators that were run on that trace.

![\[Trace evaluations\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/evals_03.png)


------
#### [ Span evaluations ]

Provides evaluation results for evaluators at the span level. A span represents the individual operations performed during that execution. Choose a span to view the span details along with all the operations performed during that span.

![\[Span evaluations\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/evals_04.png)


------

## Evaluations graphs
<a name="session-traces-evaluations-graphs"></a>

The Evaluations dashboard also includes a bar graph for each evaluator. The graphs show the trends for each evaluator over time, and enable you to set alarms for specific metric values. To set an alarm, choose a bar in the graph, and then choose the **Alarm** (bell) icon. For more information, see [Using Amazon CloudWatch alarms](CloudWatch_Alarms.md).

![\[Evaluations graphs\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/evals_graphs.png)


## Work with evaluation results
<a name="session-traces-evaluations-raw-results"></a>

If you need direct access to your evaluation results data, or if you want to create custom visualizations or work outside the AgentCore Evaluations console, you can access your evaluation results directly through CloudWatch Logs, CloudWatch Metrics, and CloudWatch dashboards.

**Topics**
+ [Accessing evaluation results in CloudWatch Logs](#accessing-evaluation-results-logs)
+ [Accessing evaluation metrics in CloudWatch Metrics](#accessing-evaluation-metrics)
+ [Creating Custom Dashboards](#creating-custom-dashboards)
+ [Setting alarms on evaluation metrics](#setting-alarms-evaluation-metrics)
+ [Additional Resources](#additional-resources)

### Accessing evaluation results in CloudWatch Logs
<a name="accessing-evaluation-results-logs"></a>

Your evaluation results are automatically published to CloudWatch Logs in Embedded Metric Format (EMF).

**To find your evaluation results log group**

1. Open the CloudWatch console.

1. In the navigation pane, expand **Logs** and choose **Log groups**.

1. Search for or navigate to the log groups with prefix: `/aws/bedrock-agentcore/evaluations/`.

1. Within this log group, the log events contain the evaluation results.

For more information about working with log groups and querying log data, see [Working with Log Groups and Log Streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) and [Analyzing Log Data with CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
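Because the results are EMF documents, each log event carries both the metric metadata and the metric values, so you can process events directly. The following is a sketch of pulling the declared metrics out of an event; the `Score` metric and `Correctness` evaluator names are hypothetical, so substitute the names your evaluators actually emit:

```python
import json

# A hypothetical evaluation-result log event in Embedded Metric Format (EMF);
# actual metric names and dimensions depend on your evaluator configuration.
event = json.dumps({
    "_aws": {
        "Timestamp": 1700000000000,
        "CloudWatchMetrics": [{
            "Namespace": "Bedrock AgentCore/Evaluations",
            "Dimensions": [["EvaluatorName"]],
            "Metrics": [{"Name": "Score", "Unit": "None"}],
        }],
    },
    "EvaluatorName": "Correctness",
    "Score": 0.92,
})

def extract_metrics(raw: str) -> dict:
    """Return the metric values declared in the EMF metadata of a log event."""
    doc = json.loads(raw)
    names = [m["Name"]
             for block in doc["_aws"]["CloudWatchMetrics"]
             for m in block["Metrics"]]
    return {name: doc[name] for name in names}

print(extract_metrics(event))
```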

### Accessing evaluation metrics in CloudWatch Metrics
<a name="accessing-evaluation-metrics"></a>

Evaluation results metrics are automatically extracted from the Embedded Metric Format (EMF) logs and published to CloudWatch Metrics.

**To find your evaluation metrics**

1. Open the CloudWatch console.

1. In the navigation pane, choose **Metrics** > **All metrics**.

1. Select the **Bedrock AgentCore/Evaluations** namespace.

1. Browse available metrics by dimensions.

For more information about viewing and working with metrics, see [Using CloudWatch Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) and [Graphing Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/graph_metrics.html).

### Creating Custom Dashboards
<a name="creating-custom-dashboards"></a>

You can create custom dashboards to visualize your evaluation metrics alongside other operational metrics.

**To create a dashboard with evaluation metrics**

1. In the CloudWatch console, choose **Dashboards** from the navigation pane.

1. Choose **Create dashboard**.

1. Add widgets and select metrics from the **Bedrock AgentCore/Evaluations** namespace.

1. Customize the time range, statistic, and visualization type for your needs.

For detailed instructions, see [Creating and Working with Custom Dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create_dashboard.html) and [Using CloudWatch Dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html).

### Setting alarms on evaluation metrics
<a name="setting-alarms-evaluation-metrics"></a>

You can set alarms to notify you when evaluation metrics cross thresholds that you specify, such as when correctness drops below acceptable levels.

**To create an alarm on evaluation metrics**

1. In the CloudWatch console, choose **Alarms** > **All alarms**.

1. Choose **Create alarm**.

1. Choose **Select metric** and navigate to the **Bedrock AgentCore/Evaluations** namespace.

1. Select the metric you want to monitor.

1. Configure the threshold conditions and notification actions. You can use an anomaly detection threshold instead of specifying a static number.

For detailed instructions, see [Using CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Alarms.html) and [Creating a CloudWatch Alarm Based on a Static Threshold](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html).
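The same alarm can be created programmatically with the `PutMetricAlarm` API. The following is a minimal sketch of the request parameters; the metric and dimension names are assumptions, so take the actual names from the **Bedrock AgentCore/Evaluations** namespace in your account:

```python
# Hypothetical metric and evaluator names for illustration only.
# Alarms when the hourly average score falls below 0.8.
alarm = {
    "AlarmName": "agent-correctness-low",
    "Namespace": "Bedrock AgentCore/Evaluations",
    "MetricName": "Score",
    "Dimensions": [{"Name": "EvaluatorName", "Value": "Correctness"}],
    "Statistic": "Average",
    "Period": 3600,
    "EvaluationPeriods": 1,
    "Threshold": 0.8,
    "ComparisonOperator": "LessThanThreshold",
    "TreatMissingData": "notBreaching",
}

# With boto3 (not executed here):
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
print(alarm["AlarmName"])
```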

### Additional Resources
<a name="additional-resources"></a>
+ [CloudWatch Embedded Metric Format](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatch-Logs-Monitoring-CloudWatch-Metrics.html)
+ [CloudWatch Logs Insights Query Syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html)
+ [Creating Composite Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html)

# Session view
<a name="session-view"></a>

The **Sessions** view shows the list of all the sessions associated with all agents in your account. Choose **Filters** or sort by columns to find a specific session. Choose a session under **Session ID** to view the session details.

![\[Session view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/GenAI_sessions.png)


You can view the Session summary metrics and the list of traces belonging to that session. Session metrics include:
+ Traces – Number of traces belonging to the session
+ Server errors – Count of system errors during request processing. High levels of server-side errors can indicate potential infrastructure or service issues that require investigation
+ Client errors – Errors resulting from invalid requests. High levels of client-side errors can indicate issues with request formatting or permissions
+ Throttles – Number of requests throttled relevant to this session due to exceeding allowed TPS (Transactions Per Second)
+ Session details – Metadata about the session, such as start time and session ID

**Note**  
Summary page fields are consistent across **Agent view**, **Sessions view**, and **Traces view**. For more information on summary fields, see [Agent view](agent-view.md).

Under **Traces** for a session, choose **Filter traces** to find the trace you want to review. After you choose a trace, view the trace details in the right-pane. You can view the trace summary, spans, and trace content for the selected trace.

# Traces view
<a name="traces-view"></a>

The **Traces** view lists all traces from your agents in this account. To work with traces:

1. Choose **Filter traces** to search for specific traces.

1. Sort by column name to organize results.

1. Under **Actions**, select **Logs Insights** to refine your search by querying across your log and span data or select **Export selected traces** to export.

**Note**  
Summary page fields are consistent across **Agent view**, **Sessions view**, and **Traces view**. For more information on summary fields, see [Agent view](agent-view.md).

# Memory
<a name="Memory"></a>

Understand how your agents store, retrieve, and use contextual information to provide personalized experiences. For more information on Amazon Bedrock AgentCore Memory, see [Add memory to your AI agent](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/memory.html). Memory observability includes three key monitoring areas:
+ **Memories** – Monitor memory storage and retrieval patterns
+ **Memory sessions** – Monitor memory usage within individual sessions
+ **Traces view** – Access detailed trace information for memory operations

![\[Memory view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Memory.png)


To understand short-term and long-term memory, see [Add memory to your AI agent](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/memory.html).

Choose **View details** to view the memory metrics in graphs.

![\[Memory metrics view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Memory_metrics.png)


Under **Memories**, you can view all the memories associated with your account. Choose a memory **Name** to view the memory details.

![\[Memory metrics view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Memory_details.png)


On the **Memory details** page, you will see the following tabs:
+ **Overview** – Displays comprehensive memory performance metrics and usage patterns for the selected memory resource
  + **Associated agents** – You can view the agents using the memory. Choose an **Agent/Endpoint** to view the agent overview page.
  + **Memory API invocations** – Total number of API calls made to memory operations including storage, retrieval, and update requests. This metric helps track memory system usage and capacity planning
  + **Extracted memory records** – Count of memory records successfully extracted and processed from agent interactions. This includes contextual information, user preferences, and conversation history that agents store for personalization
  + **Server errors** – Count of system errors during memory operations. High levels indicate potential infrastructure issues with memory storage or retrieval systems that require investigation
  + **Client errors** – Errors resulting from invalid memory requests, malformed data, or permission issues. High client error rates may indicate problems with agent memory integration or data formatting
  + **Throttling** – Number of memory requests throttled due to exceeding allowed transaction limits. Monitor this metric to determine if memory access patterns need optimization or if service quotas require adjustment
  + **Latency** – Response time for memory operations including storage and retrieval requests. Track P50, P90, and P99 latencies to identify performance bottlenecks and optimize memory access patterns
+ **Memory sessions** – You can view the sessions that contain short-term memory from agent interactions. Under **Memory sessions**, choose a **Session ID** to view the session dashboard.
+ **Traces** – Displays the traces for agents. Under **Traces**, choose a **Trace ID** to view the traces that invoke a specific memory. You can use the traces dashboard to examine agent memory usage and final responses.

**Note**  
The **Memory sessions** and **Traces** tab experience and fields are similar across **Built-in tools**, **Gateways**, **Memory**, and **Identity** observability. For more information on the fields, see [Code interpreter tool](Built-in-tools.md#Code-interpreter-tool).
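The latency percentiles and counters shown on the memory **Overview** tab can also be pulled with the CloudWatch `GetMetricData` API. This is a minimal sketch; the namespace, metric name, and dimension below are assumptions made for illustration, so verify them against the metrics actually emitted in your account.

```python
def latency_queries(memory_id: str, stats=("p50", "p90", "p99")) -> list:
    """Build GetMetricData queries for memory-operation latency percentiles.

    Namespace, metric name, and dimension name are assumed examples;
    confirm them in the CloudWatch metrics console before relying on them.
    """
    return [
        {
            "Id": f"latency_{stat}",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/BedrockAgentCore",  # assumed namespace
                    "MetricName": "Latency",              # assumed metric name
                    "Dimensions": [{"Name": "MemoryId", "Value": memory_id}],
                },
                "Period": 300,  # 5-minute datapoints
                "Stat": stat,
            },
        }
        for stat in stats
    ]


def fetch_latency(memory_id: str) -> list:
    """Retrieve the last three hours of latency percentiles."""
    import datetime
    import boto3  # imported here so the query builder needs no SDK

    cloudwatch = boto3.client("cloudwatch")
    now = datetime.datetime.now(datetime.timezone.utc)
    return cloudwatch.get_metric_data(
        MetricDataQueries=latency_queries(memory_id),
        StartTime=now - datetime.timedelta(hours=3),
        EndTime=now,
    )["MetricDataResults"]
```

Tracking P50 alongside P99 this way makes it easy to spot tail-latency regressions that an average would hide.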

# Built-in tools
<a name="Built-in-tools"></a>

Use CloudWatch to gain visibility into how your agents use built-in tools like `Code interpreter tool` and `Browser use tool` to complete tasks. CloudWatch provides monitoring capabilities for each tool. For more information on Amazon Bedrock AgentCore built-in tools, see [Use Amazon Bedrock AgentCore built-in tools to interact with your applications ](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/built-in-tools.html).

**Topics**
+ [

## Code interpreter tool
](#Code-interpreter-tool)
+ [

## Browser use tool
](#Browser-use-tool)

## Code interpreter tool
<a name="Code-interpreter-tool"></a>

You can use the code interpreter tool for the following:
+ Track code execution success rates, runtime duration, and resource consumption
+ Monitor memory usage and computational resource allocation
+ Analyze code execution patterns and optimization opportunities
+ Observe error rates and debugging information for failed executions
+ Track security sandbox isolation and compliance metrics

![\[Code interpreter tool view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Code_interpreter.png)

+ **Tools** – You can monitor the operations and API calls made through the code interpreter tool. Choose a tool under **Name** to view the dashboard.

  Choose **View details** to view the resource details.  
![\[Code interpreter tool details\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Built-in-tools-details.png)
  + **Started sessions** – Total number of code interpreter sessions initiated by agents. Each session represents a sandboxed environment where agents can execute code, analyze data, and generate outputs. Monitor this metric to track code interpreter usage and capacity planning
  + **Connections** – Number of active connections to code interpreter runtime environments. This includes both successful connections and connection attempts, helping track resource utilization and concurrent usage patterns
  + **Connection errors** – Count of failed connections to code interpreter environments due to system issues, resource constraints, or configuration problems. High connection error rates may indicate infrastructure issues requiring investigation
  + **Connection throttles** – Number of connection requests throttled due to exceeding allowed limits or resource constraints. Monitor this metric to identify when code interpreter usage approaches capacity limits and may require scaling
  + **CPU hours billed** – Total computational time consumed by code interpreter sessions, measured in CPU hours. This metric helps track resource costs and optimize code execution efficiency across agent workloads
  + **Memory hours billed** – Total memory consumption by code interpreter sessions over time, measured in memory hours. Use this metric for cost tracking and to identify memory-intensive code execution patterns that may need optimization
+ **Tool sessions** – View all the connected sessions where the tool was used.  
![\[Tool sessions view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Tool-sessions.png)

  Choose a **Session ID** under **Total sessions** to view the session dashboard.
+ **Traces** – View the sample traces for agents with observability enabled.  
![\[Traces view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Traces-view.png)

  Choose a **Trace ID** under **Traces** to view the trace details.

![\[Trace summary view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Trace-summary.png)
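**CPU hours billed** and **Memory hours billed** are duration-based units. As a rough sketch of how raw session usage maps to those units (assuming a simple seconds-to-hours conversion; actual billing granularity may differ):

```python
def session_resource_hours(
    cpu_seconds: float, memory_gb: float, duration_seconds: float
) -> tuple:
    """Estimate billed CPU hours and memory (GB) hours for one session.

    Assumes a plain seconds-to-hours conversion for illustration; the
    service's real metering granularity may round differently.
    """
    cpu_hours = cpu_seconds / 3600.0
    memory_hours = memory_gb * duration_seconds / 3600.0  # GB-hours
    return cpu_hours, memory_hours
```

For example, a session that consumed 7,200 CPU-seconds while holding 2 GB of memory for 30 minutes works out to 2 CPU hours and 1 memory GB-hour under this conversion.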


## Browser use tool
<a name="Browser-use-tool"></a>

You can use the browser use tool for the following:
+ Track browser navigation patterns, page load times, and interaction success rates
+ Observe browser session duration and resource utilization
+ Analyze fetch errors and troubleshoot browser automation issues
+ Track security sandbox performance and isolation effectiveness

**Note**  
The **Tools**, **Tool sessions**, and **Traces** tab experience and fields are similar to **Code interpreter tool**. For more information on the fields, see [Code interpreter tool](#Code-interpreter-tool).

# Gateways
<a name="Gateways"></a>

Monitor how your agents discover and interact with external tools and services through AgentCore Gateway. For more information on Amazon Bedrock AgentCore Gateway, see [Amazon Bedrock AgentCore Gateway](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/gateway.html). Gateway observability includes comprehensive monitoring across multiple areas:
+ Track API transformation success rates and response times for external service calls
+ Monitor tool discovery patterns and usage frequency across different agents
+ Analyze authentication and authorization flows for third-party service access
+ Observe data transformation accuracy when converting between different API formats
+ Track error rates and retry patterns for external service integrations

![\[Gateways view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Gateways.png)


Expand the **View details** section to view the gateway metrics in graphs.

![\[Gateways metrics view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Gateway_metrics.png)


Under **Gateways**, choose a gateway **Name** to view the dashboard. You can also sort the list of gateways by clicking the column headers in the table.

![\[Gateways details view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Gateways_tile.png)


# Gateway details - Overview
<a name="gateways-overview"></a>

The **Overview** tab provides insights derived from sampled spans after transaction search is enabled.

The **Gateway metrics** section lists all of the agents associated with the selected gateway, and it provides information about the number of sessions, traces, and errors for each associated agent.

![\[Gateway metrics\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/overview_metrics.png)


Additionally, the **Overview** tab includes the following interactive charts.

------
#### [ Invocations ]

The **Invocations** chart shows the total number of API requests made. Each API call counts as one invocation, regardless of the response status.

------
#### [ System and client errors ]

The **System and client errors** chart provides information about the number of API requests that failed with system errors (`5xx` status codes) and client errors (`4xx` status codes, excluding `429`), and the overall error rate as a percentage.

------
#### [ Throttles ]

The **Throttles** chart provides information about the number of API requests that were throttled (`429` status code) by the service, and the overall throttle rate as a percentage.
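The error and throttle percentages on these charts follow directly from the request counts. A small sketch of the arithmetic, assuming client errors exclude `429` responses (which are counted as throttles):

```python
def request_rates(
    ok: int, client_4xx: int, server_5xx: int, throttled_429: int
) -> tuple:
    """Compute the error and throttle percentages the Overview charts show.

    Client errors exclude 429s, which are counted separately as throttles.
    """
    total = ok + client_4xx + server_5xx + throttled_429
    if total == 0:
        return 0.0, 0.0
    error_rate = 100.0 * (client_4xx + server_5xx) / total
    throttle_rate = 100.0 * throttled_429 / total
    return error_rate, throttle_rate
```

With 90 successful requests, 5 client errors, 3 server errors, and 2 throttles, the charts would show an 8% error rate and a 2% throttle rate.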

------
#### [ Invocation latency ]

The **Invocation latency** chart provides information about the average latency for list-tools and call-tool operations. It also shows the average latency between when the service receives a request and when it begins sending the first response token.

------
#### [ Policy decisions over time ]

The **Policy decisions over time** chart provides information about the number of decisions that resulted in `allow` and `deny` authorization actions. To view decisions for a specific policy, select the policy in the drop-down.

![\[Policy decisions over time chart\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/05_policydecisions.png)


------
#### [ Policy decisions: Per policy distribution ]

The **Policy decisions: Per policy distribution** chart lists all of the policy engines associated with the selected gateway, and shows the policy, number of allows and denies, and enforcement mode for each policy engine. You can choose a policy engine or policy in the list to view more details about it in the Amazon Bedrock AgentCore console.

![\[Policy decisions: Per policy distribution\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/06_policydistribution.png)


------

# Gateway details - Traces
<a name="gateways-traces"></a>

The **Traces** tab displays all of the sampled traces for the selected gateway. In the **Traces** section, choose a **Trace ID** to view metrics for the trace and all of the spans within it.

![\[Gateways traces view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/gateway_traces.png)


# Identity
<a name="Identity"></a>

Track identity and access management operations to ensure secure and compliant agent behavior. For more information on Amazon Bedrock AgentCore Identity, see [Create agent and tool identities with AgentCore Identity](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/identity.html). Identity observability includes monitoring for different authentication methods:

![\[Identity metrics view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Identity.png)

+ **Identities** – Access detailed trace information for identity operations
+ **Traces** – Apply advanced filters to analyze specific trace patterns

Under **Identities**, you will see the following:
+ **Outbound Auth** – Total number of outbound authentication requests initiated by Amazon Bedrock AgentCore to external identity providers 
+ **OAuth token fetches** – Number of OAuth access tokens successfully retrieved from configured OAuth providers for agent authentication 
+ **OAuth token fetch error rate** – Percentage of OAuth token retrieval attempts that failed due to network issues, invalid credentials, or provider errors 
+ **API key fetches** – Number of API keys retrieved from configured key management systems for authenticating agent requests
+ **API key fetch error rate** – Percentage of API key retrieval attempts that failed due to access issues, invalid keys, or system errors

Choose **View details** to see the Identity metrics in graphs.

![\[Identity details view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Identity_details.png)


Under **Outbound Auths**, choose an outbound auth **Name** to view the dashboard.

![\[Identity auth view\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Identity_auth_view.png)


On the **OAuth client details** page, you will see the following tabs:
+ **Overview** – Displays comprehensive outbound authentication usage metrics and patterns for OAuth clients
  + **Token fetches** – Total number of authentication token requests made by agents, including both machine-to-machine and on-behalf-of-user authentication flows. This metric tracks overall authentication activity and helps with capacity planning for identity services
  + **Token fetch error rate** – Percentage of failed token requests out of total authentication attempts. Monitor this metric to identify authentication issues, expired credentials, or permission problems. Trends over time help detect degrading authentication performance
+ **Traces** – Displays detailed trace information for identity and authentication operations, including OAuth flows, workload identity token requests, and third-party service integrations. Use traces to troubleshoot authentication failures, analyze token fetch latency, and monitor security compliance across agent interactions  
![\[Identity trace summary\]](http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/images/Entity_trace_summary.png)