
Example: Use Application Signals to troubleshoot generative AI applications interacting with Amazon Bedrock models


You can use Application Signals to troubleshoot your generative AI applications that interact with Amazon Bedrock models. Application Signals streamlines this process by providing out-of-the-box telemetry data, offering deeper insights into your application's interactions with large language models (LLMs). It helps address key use cases such as:

  • Model configuration issues

  • Model usage costs

  • Model latency

  • Reasons why model response generation stopped

Enabling Application Signals with LLM/GenAI Observability provides real-time visibility into your application's interactions with Amazon Bedrock services. Application Signals automatically generates and correlates performance metrics and traces for Amazon Bedrock API calls.
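For example, a standard Amazon Bedrock Runtime call made with the AWS SDK typically needs no code changes to emit this telemetry. The following minimal sketch assumes the application is launched with Application Signals auto-instrumentation enabled (for example, the AWS Distro for OpenTelemetry Python agent); the Region, model ID, and prompt are placeholders.

```python
# Minimal sketch: a Bedrock Runtime call that Application Signals can instrument
# automatically when the application runs with auto-instrumentation enabled
# (for example, via the AWS Distro for OpenTelemetry Python agent).
# The Region, model ID, and prompt below are placeholders.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "temperature": 0.5,
        "messages": [{"role": "user", "content": "Summarize our return policy."}],
    }),
)

# Print the generated text from the model response.
print(json.loads(response["body"].read())["content"][0]["text"])
```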

Application Signals currently supports the following LLM models from Amazon Bedrock:

  • AI21 Jamba

  • Amazon Titan

  • Anthropic Claude

  • Cohere Command

  • Meta Llama

  • Mistral AI

  • Nova

Fine-grained metrics and traces

For each Amazon Bedrock API call, Application Signals generates detailed performance metrics at the resource level, including:

  • Model ID

  • Guardrails ID

  • Knowledge Base ID

  • Bedrock Agent ID

Additionally, correlated trace spans at the same resource level provide a comprehensive view of request execution and dependencies.
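As a quick illustration, you can retrieve these resource-level metrics programmatically with the CloudWatch API. In the following sketch, the namespace, metric name, and dimension names and values are assumptions for illustration and may differ from what is actually emitted in your account; check the metrics that Application Signals publishes for your service before relying on them.

```python
# Hedged sketch: querying an Application Signals dependency metric for a Bedrock
# model with the CloudWatch API. The namespace, metric name, and the dimension
# names/values below are assumptions for illustration; confirm the exact
# dimensions emitted in your account (for example, in the CloudWatch console).
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="ApplicationSignals",        # assumed namespace
    MetricName="Latency",                  # assumed metric name
    Dimensions=[
        {"Name": "Service", "Value": "my-genai-app"},        # placeholder service name
        {"Name": "RemoteService", "Value": "AWS::Bedrock"},   # assumed value
        {"Name": "RemoteResourceIdentifier",                  # assumed dimension
         "Value": "anthropic.claude-3-haiku-20240307-v1:0"},
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average", "Maximum"],
)

# Print latency datapoints in chronological order.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```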

Performance metrics using Application Signals.

OpenTelemetry GenAI attributes support

Application Signals generates the following GenAI attributes for Amazon Bedrock API calls, following the OpenTelemetry GenAI semantic conventions. These attributes help you analyze model usage, cost, and response quality, and can be queried through Transaction Search for deeper insights.

  • gen_ai.system

  • gen_ai.request.model

  • gen_ai.request.max_tokens

  • gen_ai.request.temperature

  • gen_ai.request.top_p

  • gen_ai.usage.input_tokens

  • gen_ai.usage.output_tokens

  • gen_ai.response.finish_reasons

GenAI attributes using Application Signals.

For example, you can use the analytics capabilities of Transaction Search to compare token usage and cost across different LLM models for the same prompt, enabling cost-efficient model selection.
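The following sketch illustrates one way to make that comparison programmatically, using a CloudWatch Logs Insights query over the spans collected by Transaction Search. The log group name (aws/spans) and the field paths for the GenAI attributes are assumptions for illustration; adjust them to match the span fields you see in your account.

```python
# Hedged sketch: comparing token usage across models with a CloudWatch Logs
# Insights query over spans collected by Transaction Search. The log group name
# ("aws/spans") and the attribute field paths are assumptions for illustration.
import time

import boto3

logs = boto3.client("logs", region_name="us-east-1")

query = """
fields attributes.gen_ai.request.model as model,
       attributes.gen_ai.usage.input_tokens as input_tokens,
       attributes.gen_ai.usage.output_tokens as output_tokens
| filter ispresent(model)
| stats sum(input_tokens) as total_input, sum(output_tokens) as total_output by model
"""

start = logs.start_query(
    logGroupName="aws/spans",            # assumed Transaction Search log group
    startTime=int(time.time()) - 3600,   # last hour
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query finishes, then print per-model token totals.
results = logs.get_query_results(queryId=start["queryId"])
while results["status"] in ("Scheduled", "Running"):
    time.sleep(2)
    results = logs.get_query_results(queryId=start["queryId"])

for row in results["results"]:
    print({field["field"]: field["value"] for field in row})
```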


For more information, see Improve Amazon Bedrock Observability with CloudWatch Application Signals.
