

# Observability
<a name="observability"></a>

Observability in Amazon Bedrock helps you track performance, manage resources, and automate deployments.

**Topics**
+ [Monitoring the performance of Amazon Bedrock](monitoring.md)
+ [Tagging Amazon Bedrock resources](tagging.md)

# Monitoring the performance of Amazon Bedrock
<a name="monitoring"></a>

You can monitor all parts of your Amazon Bedrock application using Amazon CloudWatch, which collects raw data and processes it into readable, near real-time metrics. You can graph the metrics using the CloudWatch console. You can also set alarms that watch for certain thresholds, and send notifications or take actions when values exceed those thresholds.

For more information, see [What is Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatch.html) in the *Amazon CloudWatch User Guide*.

Amazon Bedrock provides comprehensive monitoring capabilities across different components of your application:
+ [Monitor model invocation using CloudWatch Logs and Amazon S3](model-invocation-logging.md) - Track and analyze model invocations using CloudWatch Logs and Amazon S3.
+ [Monitor knowledge bases using CloudWatch Logs](knowledge-bases-logging.md) - Monitor knowledge base operations and performance.
+ [Monitor Amazon Bedrock Guardrails using CloudWatch metrics](monitoring-guardrails-cw-metrics.md) - Track guardrail evaluations and policy enforcement.
+ [Monitor Amazon Bedrock Agents using CloudWatch Metrics](monitoring-agents-cw-metrics.md) - Monitor agent invocations and performance metrics.
+ [Amazon Bedrock runtime metrics](#runtime-cloudwatch-metrics) - View key runtime metrics including invocations, latency, errors, and token counts.
+ [Monitor Amazon Bedrock job state changes using Amazon EventBridge](monitoring-eventbridge.md) - Track job state changes and automate responses to events.
+ [Monitor Amazon Bedrock API calls using CloudTrail](logging-using-cloudtrail.md) - Audit API calls and track user activity.

**Topics**
+ [Monitor model invocation using CloudWatch Logs and Amazon S3](model-invocation-logging.md)
+ [Monitor knowledge bases using CloudWatch Logs](knowledge-bases-logging.md)
+ [Monitor Amazon Bedrock Guardrails using CloudWatch metrics](monitoring-guardrails-cw-metrics.md)
+ [Monitor Amazon Bedrock Agents using CloudWatch Metrics](monitoring-agents-cw-metrics.md)
+ [Amazon Bedrock runtime metrics](#runtime-cloudwatch-metrics)
+ [CloudWatch metrics for Amazon Bedrock](#br-cloudwatch-metrics)
+ [Monitor Amazon Bedrock job state changes using Amazon EventBridge](monitoring-eventbridge.md)
+ [Monitor Amazon Bedrock API calls using CloudTrail](logging-using-cloudtrail.md)

# Monitor model invocation using CloudWatch Logs and Amazon S3
<a name="model-invocation-logging"></a>

You can use model invocation logging to collect invocation logs, model input data, and model output data for all Amazon Bedrock model invocations in your AWS account in a Region.

With invocation logging, you can collect the full request data, response data, and metadata associated with all calls performed in your account in a Region. When you configure logging, you specify the destination resources where the log data is published. Supported destinations include Amazon CloudWatch Logs and Amazon Simple Storage Service (Amazon S3). Only destinations in the same account and Region are supported.

Model invocation logging is disabled by default. After model invocation logging is enabled, logs are stored until the logging configuration is deleted.

Invocations of the following operations can be logged:
+ [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html)
+ [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html)
+ [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html)
+ [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html)

When [using the Converse API](conversation-inference-call.md), any image or document data that you pass is logged in Amazon S3 (if you have [enabled](#model-invocation-logging-console) delivery and image logging in Amazon S3).

Before you can enable invocation logging, you need to set up an Amazon S3 or CloudWatch Logs destination. You can enable invocation logging through either the console or the API.

**Topics**
+ [Set up an Amazon S3 destination](#setup-s3-destination)
+ [Set up a CloudWatch Logs destination](#setup-cloudwatch-logs-destination)
+ [Model invocation logging using the console](#model-invocation-logging-console)
+ [Model invocation logging using the API](#using-apis-logging)

## Set up an Amazon S3 destination
<a name="setup-s3-destination"></a>

**Note**  
When using Amazon S3 as a logging destination, the bucket must be created in the same AWS Region where you create the model invocation logging configuration.

You can set up an S3 destination for logging in Amazon Bedrock with these steps:

1. Create an S3 bucket where the logs will be delivered.

1. Add a bucket policy like the following to the bucket (replace the values for *accountId*, *region*, *bucketName*, and, optionally, *prefix*):
**Note**  
A bucket policy is automatically attached to the bucket on your behalf when you configure logging with the `s3:GetBucketPolicy` and `s3:PutBucketPolicy` permissions.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "AmazonBedrockLogsWrite",
               "Effect": "Allow",
               "Principal": {
                   "Service": "bedrock.amazonaws.com"
               },
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": [
                   "arn:aws:s3:::bucketName/prefix/AWSLogs/123456789012/BedrockModelInvocationLogs/*"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": "123456789012"
                   },
                   "ArnLike": {
                       "aws:SourceArn": "arn:aws:bedrock:us-east-1:123456789012:*"
                   }
               }
           }
       ]
   }
   ```

------

1. (Optional) If you configure SSE-KMS on the bucket, add the following policy to the KMS key:

   ```
   {
       "Effect": "Allow",
       "Principal": {
           "Service": "bedrock.amazonaws.com"
       },
       "Action": "kms:GenerateDataKey",
       "Resource": "*",
       "Condition": {
           "StringEquals": {
             "aws:SourceAccount": "accountId" 
           },
           "ArnLike": {
              "aws:SourceArn": "arn:aws:bedrock:region:accountId:*"
           }
       }
   }
   ```

For more information on S3 SSE-KMS configurations, see [Specifying KMS Encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-kms-encryption.html).

**Note**  
The bucket ACL must be disabled in order for the bucket policy to take effect. For more information, see [Disabling ACLs for all new buckets and enforcing Object Ownership](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ensure-object-ownership.html).
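The bucket policy above can also be constructed and attached programmatically. The following is a minimal sketch, assuming a hypothetical bucket named `amzn-bedrock-logs-example` in account `123456789012`; the helper function name is ours, not part of any AWS SDK:

```python
def build_bedrock_logging_bucket_policy(account_id, region, bucket_name, prefix=""):
    """Return the bucket policy that allows Amazon Bedrock to deliver invocation logs."""
    key_prefix = f"{prefix}/" if prefix else ""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AmazonBedrockLogsWrite",
                "Effect": "Allow",
                "Principal": {"Service": "bedrock.amazonaws.com"},
                "Action": ["s3:PutObject"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}/{key_prefix}"
                    f"AWSLogs/{account_id}/BedrockModelInvocationLogs/*"
                ],
                # Confused-deputy protection: only deliveries originating from
                # Bedrock resources in this account and Region are allowed.
                "Condition": {
                    "StringEquals": {"aws:SourceAccount": account_id},
                    "ArnLike": {
                        "aws:SourceArn": f"arn:aws:bedrock:{region}:{account_id}:*"
                    },
                },
            }
        ],
    }

policy = build_bedrock_logging_bucket_policy(
    "123456789012", "us-east-1", "amzn-bedrock-logs-example"
)
# With AWS credentials configured, attach it with boto3 (the bucket must already
# exist in the same Region as the logging configuration):
# import boto3, json
# boto3.client("s3").put_bucket_policy(
#     Bucket="amzn-bedrock-logs-example", Policy=json.dumps(policy)
# )
```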

## Set up a CloudWatch Logs destination
<a name="setup-cloudwatch-logs-destination"></a>

You can set up an Amazon CloudWatch Logs destination for logging in Amazon Bedrock with the following steps:

1. Create a CloudWatch log group where the logs will be published.

1. Create an IAM role with the following permissions for CloudWatch Logs.

   **Trusted entity**:

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "bedrock.amazonaws.com"
               },
               "Action": "sts:AssumeRole",
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": "123456789012"
                   },
                   "ArnLike": {
                       "aws:SourceArn": "arn:aws:bedrock:us-east-1:123456789012:*"
                   }
               }
           }
       ]
   }
   ```

------

   **Role policy**:

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "logs:CreateLogStream",
                   "logs:PutLogEvents"
               ],
               "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:logGroupName:log-stream:aws/bedrock/modelinvocations"
           }
       ]
   }
   ```

------

For more information on setting up SSE for CloudWatch Logs, see [Encrypt log data in CloudWatch Logs using AWS Key Management Service](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html).
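The trust and role policies above can also be created programmatically. A minimal sketch, assuming a hypothetical role name `BedrockInvocationLogging` and log group `my-bedrock-invocation-logs`:

```python
ACCOUNT_ID = "123456789012"               # replace with your account ID
REGION = "us-east-1"
LOG_GROUP = "my-bedrock-invocation-logs"  # hypothetical log group name

# Trust policy: lets the Amazon Bedrock service assume the role, scoped to
# this account and Region.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "bedrock.amazonaws.com"},
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {"aws:SourceAccount": ACCOUNT_ID},
            "ArnLike": {"aws:SourceArn": f"arn:aws:bedrock:{REGION}:{ACCOUNT_ID}:*"},
        },
    }],
}

# Role policy: write access to the model invocation log stream only.
role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": (
            f"arn:aws:logs:{REGION}:{ACCOUNT_ID}:log-group:{LOG_GROUP}"
            ":log-stream:aws/bedrock/modelinvocations"
        ),
    }],
}

# With AWS credentials configured:
# import boto3, json
# iam = boto3.client("iam")
# iam.create_role(RoleName="BedrockInvocationLogging",
#                 AssumeRolePolicyDocument=json.dumps(trust_policy))
# iam.put_role_policy(RoleName="BedrockInvocationLogging", PolicyName="WriteLogs",
#                     PolicyDocument=json.dumps(role_policy))
```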

## Model invocation logging using the console
<a name="model-invocation-logging-console"></a>

**To enable model invocation logging**

Sign in to the AWS Management Console with an IAM identity that has permissions to use the Amazon Bedrock console. Then, open the Amazon Bedrock console at [https://console.aws.amazon.com/bedrock](https://console.aws.amazon.com/bedrock).

1. From the left navigation pane, select **Settings**.

1. In the **Model invocation logging** page, select **Model invocation logging**. Additional configuration settings for logging will appear.

1. Select the modalities of the data requests and responses that you want to publish to the logs. You can select any combination of the following output options:
   + Text
   + Image
   + Embedding
   + Video
**Note**  
Data will be logged for *all* models that support the modalities (whether as input or output) that you choose. For example, if you select **Image**, model invocation will be logged for all models that support image input, image output, or both.

1. Select where to publish the logs:
   + Amazon S3 only
   + CloudWatch Logs only
   + Both Amazon S3 and CloudWatch Logs 

**Logging destinations**  
Amazon S3 and CloudWatch Logs destinations are supported for invocation logs, and small input and output data. For large input and output data or binary image outputs, only Amazon S3 is supported. The following details summarize how the data will be represented in the target location.
+ **S3 destination** — Gzipped JSON files, each containing a batch of invocation log records, are delivered to the specified S3 bucket. Similar to a CloudWatch Logs event, each record contains the invocation metadata and the input and output JSON bodies of up to 100 KB in size. Binary data or JSON bodies larger than 100 KB are uploaded as individual objects in the specified Amazon S3 bucket under the data prefix. The data can be queried using Amazon S3 Select and Amazon Athena, and can be catalogued for ETL using AWS Glue. The data can also be loaded into Amazon OpenSearch Service or processed by any Amazon EventBridge targets.
+ **CloudWatch Logs destination** — JSON invocation log events are delivered to a specified log group in CloudWatch Logs. The log event contains the invocation metadata and the input and output JSON bodies of up to 100 KB in size. If an Amazon S3 location for large data delivery is provided, binary data or JSON bodies larger than 100 KB are instead uploaded to the Amazon S3 bucket under the data prefix. The data can be queried using CloudWatch Logs Insights and can be streamed in near real time to various services using CloudWatch Logs.

## Model invocation logging using the API
<a name="using-apis-logging"></a>

Model invocation logging can be configured using the following APIs:
+ [PutModelInvocationLoggingConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_PutModelInvocationLoggingConfiguration.html)
+ [GetModelInvocationLoggingConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelInvocationLoggingConfiguration.html)
+ [DeleteModelInvocationLoggingConfiguration](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_DeleteModelInvocationLoggingConfiguration.html)
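As a sketch, the following shows the shape of a `PutModelInvocationLoggingConfiguration` request as sent through the boto3 `bedrock` client. The log group, role, and bucket names are hypothetical, and the field names should be verified against the current API reference:

```python
# Shape of the loggingConfig payload (field names per the boto3 "bedrock"
# client; verify against the current API reference).
logging_config = {
    "cloudWatchConfig": {
        "logGroupName": "my-bedrock-invocation-logs",  # hypothetical
        "roleArn": "arn:aws:iam::123456789012:role/BedrockInvocationLogging",
    },
    "s3Config": {
        "bucketName": "amzn-bedrock-logs-example",     # hypothetical
        "keyPrefix": "bedrock",
    },
    # Modalities to log; models supporting a selected modality are logged.
    "textDataDeliveryEnabled": True,
    "imageDataDeliveryEnabled": True,
    "embeddingDataDeliveryEnabled": False,
}

# With AWS credentials configured:
# import boto3
# bedrock = boto3.client("bedrock")
# bedrock.put_model_invocation_logging_configuration(loggingConfig=logging_config)
# current = bedrock.get_model_invocation_logging_configuration().get("loggingConfig")
# bedrock.delete_model_invocation_logging_configuration()  # turns logging off
```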

# Monitor knowledge bases using CloudWatch Logs
<a name="knowledge-bases-logging"></a>

Amazon Bedrock provides logging to help you understand the execution of data ingestion jobs for your knowledge bases. The following sections cover how to enable and configure logging for Amazon Bedrock knowledge bases using both the AWS Management Console and the CloudWatch API. With this logging, you can gain visibility into the data ingestion of your knowledge base resources.

## Knowledge bases logging using the console
<a name="knowledge-bases-logging-console"></a>

To enable logging for an Amazon Bedrock knowledge base using the AWS Management Console:

1. **Create a knowledge base**: Use the AWS Management Console for Amazon Bedrock to [create a new knowledge base](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-create.html).

1. **Add a log delivery option**: After creating the knowledge base, edit or update your knowledge base to add a log delivery option.
**Note**  
Log deliveries are not supported when creating a knowledge base with a structured data store, or for a Kendra GenAI Index.

   **Configure log delivery details**: Enter the details for the log delivery, including:
   + Logging destination (CloudWatch Logs, Amazon S3, or Amazon Data Firehose)
   + (If using CloudWatch Logs as the logging destination) Log group name
   + (If using Amazon S3 as the logging destination) Bucket name
   + (If using Amazon Data Firehose as the logging destination) Firehose stream

1. **Include access permissions**: The user who is signed into the console must have the necessary permissions to write the collected logs to the chosen destination.

   The following example IAM policy can be attached to the user signed in to the console to grant the necessary permissions when using CloudWatch Logs:

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "logs:CreateDelivery",
               "Resource": [
                   "arn:aws:logs:us-east-1:123456789012:delivery-source:*",
                   "arn:aws:logs:us-east-1:123456789012:delivery:*",
                   "arn:aws:logs:us-east-1:123456789012:delivery-destination:*"
               ]
           }
       ]
   }
   ```

------

1. **Confirm delivery status**: Verify that the log delivery status is "Delivery active" in the console.

## Knowledge bases logging using the CloudWatch API
<a name="knowledge-bases-logging-cloudwatch-api"></a>

To enable logging for an Amazon Bedrock knowledge base using the CloudWatch API:

1. **Get the ARN of your knowledge base**: After [creating a knowledge base](https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-create.html) using either the Amazon Bedrock API or the Amazon Bedrock console, get the Amazon Resource Name (ARN) of the knowledge base by calling the [GetKnowledgeBase](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_GetKnowledgeBase.html) API. A knowledge base ARN follows this format: *arn:aws:bedrock:your-region:your-account-id:knowledge-base/knowledge-base-id*

1. **Call `PutDeliverySource`**: Use the [PutDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) API provided by Amazon CloudWatch to create a delivery source for the knowledge base. Pass the knowledge base ARN as the `resourceArn`. For `logType`, specify `APPLICATION_LOGS`, which tracks the current status of files during an ingestion job.

   ```
   {
       "logType": "APPLICATION_LOGS",
       "name": "my-knowledge-base-delivery-source",
       "resourceArn": "arn:aws:bedrock:your-region:your-account-id:knowledge-base/knowledge_base_id"
   }
   ```

1. **Call `PutDeliveryDestination`**: Use the [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) API provided by Amazon CloudWatch to configure where the logs will be stored. You can choose either CloudWatch Logs, Amazon S3, or Amazon Data Firehose as the destination for storing logs. You must specify the Amazon Resource Name of one of the destination options for where your logs will be stored. You can choose the `outputFormat` of the logs to be one of the following: `json`, `plain`, `w3c`, `raw`, `parquet`. The following is an example of configuring logs to be stored in an Amazon S3 bucket and in JSON format.

   ```
   {
      "deliveryDestinationConfiguration": { 
         "destinationResourceArn": "arn:aws:s3:::bucket-name"
      },
      "name": "string",
      "outputFormat": "json",
      "tags": { 
         "key" : "value" 
      }
   }
   ```

   Note that if you are delivering logs cross-account, you must use the `PutDeliveryDestinationPolicy` API to assign an AWS Identity and Access Management (IAM) policy to the destination account. The IAM policy allows delivery from one account to another account.

1. **Call `CreateDelivery`**: Use the [CreateDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) API call to link the delivery source to the destination that you created in the previous steps. This API operation associates the delivery source with the end destination.

   ```
   {
      "deliveryDestinationArn": "string",
      "deliverySourceName": "string",
      "tags": { 
         "string" : "string" 
      }
   }
   ```

**Note**  
If you want to use CloudFormation, you can use the following:  
[Delivery](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-delivery.html)
[DeliveryDestination](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-deliverydestination.html)
[DeliverySource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-deliverysource.html)
For these resources, the `ResourceArn` is the knowledge base ARN, and `LogType` must be `APPLICATION_LOGS`, the only supported log type.

## Supported log types
<a name="knowledge-bases-logging-log-types"></a>

Amazon Bedrock knowledge bases support the following log types:
+ `APPLICATION_LOGS`: Logs that track the current status of a specific file during a data ingestion job.

## User permissions and limits
<a name="knowledge-bases-logging-permissions-other-requirements"></a>

To enable logging for an Amazon Bedrock knowledge base, the following permissions are required for the user account signed into the console:

1. `bedrock:AllowVendedLogDeliveryForResource` – Required to allow logs to be delivered for the knowledge base resource.

   You can view an example IAM role/permissions policy with all the required permissions for your specific logging destination. See [Vended logs permissions for different delivery destinations](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-vended-logs-permissions-V2), and follow the IAM role/permission policy example for your logging destination, including allowing updates to your specific logging destination resource (whether CloudWatch Logs, Amazon S3, or Amazon Data Firehose).

You can also check if there are any quota limits for making CloudWatch logs delivery-related API calls in the [CloudWatch Logs service quotas documentation](https://docs.aws.amazon.com/general/latest/gr/cwl_region.html). Quota limits set a maximum number of times you can call an API or create a resource. If you exceed a limit, it will result in a `ServiceQuotaExceededException` error.

## Examples of knowledge base logs
<a name="knowledge-bases-logging-example-logs"></a>

There are data ingestion level logs and resource level logs for Amazon Bedrock knowledge bases.

The following is an example of a data ingestion job log.

```
{
    "event_timestamp": 1718683433639,
    "event": {
        "ingestion_job_id": "<IngestionJobId>",
        "data_source_id": "<DataSourceId>",
        "ingestion_job_status": "INGESTION_JOB_STARTED" | "STOPPED" | "COMPLETE" | "FAILED" | "CRAWLING_COMPLETED",
        "knowledge_base_arn": "arn:aws:bedrock:<region>:<accountId>:knowledge-base/<KnowledgeBaseId>",
        "resource_statistics": {
            "number_of_resources_updated": int,
            "number_of_resources_ingested": int,
            "number_of_resources_scheduled_for_update": int,
            "number_of_resources_scheduled_for_ingestion": int,
            "number_of_resources_scheduled_for_metadata_update": int,
            "number_of_resources_deleted": int,
            "number_of_resources_with_metadata_updated": int,
            "number_of_resources_failed": int,
            "number_of_resources_scheduled_for_deletion": int
        }
    },
    "event_version": "1.0",
    "event_type": "StartIngestionJob.StatusChanged",
    "level": "INFO"
}
```

The following is an example of a resource level log.

```
{
    "event_timestamp": 1718677342332,
    "event": {
        "ingestion_job_id": "<IngestionJobId>",
        "data_source_id": "<DataSourceId>",
        "knowledge_base_arn": "arn:aws:bedrock:<region>:<accountId>:knowledge-base/<KnowledgeBaseId>",
        "document_location": {
            "type": "S3",
            "s3_location": {
                "uri": "s3://<BucketName>/<ObjectKey>"
            }
        },
        "status": "<ResourceStatus>",
        "status_reasons": String[],
        "chunk_statistics": {
            "ignored": int,
            "created": int,
            "deleted": int,
            "metadata_updated": int,
            "failed_to_create": int,
            "failed_to_delete": int,
            "failed_to_update_metadata": int
        }
    },
    "event_version": "1.0",
    "event_type": "StartIngestionJob.ResourceStatusChanged",
    "level": "INFO" | "WARN" | "ERROR"
}
```

The `status` for the resource can be one of the following:
+ `SCHEDULED_FOR_INGESTION`, `SCHEDULED_FOR_DELETION`, `SCHEDULED_FOR_UPDATE`, `SCHEDULED_FOR_METADATA_UPDATE`: These status values indicate that the resource is scheduled for processing after calculating the difference between the current state of the knowledge base and the changes made in the data source.
+ `RESOURCE_IGNORED`: This status value indicates that the resource was ignored during processing; the reason is detailed in the `status_reasons` property.
+ `EMBEDDING_STARTED` and `EMBEDDING_COMPLETED`: These status values indicate when the vector embedding for a resource started and completed.
+ `INDEXING_STARTED` and `INDEXING_COMPLETED`: These status values indicate when the indexing for a resource started and completed.
+ `DELETION_STARTED` and `DELETION_COMPLETED`: These status values indicate when the deletion for a resource started and completed.
+ `METADATA_UPDATE_STARTED` and `METADATA_UPDATE_COMPLETED`: These status values indicate when the metadata update for a resource started and completed.
+ `EMBEDDING_FAILED`, `INDEXING_FAILED`, `DELETION_FAILED`, and `METADATA_UPDATE_FAILED`: These status values indicate that the processing of a resource failed, and the reasons are detailed in the `status_reasons` property.
+ `INDEXED`, `DELETED`, `PARTIALLY_INDEXED`, `METADATA_PARTIALLY_INDEXED`, `FAILED`: When the processing of a document is finalized, a log is published with the final status of the document and a summary of the processing in the `chunk_statistics` property.

## Examples of common queries to debug knowledge base logs
<a name="knowledge-bases-logging-example-queries"></a>

You can interact with logs using queries. For example, you can query for all documents with the event status `RESOURCE_IGNORED` during ingestion of documents or data.

The following are some common queries that can be used to debug the logs generated using CloudWatch Logs Insights:
+ Query for all the logs generated for a specific S3 document.

  `filter event.document_location.s3_location.uri = "s3://<bucketName>/<objectKey>"`
+ Query for all documents ignored during the data ingestion job.

  `filter event.status = "RESOURCE_IGNORED"`
+ Query for all the exceptions that occurred while vector embedding documents.

  `filter event.status = "EMBEDDING_FAILED"`
+ Query for all the exceptions that occurred while indexing documents into the vector database.

  `filter event.status = "INDEXING_FAILED"`
+ Query for all the exceptions that occurred while deleting documents from the vector database.

  `filter event.status = "DELETION_FAILED"`
+ Query for all the exceptions that occurred while updating the metadata of your document in the vector database.

  `filter event.status = "METADATA_UPDATE_FAILED"`
+ Query for all the exceptions that occurred during the execution of a data ingestion job.

  `filter level = "ERROR" or level = "WARN"`
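The filter snippets above can be combined into a complete CloudWatch Logs Insights query and run programmatically. The following sketch assumes a hypothetical log group `/aws/vendedlogs/my-kb-logs` receiving the knowledge base logs:

```python
LOG_GROUP = "/aws/vendedlogs/my-kb-logs"  # hypothetical log group name

# Surface the most recent ingestion problems together with the per-resource
# failure reasons.
query = (
    "fields @timestamp, event.document_location.s3_location.uri, "
    "event.status, event.status_reasons"
    " | filter level = 'ERROR' or level = 'WARN'"
    " | sort @timestamp desc"
    " | limit 50"
)

# With AWS credentials configured:
# import boto3, time
# logs = boto3.client("logs")
# q = logs.start_query(
#     logGroupName=LOG_GROUP,
#     startTime=int(time.time()) - 3600,  # last hour
#     endTime=int(time.time()),
#     queryString=query,
# )
# results = logs.get_query_results(queryId=q["queryId"])  # poll until complete
```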

# Monitor Amazon Bedrock Guardrails using CloudWatch metrics
<a name="monitoring-guardrails-cw-metrics"></a>

The following table describes runtime metrics provided by Amazon Bedrock Guardrails that you can monitor with Amazon CloudWatch metrics.

**Runtime metrics**


| Metric name | Unit | Description | 
| --- | --- | --- | 
| Invocations | SampleCount | Number of requests to the ApplyGuardrail API operation | 
| InvocationLatency | Milliseconds | Latency of the invocations | 
| InvocationClientErrors | SampleCount | Number of invocations that result in client-side errors | 
| InvocationServerErrors | SampleCount | Number of invocations that result in AWS server-side errors | 
| InvocationThrottles | SampleCount | Number of invocations that the system throttled. Throttled requests don't count as invocations or errors | 
| TextUnitCount | SampleCount | Number of text units consumed by the guardrails policies | 
| InvocationsIntervened | SampleCount | Number of invocations where the guardrails intervened | 
| FindingCounts | SampleCount | Counts for each type of finding from InvokeAutomatedReasoningCheck | 
| TotalFindings | SampleCount | Number of findings produced for each InvokeAutomatedReasoningCheck request | 
| Invocations | SampleCount | Number of requests to InvokeAutomatedReasoningCheck | 
| Latency | Milliseconds | Latency of verification using automated reasoning policy | 

You can view guardrail dimensions in the CloudWatch console, as described in the following table:

**Dimension**


| Dimension name | Dimension values | Available for the following metrics | 
| --- | --- | --- | 
| Operation | ApplyGuardrail |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  | 
| GuardrailContentSource |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  | 
| GuardrailPolicyType |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  | 
| GuardrailArn, GuardrailVersion |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  | 
| FindingType, PolicyArn, PolicyVersion | FindingType, PolicyArn, PolicyVersion |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  | 
| FindingType, GuardrailArn, GuardrailVersion | FindingType, GuardrailArn, GuardrailVersion |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  | 
| PolicyArn, PolicyVersion | PolicyArn, PolicyVersion |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  | 
| GuardrailArn, GuardrailVersion | GuardrailArn, GuardrailVersion |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-guardrails-cw-metrics.html)  | 

**Get CloudWatch metrics for guardrails**

You can get metrics for guardrails with the AWS Management Console, the AWS CLI, or the CloudWatch API. You can use the CloudWatch API through one of the AWS Software Development Kits (SDKs) or the CloudWatch API tools. 

The namespace for guardrail metrics in CloudWatch is `AWS/Bedrock/Guardrails`.

**Note**  
You must have the appropriate CloudWatch permissions to monitor guardrails with CloudWatch. For more information, see [Authentication and Access Control for CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/auth-and-access-control-cw.html) in the CloudWatch User Guide. 

**View guardrails metrics in the CloudWatch console**

1. Sign in to the AWS Management Console and open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.

1. Choose the `AWS/Bedrock/Guardrails` namespace.
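As a sketch, the same metrics can be retrieved with the CloudWatch `GetMetricStatistics` API through boto3; the namespace, metric, and dimension names follow the tables above:

```python
from datetime import datetime, timedelta, timezone

# Hourly count of guardrail invocations where a policy intervened, last 24 hours.
now = datetime.now(timezone.utc)
params = {
    "Namespace": "AWS/Bedrock/Guardrails",
    "MetricName": "InvocationsIntervened",
    "Dimensions": [{"Name": "Operation", "Value": "ApplyGuardrail"}],
    "StartTime": now - timedelta(days=1),
    "EndTime": now,
    "Period": 3600,            # one datapoint per hour
    "Statistics": ["Sum"],
}

# With AWS credentials configured:
# import boto3
# cw = boto3.client("cloudwatch")
# datapoints = cw.get_metric_statistics(**params)["Datapoints"]
```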

# Monitor Amazon Bedrock Agents using CloudWatch Metrics
<a name="monitoring-agents-cw-metrics"></a>

The following table describes runtime metrics provided by Amazon Bedrock Agents that you can monitor with Amazon CloudWatch Metrics.

**Runtime metrics**


****  

| Metric name | Unit | Description | 
| --- | --- | --- | 
| InvocationCount | SampleCount | Number of requests to the API operation | 
| TotalTime | Milliseconds | The time it took for the server to process the request | 
| TTFT | Milliseconds | Time-to-first-token metric. Emitted when the streaming configuration is enabled for an InvokeAgent or InvokeInlineAgent request | 
| InvocationThrottles | SampleCount | Number of invocations that the system throttled. Throttled requests and other invocation errors don't count as either Invocations or Errors. | 
| InvocationServerErrors | SampleCount | Number of invocations that result in AWS server-side errors | 
| InvocationClientErrors | SampleCount | Number of invocations that result in client-side errors | 
| ModelLatency | Milliseconds | The latency of the model | 
| ModelInvocationCount | SampleCount | Number of requests that the agent made to the model | 
| ModelInvocationThrottles | SampleCount | Number of model invocations that the Amazon Bedrock core throttled. Throttled requests and other invocation errors don't count as either Invocations or Errors. | 
| ModelInvocationClientErrors | SampleCount | Number of model invocations that result in client-side errors | 
| ModelInvocationServerErrors | SampleCount | Number of model invocations that result in AWS server-side errors | 
| InputTokenCount | SampleCount | Number of tokens input to the model. | 
| OutputTokenCount | SampleCount | Number of tokens output from the model. | 

The following table describes the dimensions that you can use to filter agent metrics in the CloudWatch console:

**Dimensions**


| Dimension name | Dimension values | Available for the following metrics | 
| --- | --- | --- | 
| Operation | [InvokeAgent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeAgent.html), [InvokeInlineAgent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeInlineAgent.html) |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-agents-cw-metrics.html)  | 
| Operation, ModelId | Any Amazon Bedrock agent operation listed in the Operation dimension and the  modelId of any Amazon Bedrock core model |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-agents-cw-metrics.html)  | 
| Operation, AgentAliasArn, ModelId | Any Amazon Bedrock agent operation listed in the Operation dimension and any modelId of an Amazon Bedrock model, grouped by the agentAliasArn of an agent alias  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/monitoring-agents-cw-metrics.html)  | 

**Use CloudWatch metrics for agents**

You can get metrics for agents with the AWS Management Console, the AWS CLI, or the CloudWatch API. You can use the CloudWatch API through one of the AWS Software Development Kits (SDKs) or the CloudWatch API tools. 

The namespace for agent metrics in CloudWatch is `AWS/Bedrock/Agents`.
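As a hedged sketch, the following Python example builds the keyword arguments for a CloudWatch `GetMetricStatistics` call against this namespace. The metric name and operation value come from the tables in this section; the 24-hour window and hourly period are illustrative choices, and you would pass the result to a Boto3 CloudWatch client.

```python
from datetime import datetime, timedelta, timezone

def agent_metric_query(metric_name, operation, hours=24):
    """Build kwargs for CloudWatch get_metric_statistics against the
    AWS/Bedrock/Agents namespace, summed per hour."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/Bedrock/Agents",
        "MetricName": metric_name,
        "Dimensions": [{"Name": "Operation", "Value": operation}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 3600,          # one datapoint per hour
        "Statistics": ["Sum"],
    }

# With a Boto3 CloudWatch client you would call, for example:
#   cloudwatch = boto3.client("cloudwatch")
#   response = cloudwatch.get_metric_statistics(
#       **agent_metric_query("InvocationCount", "InvokeAgent"))
params = agent_metric_query("InvocationCount", "InvokeAgent")
print(params["Namespace"])  # → AWS/Bedrock/Agents
```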

You must have the appropriate CloudWatch permissions to monitor agents with CloudWatch. For more information, see [Authentication and Access Control for CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/auth-and-access-control-cw.html) in the CloudWatch User Guide. 

**Important**  
If you don’t want CloudWatch to use collected data for CloudWatch service improvement, you can create an opt-out policy. For more information, see [AI services opt-out policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_ai-opt-out.html).

If you aren't seeing metrics published in the CloudWatch dashboard, make sure the IAM service role that you used to [create](agents-create.md) the agent has the following policy.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Resource": "*",
        "Action": "cloudwatch:PutMetricData",
        "Condition": {
            "StringEquals": {
                "cloudwatch:namespace": "AWS/Bedrock/Agents"
            }
        }
    }
}
```

------

## Amazon Bedrock runtime metrics
<a name="runtime-cloudwatch-metrics"></a>

The following table describes runtime metrics provided by Amazon Bedrock.


| Metric name | Unit | Description | 
| --- | --- | --- | 
| Invocations | SampleCount | Number of successful requests to the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html), [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html), [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html), and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) API operations. | 
|  InvocationLatency  | Milliseconds |  The time from when a request is sent to when the last token is received.  | 
|  InvocationClientErrors  | SampleCount |  Number of invocations that result in client-side errors.  | 
|  InvocationServerErrors  | SampleCount |  Number of invocations that result in AWS server-side errors.  | 
|  InvocationThrottles  | SampleCount |  Number of invocations that the system throttled. Throttled requests and other invocation errors don't count as either Invocations or Errors. The number of throttles you see will depend on your retry settings in the SDK. For more information, see [Retry behavior](https://docs.aws.amazon.com/sdkref/latest/guide/feature-retry-behavior.html) in the AWS SDKs and Tools Reference Guide.   | 
|  InputTokenCount  | SampleCount |  Number of tokens in the input.  | 
| LegacyModelInvocations | SampleCount | Number of invocations using [Legacy](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_FoundationModelLifecycle.html) models  | 
|  OutputTokenCount  | SampleCount |  Number of tokens in the output.  | 
|  OutputImageCount  | SampleCount |  Number of images in the output (only applicable for image generation models).  | 
|  TimeToFirstToken  | Milliseconds |  Time from when a request is sent to when the first token is received, for the [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) streaming API operations.  | 
|  EstimatedTPMQuotaUsage  | SampleCount |  Estimated Tokens Per Minute (TPM) quota consumption across the [Converse](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html), [ConverseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html), [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html), and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) API operations.  | 

There are also metrics for [Amazon Bedrock Guardrails](monitoring-guardrails-cw-metrics.md) and [Amazon Bedrock Agents](monitoring-agents-cw-metrics.md).
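After you retrieve `Invocations` and `InvocationClientErrors` datapoints (for example, with CloudWatch `GetMetricStatistics` and the `Sum` statistic), you can derive a client-error rate per period. The following Python sketch uses illustrative datapoints, not output from a real account; note that throttled requests are counted separately and appear in neither metric.

```python
# Hedged sketch: compute a client-error rate per timestamp from two
# datapoint lists shaped like get_metric_statistics "Datapoints" entries.
def error_rate(invocations, errors):
    """Return errors/invocations keyed by timestamp. Throttles are excluded
    because CloudWatch counts them separately from Invocations and Errors."""
    inv = {d["Timestamp"]: d["Sum"] for d in invocations}
    err = {d["Timestamp"]: d["Sum"] for d in errors}
    return {t: err.get(t, 0.0) / inv[t] for t in inv if inv[t]}

# Illustrative datapoints (a real response uses datetime objects).
invocations = [{"Timestamp": "2025-01-01T00:00:00Z", "Sum": 200.0}]
errors = [{"Timestamp": "2025-01-01T00:00:00Z", "Sum": 5.0}]
print(error_rate(invocations, errors))  # {'2025-01-01T00:00:00Z': 0.025}
```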

## CloudWatch metrics for Amazon Bedrock
<a name="br-cloudwatch-metrics"></a>

For each delivery success or failure attempt, the following Amazon CloudWatch metrics are emitted under the `AWS/Bedrock` namespace with the `Across all model IDs` dimension:
+ `ModelInvocationLogsCloudWatchDeliverySuccess`
+ `ModelInvocationLogsCloudWatchDeliveryFailure`
+ `ModelInvocationLogsS3DeliverySuccess`
+ `ModelInvocationLogsS3DeliveryFailure`
+ `ModelInvocationLargeDataS3DeliverySuccess`
+ `ModelInvocationLargeDataS3DeliveryFailure`
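As a hedged sketch, the following Python dictionary shows keyword arguments you might pass to a Boto3 CloudWatch client's `put_metric_alarm` to alert on any failed Amazon S3 delivery of invocation logs. The alarm name and SNS topic ARN are placeholders.

```python
# Hedged sketch: kwargs for cloudwatch.put_metric_alarm(**alarm) that alarm
# on any failed S3 delivery of model invocation logs.
alarm = {
    "AlarmName": "bedrock-invocation-log-s3-delivery-failures",  # placeholder
    "Namespace": "AWS/Bedrock",
    "MetricName": "ModelInvocationLogsS3DeliveryFailure",
    "Statistic": "Sum",
    "Period": 300,                      # evaluate in 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "TreatMissingData": "notBreaching", # no datapoints means no failures
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # placeholder ARN
}
print(alarm["MetricName"])
```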

To retrieve metrics for your Amazon Bedrock operations, you specify the following information:
+ The metric dimension. A *dimension* is a set of name-value pairs that you use to identify a metric. Amazon Bedrock supports the following dimensions:
  + `ModelId` – all metrics
  + `ModelId + ImageSize + BucketedStepSize` – OutputImageCount
+ The metric name, such as `InvocationClientErrors`. 

You can get metrics for Amazon Bedrock with the AWS Management Console, the AWS CLI, or the CloudWatch API. You can use the CloudWatch API through one of the AWS Software Development Kits (SDKs) or the CloudWatch API tools.

To view Amazon Bedrock metrics in the CloudWatch console, choose **Metrics** in the navigation pane, choose **All metrics**, and then search for the model ID.

You must have the appropriate CloudWatch permissions to monitor Amazon Bedrock with CloudWatch. For more information, see [Authentication and Access Control for Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/auth-and-access-control-cw.html) in the *Amazon CloudWatch User Guide*.

# Monitor Amazon Bedrock job state changes using Amazon EventBridge
<a name="monitoring-eventbridge"></a>

Amazon EventBridge is an AWS service that monitors events from other AWS services in near real time. You can use Amazon EventBridge to monitor events in Amazon Bedrock and to send event information to targets when an event matches a rule that you define. You can then configure your application to respond automatically to these events. Amazon EventBridge supports monitoring of the following events in Amazon Bedrock:
+ [Model customization jobs](custom-models.md) – The state of a job can be seen in the job details in the AWS Management Console or in a [GetModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelCustomizationJob.html) response. For more information, see [Monitor your model customization job](model-customization-monitor.md).
+ [Batch inference jobs](batch-inference.md) – The state of a job can be seen in the job details in the AWS Management Console or in a [GetModelInvocationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelInvocationJob.html) response. For more information, see [Monitor batch inference jobs](batch-inference-monitor.md).

Amazon Bedrock emits events on a best-effort basis. Events from Amazon Bedrock are delivered to Amazon EventBridge in near real time. You can create rules that trigger programmatic actions in response to an event. With Amazon EventBridge, you can do the following:
+ Publish notifications whenever there is a state change event in a job that you've submitted, including for any asynchronous workflows that are added in the future. The notification gives you enough information to react to events in downstream workflows.
+ Deliver job status updates without polling a Get API operation, which helps you avoid API rate limits and reduces the need for additional compute resources.

There is no cost to receive AWS events from Amazon EventBridge. For more information about Amazon EventBridge, see [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) in the *Amazon EventBridge User Guide*.

**Topics**
+ [How EventBridge for Amazon Bedrock works](monitoring-eventbridge-how-it-works.md)
+ [[Example] Create a rule to handle Amazon Bedrock state change events](monitoring-eventbridge-create-rule-ex.md)

# How EventBridge for Amazon Bedrock works
<a name="monitoring-eventbridge-how-it-works"></a>

Amazon EventBridge is a serverless event bus that ingests state change events from AWS services, SaaS partners, and customer applications. It processes events based on rules or patterns that you create, and routes these events to one or more *targets* that you choose, such as AWS Lambda, Amazon Simple Queue Service, and Amazon Simple Notification Service. You can configure downstream workflows based on the contents of the event.

Before learning how to use Amazon EventBridge for Amazon Bedrock, review the following pages in the Amazon EventBridge User Guide.
+ [Event bus concepts in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is-how-it-works-concepts.html) – Review the concepts of *events*, *rules*, and *targets*.
+ [Creating rules that react to events in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html) – Learn how to create rules.
+ [Amazon EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html) – Learn how to define event patterns.
+ [Amazon EventBridge targets](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-targets.html) – Learn about the targets you can send events to.

Amazon Bedrock publishes your events via Amazon EventBridge whenever there is a change in the state of a job that you submit. In each case, a new event is created and sent to Amazon EventBridge, which then sends the event to your default event bus. The event shows which job’s state has changed and the current state of the job.

Amazon Bedrock events are identified by the value of the `source` field being `aws.bedrock`. The `detail-type` for events in Amazon Bedrock includes the following:
+ `Model Customization Job State Change`
+ `Batch Inference Job State Change`
+ `Bedrock Data Automation Job Succeeded`
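A downstream consumer can use these two fields to recognize Amazon Bedrock events before routing them. The following minimal Python helper is a sketch that assumes only the job state change `detail-type` values listed above; extend the set if you consume other event types.

```python
# Hedged helper: decide whether an EventBridge event came from Amazon Bedrock
# and, if so, which kind of state change it carries.
BEDROCK_DETAIL_TYPES = {
    "Model Customization Job State Change",
    "Batch Inference Job State Change",
}

def classify_bedrock_event(event):
    """Return the detail-type for a recognized Bedrock event, else None."""
    if event.get("source") != "aws.bedrock":
        return None
    detail_type = event.get("detail-type")
    return detail_type if detail_type in BEDROCK_DETAIL_TYPES else None

sample = {"source": "aws.bedrock",
          "detail-type": "Batch Inference Job State Change"}
print(classify_bedrock_event(sample))  # Batch Inference Job State Change
```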

Select a tab to see a sample event for a job submitted in Amazon Bedrock.

------
#### [ Model Customization Job State Change ]

The following JSON object shows a sample event for when the status of a model customization job has changed:

```
{
  "version": "0",
  "id": "UUID",
  "detail-type": "Model Customization Job State Change",
  "source": "aws.bedrock",
  "account": "123456789012",
  "time": "2023-08-11T12:34:56Z",
  "region": "us-east-1",
  "resources": ["arn:aws:bedrock:us-east-1:123456789012:model-customization-job/abcdefghwxyz"],
  "detail": {
    "version": "0.0",
    "jobName": "abcd-wxyz",
    "jobArn": "arn:aws:bedrock:us-east-1:123456789012:model-customization-job/abcdefghwxyz",
    "outputModelName": "dummy-output-model-name",
    "outputModelArn": "arn:aws:bedrock:us-east-1:123456789012:dummy-output-model-name",
    "roleArn": "arn:aws:iam::123456789012:role/JobExecutionRole",
    "jobStatus": "Failed",
    "failureMessage": "Failure Message here.",
    "creationTime": "2023-08-11T10:11:12Z",
    "lastModifiedTime": "2023-08-11T12:34:56Z",
    "endTime": "2023-08-11T12:34:56Z",
    "baseModelArn": "arn:aws:bedrock:us-east-1:123456789012:base-model-name",
    "hyperParameters": {
      "batchSize": "1",
      "epochCount": "5",
      "learningRate": "0.05",
      "learningRateWarmupSteps": "10"
    },
    "trainingDataConfig": {
      "s3Uri": "s3://bucket/key"
    },
    "validationDataConfig": {
      "s3Uri": "s3://bucket/key"
    },
    "outputDataConfig": {
      "s3Uri": "s3://bucket/key"
    }
  }
}
```

To learn about the fields in the **detail** object that are specific to model customization, see [GetModelCustomizationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelCustomizationJob.html).
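For example, a rule target such as an AWS Lambda function could extract the fields it needs from this event. The following Python sketch pulls the job ARN, status, and failure message out of a trimmed-down version of the sample event above; the helper name is illustrative.

```python
import json

# Hedged sketch: summarize a Model Customization Job State Change event
# into the fields a downstream workflow typically needs.
def summarize_customization_event(event):
    detail = event["detail"]
    status = detail["jobStatus"]
    return {
        "jobArn": detail["jobArn"],
        "status": status,
        # failureMessage is only meaningful when the job failed
        "failure": detail.get("failureMessage") if status == "Failed" else None,
    }

event = json.loads("""{
  "source": "aws.bedrock",
  "detail-type": "Model Customization Job State Change",
  "detail": {
    "jobArn": "arn:aws:bedrock:us-east-1:123456789012:model-customization-job/abcdefghwxyz",
    "jobStatus": "Failed",
    "failureMessage": "Failure Message here."
  }
}""")
print(summarize_customization_event(event)["status"])  # Failed
```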

------
#### [ Batch Inference Job State Change ]

The following JSON object shows a sample event for when the status of a batch inference job has changed:

```
{
  "version": "0",
  "id": "a1b2c3d4",
  "detail-type": "Batch Inference Job State Change",
  "source": "aws.bedrock",
  "account": "123456789012",
  "time": "Wed Aug 28 22:58:30 UTC 2024",
  "region": "us-east-1",
  "resources": ["arn:aws:bedrock:us-east-1:123456789012:model-invocation-job/abcdefghwxyz"],
  "detail": {
    "version": "0.0",
    "accountId": "123456789012",
    "batchJobName": "dummy-batch-job-name",
    "batchJobArn": "arn:aws:bedrock:us-east-1:123456789012:model-invocation-job/abcdefghwxyz",
    "batchModelId": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
    "status": "Completed",
    "failureMessage": "",
    "creationTime": "Aug 28, 2024, 10:47:53 PM"
  }
}
```

To learn about the fields in the **detail** object that are specific to batch inference, see [GetModelInvocationJob](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetModelInvocationJob.html).

------
#### [ Bedrock Data Automation sample event ]

The following JSON object shows a sample event for when the status of a BDA processing job has changed.

```
{
    "version": "0",
    "id": "0cc3eaf7-dff6-6f67-0ee0-ae572fccfe84",
    "detail-type": "Bedrock Data Automation Job Succeeded",
    "source": "aws.bedrock",
    "account": "123456789012",
    "time": "2025-05-27T22:48:36Z",
    "region": "us-west-2",
    "resources": [],
    "detail": {
        "job_id": "25010344-03f7-4167-803a-837afdc7ce98",
        "job_status": "SUCCESS",
        "semantic_modality": "Document",
        "input_s3_object": {
            "s3_bucket": "input-s3-bucket-name",
            "name": "key/name"
        },
        "output_s3_location": {
            "s3_bucket": "output-s3-bucket-name",
            "name": "key"
        },
        "error_message": ""
    }
}
```

------

# [Example] Create a rule to handle Amazon Bedrock state change events
<a name="monitoring-eventbridge-create-rule-ex"></a>

The example in this topic demonstrates how to set up notifications for Amazon Bedrock state change events. You configure an Amazon Simple Notification Service topic, subscribe to the topic, and create a rule in Amazon EventBridge that notifies you of an Amazon Bedrock state change through the topic. Carry out the following procedure:

1. To create a topic, follow the steps at [Creating an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html) in the Amazon Simple Notification Service Developer Guide.

1. To subscribe to the topic that you created, follow the steps at [Creating a subscription to an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-subscribe-endpoint-to-topic.html) in the Amazon Simple Notification Service Developer Guide or send a [Subscribe](https://docs.aws.amazon.com/sns/latest/api/API_Subscribe.html) request with an [Amazon SNS endpoint](https://docs.aws.amazon.com/general/latest/gr/sns.html) and specify the Amazon Resource Name (ARN) of the topic you created.

1. To create a rule to notify you when the state of a job in Amazon Bedrock has changed, follow the steps at [Creating rules that react to events in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html), while considering the following specific actions for this example:
   + Choose to define the rule detail with an event pattern.
   + When you build the event pattern, you can do the following:
     + View a sample event in the **Sample event** section by selecting any of the Amazon Bedrock **Sample events** to understand the fields from an Amazon Bedrock event that you can use when defining the pattern. You can also see the sample events in [How EventBridge for Amazon Bedrock works](monitoring-eventbridge-how-it-works.md).
     + Get started by selecting **Use pattern from** in the **Creation method** section and then choosing Amazon Bedrock as the **AWS service** and the **Event type** that you want to capture. To learn how to define an event pattern, see [Amazon EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html).
   + As an example, you can use the following event pattern to capture when a batch inference job has completed:

     ```
     {
      "source": ["aws.bedrock"],
      "detail-type": ["Batch Inference Job State Change"],
      "detail": {
       "status": ["Completed"]
      }
     }
     ```
   + Select **SNS topic** as the target and choose the topic that you created.

1. After creating the rule, you will be notified through Amazon SNS when a batch inference job has completed.
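To sanity-check a pattern before creating the rule, you can replicate the exact-value matching that this example pattern relies on. The following Python sketch handles only the list-of-allowed-values and nested-object forms used above; real EventBridge patterns support additional operators such as `prefix` and `anything-but`.

```python
# Hedged sketch: a minimal matcher for the exact-value event patterns used
# in this example. Not a full EventBridge pattern implementation.
def matches(pattern, event):
    for key, expected in pattern.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not matches(expected, value):
                return False
        elif value not in expected:   # expected is a list of allowed values
            return False
    return True

pattern = {
    "source": ["aws.bedrock"],
    "detail-type": ["Batch Inference Job State Change"],
    "detail": {"status": ["Completed"]},
}
event = {"source": "aws.bedrock",
         "detail-type": "Batch Inference Job State Change",
         "detail": {"status": "Completed", "batchJobName": "dummy-batch-job-name"}}
print(matches(pattern, event))  # True
```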

# Monitor Amazon Bedrock API calls using CloudTrail
<a name="logging-using-cloudtrail"></a>

Amazon Bedrock is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Amazon Bedrock. CloudTrail captures all API calls for Amazon Bedrock as events. The calls captured include calls from the Amazon Bedrock console and code calls to the Amazon Bedrock API operations. If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Amazon Bedrock.

If you don't configure a trail, you can still view the most recent events in the CloudTrail console in **Event history**.

Using the information collected by CloudTrail, you can determine the request that was made to Amazon Bedrock, the IP address from which the request was made, who made the request, when it was made, and additional details.

To learn more about CloudTrail, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html).

## Amazon Bedrock information in CloudTrail
<a name="service-name-info-in-cloudtrail"></a>

CloudTrail is enabled on your AWS account when you create the account. When activity occurs in Amazon Bedrock, that activity is recorded in a CloudTrail event along with other AWS service events in **Event history**. You can view, search, and download recent events in your AWS account. For more information, see [Viewing events with CloudTrail Event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html).

For an ongoing record of events in your AWS account, including events for Amazon Bedrock, create a trail. A *trail* enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the following:
+ [Overview for creating a trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
+ [CloudTrail supported services and integrations](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html)
+ [Configuring Amazon SNS notifications for CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/configure-sns-notifications-for-cloudtrail.html)
+ [Receiving CloudTrail log files from multiple Regions](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html) and [Receiving CloudTrail log files from multiple accounts](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html)

Every event or log entry contains information about who generated the request. The identity information helps you determine the following:
+ Whether the request was made with root or AWS Identity and Access Management (IAM) user credentials.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another AWS service.

For more information, see the [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html).

## Amazon Bedrock data events in CloudTrail
<a name="service-name-data-events-cloudtrail"></a>

[Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) provide information about the resource operations performed on or in a resource (for example, reading or writing to an Amazon S3 object). These are also known as data plane operations. Data events are often high-volume activities that CloudTrail doesn’t log by default.

Amazon Bedrock logs some [Amazon Bedrock Runtime API operations](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Amazon_Bedrock_Runtime.html) (such as `InvokeModel`, `InvokeModelWithResponseStream`, `Converse`, `ConverseStream`, and `ListAsyncInvokes`) as [management events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html#logging-management-events).

Amazon Bedrock logs other [Amazon Bedrock Runtime API operations](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Amazon_Bedrock_Runtime.html) (such as `InvokeModelWithBidirectionalStream`, `GetAsyncInvoke`, and `StartAsyncInvoke`) as data events.

Amazon Bedrock logs all [Agents for Amazon Bedrock Runtime API operations](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Agents_for_Amazon_Bedrock_Runtime.html) (such as `InvokeAgent` and `InvokeInlineAgent`) actions to CloudTrail as data events.
+ To log [InvokeAgent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeAgent.html) calls, configure advanced event selectors to record data events for the `AWS::Bedrock::AgentAlias` resource type.
+ To log [InvokeInlineAgent](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeInlineAgent.html) calls, configure advanced event selectors to record data events for the `AWS::Bedrock::InlineAgent` resource type.
+ To log [InvokeModelWithBidirectionalStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithBidirectionalStream.html) calls, configure advanced event selectors to record data events for the `AWS::Bedrock::Model` and `AWS::Bedrock::AsyncInvoke` resource types.
+ To log [GetAsyncInvoke](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GetAsyncInvoke.html) and [StartAsyncInvoke](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_StartAsyncInvoke.html) calls, configure advanced event selectors to record data events for the `AWS::Bedrock::Model` and `AWS::Bedrock::AsyncInvoke` resource types.
+ To log [Retrieve](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_Retrieve.html) and [RetrieveAndGenerate](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_RetrieveAndGenerate.html) calls, configure advanced event selectors to record data events for the `AWS::Bedrock::KnowledgeBase` resource type.
+ To log [InvokeFlow](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent-runtime_InvokeFlow.html) calls, configure advanced event selectors to record data events for the `AWS::Bedrock::FlowAlias` resource type.
+ To log `RenderPrompt` calls, configure advanced event selectors to record data events for the `AWS::Bedrock::Prompt` resource type. `RenderPrompt` is a permission-only [action](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonbedrock.html#amazonbedrock-actions-as-permissions) that renders prompts, created using [Prompt management](prompt-management.md), for model invocation (`InvokeModel(WithResponseStream)` and `Converse(Stream)`).

From the CloudTrail console, choose **Bedrock agent alias** or **Bedrock knowledge base** for the **Data event type**. You can additionally filter on the `eventName` and `resources.ARN` fields by choosing a custom log selector template. For more information, see [Logging data events with the AWS Management Console](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html).

From the AWS CLI, set the `resource.type` value equal to `AWS::Bedrock::AgentAlias`, `AWS::Bedrock::KnowledgeBase`, or `AWS::Bedrock::FlowAlias` and set the `eventCategory` equal to `Data`. For more information, see [Logging data events with the AWS CLI](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#creating-data-event-selectors-with-the-AWS-CLI).

The following example shows how to configure a trail to log all Amazon Bedrock data events for all Amazon Bedrock resource types in the AWS CLI.

```
aws cloudtrail put-event-selectors --trail-name trailName \
--advanced-event-selectors \
'[
  {
    "Name": "Log all data events on an alias of an agent in Amazon Bedrock.",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Data"] },
      { "Field": "resources.type", "Equals": ["AWS::Bedrock::AgentAlias"] }
    ]
  },
  {
    "Name": "Log all data events on a knowledge base in Amazon Bedrock.",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Data"] },
      { "Field": "resources.type", "Equals": ["AWS::Bedrock::KnowledgeBase"] }
    ]
  },
  {
    "Name": "Log all data events on a flow in Amazon Bedrock.",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Data"] },
      { "Field": "resources.type", "Equals": ["AWS::Bedrock::FlowAlias"] }
    ]
  },
  {
    "Name": "Log all data events on a guardrail in Amazon Bedrock.",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["Data"] },
      { "Field": "resources.type", "Equals": ["AWS::Bedrock::Guardrail"] }
    ]
  }
]'
```

You can additionally filter on the `eventName` and `resources.ARN` fields. For more information about these fields, see [AdvancedFieldSelector](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html) in the *AWS CloudTrail API Reference*.
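As a hedged sketch, the following Python snippet builds an advanced event selector, in the JSON shape that `put-event-selectors` expects, that narrows knowledge base data events to `Retrieve` calls only. The selector name and the choice of `eventName` value are illustrative.

```python
import json

# Hedged sketch: one advanced event selector in the list format that
# `aws cloudtrail put-event-selectors --advanced-event-selectors` accepts.
selector = [{
    "Name": "Log only Retrieve data events on knowledge bases.",  # illustrative
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::Bedrock::KnowledgeBase"]},
        {"Field": "eventName", "Equals": ["Retrieve"]},
    ],
}]
print(json.dumps(selector, indent=2))
```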

Additional charges apply for data events. For more information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

## Amazon Bedrock management events in CloudTrail
<a name="bedrock-management-events-cloudtrail"></a>

[Management events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html#logging-management-events) provide information about management operations that are performed on resources in your AWS account. These are also known as control plane operations. CloudTrail logs management event API operations by default.

Amazon Bedrock logs [Amazon Bedrock Runtime API operations](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Amazon_Bedrock_Runtime.html) (`InvokeModel`, `InvokeModelWithResponseStream`, `Converse`, and `ConverseStream`) as [management events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html#logging-management-events).

Amazon Bedrock logs the remainder of Amazon Bedrock API operations as management events. For a list of the Amazon Bedrock API operations that Amazon Bedrock logs to CloudTrail, see the following pages in the Amazon Bedrock API reference.
+ [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Amazon_Bedrock.html). 
+ [Amazon Bedrock Agents](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Agents_for_Amazon_Bedrock.html). 
+ [Amazon Bedrock Agents Runtime](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Agents_for_Amazon_Bedrock_Runtime.html). 
+ [Amazon Bedrock Runtime](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Amazon_Bedrock_Runtime.html).

All [Amazon Bedrock API operations](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Amazon_Bedrock.html) and [Agents for Amazon Bedrock API operations](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Agents_for_Amazon_Bedrock.html) are logged by CloudTrail and documented in the [Amazon Bedrock API Reference](https://docs.aws.amazon.com/bedrock/latest/APIReference/). For example, calls to the `InvokeModel`, `StopModelCustomizationJob`, and `CreateAgent` actions generate entries in the CloudTrail log files.

[Amazon GuardDuty](https://aws.amazon.com/guardduty/) continuously monitors and analyzes your CloudTrail management events to detect potential security issues. When you enable Amazon GuardDuty for an AWS account, it automatically starts analyzing CloudTrail logs to detect suspicious activity involving Amazon Bedrock APIs, such as a user signing in from a new location and calling Amazon Bedrock APIs to remove Amazon Bedrock Guardrails or to change the Amazon S3 bucket used for model training data.

## Understanding Amazon Bedrock log file entries
<a name="understanding-bedrock-entries"></a>

A trail is a configuration that enables delivery of events as log files to an Amazon S3 bucket that you specify. CloudTrail log files contain one or more log entries. An event represents a single request from any source and includes information about the requested action, the date and time of the action, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they don't appear in any specific order. 

The following example shows a CloudTrail log entry that demonstrates the `InvokeModel` action.

```
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AROAICFHPEXAMPLE",
        "arn": "arn:aws:iam::111122223333:user/userxyz",
        "accountId": "111122223333",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "userName": "userxyz"
    },
    "eventTime": "2023-10-11T21:58:59Z",
    "eventSource": "bedrock.amazonaws.com",
    "eventName": "InvokeModel",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "192.0.2.0",
    "userAgent": "Boto3/1.28.62 md/Botocore#1.31.62 ua/2.0 os/macos#22.6.0 md/arch#arm64 lang/python#3.9.6 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.31.62",
    "requestParameters": {
        "modelId": "stability.stable-diffusion-xl-v0"
    },
    "responseElements": null,
    "requestID": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "eventID": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "111122223333",
    "eventCategory": "Management",
    "tlsDetails": {
        "tlsVersion": "TLSv1.2",
        "cipherSuite": "cipher suite",
        "clientProvidedHostHeader": "bedrock-runtime.us-west-2.amazonaws.com"
    }
}
```
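When you process delivered log files programmatically, entries like the one above are plain JSON records that you can parse and summarize. The following sketch is illustrative only: the helper name and the summary fields chosen are ours, not part of any AWS SDK, and a real log file delivered to Amazon S3 wraps entries in a top-level `Records` array.

```python
import json

def summarize_bedrock_event(record: dict) -> dict:
    """Extract a few fields of interest from a parsed CloudTrail record."""
    return {
        "eventName": record.get("eventName"),
        "eventTime": record.get("eventTime"),
        "modelId": (record.get("requestParameters") or {}).get("modelId"),
        "sourceIPAddress": record.get("sourceIPAddress"),
    }

# A minimal record shaped like the example log entry above.
entry = json.loads("""
{
    "eventTime": "2023-10-11T21:58:59Z",
    "eventName": "InvokeModel",
    "sourceIPAddress": "192.0.2.0",
    "requestParameters": {"modelId": "stability.stable-diffusion-xl-v0"}
}
""")

summary = summarize_bedrock_event(entry)
print(summary["eventName"], summary["modelId"])
# InvokeModel stability.stable-diffusion-xl-v0
```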

# Tagging Amazon Bedrock resources
<a name="tagging"></a>

To help you manage your Amazon Bedrock resources, you can assign metadata to each resource as tags. A tag is a label that you assign to an AWS resource. Each tag consists of a key and a value.

Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or application. For best practices and restrictions on tagging, see [Tagging your AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html).

Tags help you to do the following:
+ Identify and organize your AWS resources. Many AWS resources support tagging, so you can assign the same tag to resources in different services to indicate that the resources are related.
+ Allocate costs. You activate tags on the AWS Billing and Cost Management dashboard. AWS uses the tags to categorize your costs and deliver a monthly cost allocation report to you. For more information, see [Use cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) in the *AWS Billing and Cost Management User Guide*.
+ Control access to your resources. You can use tags with Amazon Bedrock to create policies to control access to Amazon Bedrock resources. These policies can be attached to an IAM role or user to enable tag-based access control.

**Topics**
+ [Use the console](#tagging-console)
+ [Use the API](#tagging-api)

## Use the console
<a name="tagging-console"></a>

In the console, you can add, modify, or remove tags when you create or edit a supported resource.

## Use the API
<a name="tagging-api"></a>

To carry out tagging operations, you need the Amazon Resource Name (ARN) of the resource that you want to tag. Which set of tagging operations to use depends on the API with which the resource was created.

The following table summarizes the different use cases and the tagging operations to use for them:



| Use case | Resource created with [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Amazon_Bedrock.html) API operation | Resource created with [Amazon Bedrock Agents](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_Operations_Agents_for_Amazon_Bedrock.html) API operation | Resource created with Amazon Bedrock Data Automation API | 
| --- | --- | --- | --- | 
| Tag a resource | Make a [TagResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_TagResource.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). | Make a [TagResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_TagResource.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). | Make a TagResource request with an Amazon Bedrock Data Automation build-time endpoint. |
| Untag a resource | Make an [UntagResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_UntagResource.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). | Make an [UntagResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_UntagResource.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). | Make an UntagResource request with an Amazon Bedrock Data Automation build-time endpoint. |
| List tags for a resource | Make a [ListTagsForResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListTagsForResource.html) request with an [Amazon Bedrock control plane endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#br-cp). | Make a [ListTagsForResource](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_agent_ListTagsForResource.html) request with an [Agents for Amazon Bedrock build-time endpoint](https://docs.aws.amazon.com/general/latest/gr/bedrock.html#bra-bt). | Make a ListTagsForResource request with an Amazon Bedrock Data Automation build-time endpoint. |

**Note**  
When viewing these operations in CloudTrail, you can identify the specific resource being tagged by checking the request parameters in the event details.
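These tagging operations expect tags as a list of key/value objects rather than a flat mapping. As a small illustrative sketch (the helper function name is ours, not part of any AWS SDK), you can convert a plain Python dictionary into that shape before calling a tagging operation:

```python
def to_bedrock_tags(tags: dict) -> list:
    """Convert {"department": "billing"} into the list-of-objects
    shape that Amazon Bedrock tagging operations expect."""
    return [{"key": k, "value": v} for k, v in tags.items()]

print(to_bedrock_tags({"department": "billing", "facing": "internal"}))
# [{'key': 'department', 'value': 'billing'}, {'key': 'facing', 'value': 'internal'}]
```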

Choose a tab to see code examples in an interface or language.

------
#### [ AWS CLI ]

Add two tags to an agent. Separate key/value pairs with a space.

```
aws bedrock-agent tag-resource \
    --resource-arn "arn:aws:bedrock:us-east-1:123456789012:agent/AGENT12345" \
    --tags key=department,value=billing key=facing,value=internal
```

Remove the tags from the agent. Separate keys with a space.

```
aws bedrock-agent untag-resource \
    --resource-arn "arn:aws:bedrock:us-east-1:123456789012:agent/AGENT12345" \
    --tag-keys department facing
```

List the tags for the agent.

```
aws bedrock-agent list-tags-for-resource \
    --resource-arn "arn:aws:bedrock:us-east-1:123456789012:agent/AGENT12345"
```

------
#### [ Python (Boto) ]

Add two tags to an agent.

```
import boto3

bedrock = boto3.client(service_name='bedrock-agent')

tags = [
    {
        'key': 'department',
        'value': 'billing'
    },
    {
        'key': 'facing',
        'value': 'internal'
    }
]

bedrock.tag_resource(resourceArn='arn:aws:bedrock:us-east-1:123456789012:agent/AGENT12345', tags=tags)
```

Remove the tags from the agent.

```
bedrock.untag_resource(
    resourceArn='arn:aws:bedrock:us-east-1:123456789012:agent/AGENT12345', 
    tagKeys=['department', 'facing']
)
```

List the tags for the agent.

```
bedrock.list_tags_for_resource(resourceArn='arn:aws:bedrock:us-east-1:123456789012:agent/AGENT12345')
```

------