

# Monitoring events, logs, and streams in an Amazon RDS DB instance
<a name="CHAP_Monitor_Logs_Events"></a>

When you monitor your Amazon RDS databases and your other AWS solutions, your goal is to maintain the following:
+ Reliability
+ Availability
+ Performance
+ Security

[Monitoring metrics in an Amazon RDS instance](CHAP_Monitoring.md) explains how to monitor your instance using metrics. A complete solution must also monitor database events, log files, and activity streams. AWS provides you with the following monitoring tools:
+ *Amazon EventBridge* is a serverless event bus service that makes it easy to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications, Software-as-a-Service (SaaS) applications, and AWS services. EventBridge routes that data to targets such as AWS Lambda. This way, you can monitor events that happen in services and build event-driven architectures. For more information, see the [Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/).
+ *Amazon CloudWatch Logs* provides a way to monitor, store, and access your log files from Amazon RDS instances, AWS CloudTrail, and other sources. Amazon CloudWatch Logs can monitor information in the log files and notify you when certain thresholds are met. You can also archive your log data in highly durable storage. For more information, see the [Amazon CloudWatch Logs User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/).
+ *AWS CloudTrail* captures API calls and related events made by or on behalf of your AWS account. CloudTrail delivers the log files to an Amazon S3 bucket that you specify. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred. For more information, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/).
+ *Database Activity Streams* is an Amazon RDS feature that provides a near real-time stream of the activity in your DB instance. Amazon RDS pushes activities to an Amazon Kinesis data stream. The Kinesis stream is created automatically. From Kinesis, you can configure AWS services such as Amazon Data Firehose and AWS Lambda to consume the stream and store the data.

**Topics**
+ [Viewing logs, events, and streams in the Amazon RDS console](logs-events-streams-console.md)
+ [Monitoring Amazon RDS events](working-with-events.md)
+ [Monitoring Amazon RDS log files](USER_LogAccess.md)
+ [Monitoring Amazon RDS API calls in AWS CloudTrail](logging-using-cloudtrail.md)
+ [Monitoring Amazon RDS with Database Activity Streams](DBActivityStreams.md)
+ [Monitoring threats with Amazon GuardDuty RDS Protection](guard-duty-rds-protection.md)

# Viewing logs, events, and streams in the Amazon RDS console
<a name="logs-events-streams-console"></a>

Amazon RDS integrates with AWS services to show information about logs, events, and database activity streams in the RDS console.

The **Logs & events** tab for your RDS DB instance shows the following information:
+ **Amazon CloudWatch alarms** – Shows any metric alarms that you have configured for the DB instance. If you haven't configured alarms, you can create them in the RDS console. For more information, see [Monitoring Amazon RDS metrics with Amazon CloudWatch](monitoring-cloudwatch.md).
+ **Recent events** – Shows a summary of events (environment changes) for your RDS DB instance. For more information, see [Viewing Amazon RDS events](USER_ListEvents.md).
+ **Logs** – Shows database log files generated by a DB instance. For more information, see [Monitoring Amazon RDS log files](USER_LogAccess.md).

The **Configuration** tab displays information about database activity streams.

**To view logs, events, and streams for your DB instance in the RDS console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that you want to monitor.

   The database page appears. The following example shows an Oracle database named `orclb`.  
![\[Database page with monitoring tab shown\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/oracle-with-monitoring-tab.png)

1. Choose **Logs & events**.

   The Logs & events section appears.  
![\[Database page with Logs & events tab shown\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/oracle-logs-and-events-subpage.png)

1. Choose **Configuration**.

   The following example shows the status of the database activity streams for your DB instance.  
![\[Enhanced Monitoring\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/oracle-das.png)

# Monitoring Amazon RDS events
<a name="working-with-events"></a>

An *event* indicates a change in an environment. This can be an AWS environment, a SaaS partner service or application, or a custom application or service. For descriptions of the RDS events, see [Amazon RDS event categories and event messages](USER_Events.Messages.md).

**Topics**
+ [Overview of events for Amazon RDS](#rds-cloudwatch-events.sample)
+ [Viewing Amazon RDS events](USER_ListEvents.md)
+ [Working with Amazon RDS event notification](USER_Events.md)
+ [Creating a rule that triggers on an Amazon RDS event](rds-cloud-watch-events.md)
+ [Amazon RDS event categories and event messages](USER_Events.Messages.md)

## Overview of events for Amazon RDS
<a name="rds-cloudwatch-events.sample"></a>

An *RDS event* indicates a change in the Amazon RDS environment. For example, Amazon RDS generates an event when the state of a DB instance changes from pending to running. Amazon RDS delivers events to EventBridge in near-real time.

**Note**  
Amazon RDS emits events on a best effort basis. We recommend that you avoid writing programs that depend on the order or existence of notification events, because they might be out of sequence or missing.
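Because delivery is best effort, a consumer can defensively sort and deduplicate events before acting on them. The following sketch (plain Python over the `DescribeEvents` response shape shown later in this guide; the deduplication key is an assumption) illustrates the idea:

```python
def normalize_events(events):
    """Sort RDS events by timestamp and drop exact duplicates.

    The (SourceIdentifier, Message, Date) key is an assumption; choose
    whatever combination is unique enough for your workload.
    """
    seen = set()
    ordered = []
    # ISO 8601 timestamps in a uniform format sort correctly as strings.
    for event in sorted(events, key=lambda e: e["Date"]):
        key = (event.get("SourceIdentifier"), event.get("Message"), event["Date"])
        if key not in seen:
            seen.add(key)
            ordered.append(event)
    return ordered
```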

Amazon RDS records events that relate to the following resources:
+ DB instances

  For a list of DB instance events, see [DB instance events](USER_Events.Messages.md#USER_Events.Messages.instance).
+ DB parameter groups

  For a list of DB parameter group events, see [DB parameter group events](USER_Events.Messages.md#USER_Events.Messages.parameter-group).
+ DB security groups

  For a list of DB security group events, see [DB security group events](USER_Events.Messages.md#USER_Events.Messages.security-group).
+ DB snapshots

  For a list of DB snapshot events, see [DB snapshot events](USER_Events.Messages.md#USER_Events.Messages.snapshot).
+ RDS Proxy events

  For a list of RDS Proxy events, see [RDS Proxy events](USER_Events.Messages.md#USER_Events.Messages.rds-proxy).
+ Blue/green deployment events

  For a list of blue/green deployment events, see [Blue/green deployment events](USER_Events.Messages.md#USER_Events.Messages.BlueGreenDeployments).

This information includes the following: 
+ The date and time of the event
+ The source name and source type of the event
+ A message associated with the event

Event notifications include the tags from when the message was sent, which might not reflect the tags at the time when the event occurred.
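The date, source, and message fields map directly onto keys in the `DescribeEvents` response shown later in this guide. A minimal formatting sketch:

```python
def summarize_event(event):
    """Build a one-line summary from the standard RDS event fields."""
    return "{Date} [{SourceType}] {SourceIdentifier}: {Message}".format(**{
        k: event[k] for k in ("Date", "SourceType", "SourceIdentifier", "Message")
    })
```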

# Viewing Amazon RDS events
<a name="USER_ListEvents"></a>

You can retrieve the following event information for your Amazon RDS resources:
+ Resource name
+ Resource type
+ Time of the event
+ Message summary of the event

You can access events in the following parts of the AWS Management Console:
+ The **Events** tab, which shows events from the past 24 hours.
+ The **Recent events** table in the **Logs & events** section in the **Databases** tab, which can show events for up to the past 2 weeks.

You can also retrieve events by using the [describe-events](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html) AWS CLI command, or the [DescribeEvents](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeEvents.html) RDS API operation. If you use the AWS CLI or the RDS API to view events, you can retrieve events for up to the past 14 days. 

**Note**  
If you need to store events for longer periods of time, you can send Amazon RDS events to EventBridge. For more information, see [Creating a rule that triggers on an Amazon RDS event](rds-cloud-watch-events.md).

For descriptions of the Amazon RDS events, see [Amazon RDS event categories and event messages](USER_Events.Messages.md).

To access detailed information about events using AWS CloudTrail, including request parameters, see [CloudTrail events](logging-using-cloudtrail.md#service-name-info-in-cloudtrail.events).

## Console
<a name="USER_ListEvents.CON"></a>

**To view all Amazon RDS events for the past 24 hours**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Events**. 

   The available events appear in a list.

1. (Optional) Enter a search term to filter your results. 

   The following example shows a list of events filtered by the characters **stopped**.  
![\[List DB events\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ListEvents.png)

## AWS CLI
<a name="USER_ListEvents.CLI"></a>

To view all events generated in the last hour, call [describe-events](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html) with no parameters.

```
aws rds describe-events
```

The following sample output shows that a DB instance has been stopped.

```
{
    "Events": [
        {
            "EventCategories": [
                "notification"
            ], 
            "SourceType": "db-instance", 
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:testinst", 
            "Date": "2022-04-22T21:31:00.681Z", 
            "Message": "DB instance stopped", 
            "SourceIdentifier": "testinst"
        }
    ]
}
```
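You can also post-process the `describe-events` output locally. The following sketch (plain Python; it assumes you saved the JSON response shown above) selects events whose message mentions a search term:

```python
import json

def filter_events(payload, term):
    """Return events whose message contains the term (case-insensitive)."""
    return [e for e in payload["Events"] if term.lower() in e["Message"].lower()]

# Response shape from `aws rds describe-events`:
payload = json.loads("""
{
    "Events": [
        {"SourceType": "db-instance", "SourceIdentifier": "testinst",
         "Message": "DB instance stopped", "Date": "2022-04-22T21:31:00.681Z"}
    ]
}
""")
print([e["SourceIdentifier"] for e in filter_events(payload, "stopped")])
```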

To view all Amazon RDS events for the past 10080 minutes (7 days), call the [describe-events](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html) AWS CLI command and set the `--duration` parameter to `10080`.

```
aws rds describe-events --duration 10080
```

The following example shows the events in the specified time range for DB instance *test-instance*.

```
aws rds describe-events \
    --source-identifier test-instance \
    --source-type db-instance \
    --start-time 2022-03-13T22:00Z \
    --end-time 2022-03-13T23:59Z
```

The following sample output shows the status of a backup.

```
{
    "Events": [
        {
            "SourceType": "db-instance",
            "SourceIdentifier": "test-instance",
            "EventCategories": [
                "backup"
            ],
            "Message": "Backing up DB instance",
            "Date": "2022-03-13T23:09:23.983Z",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
        },
        {
            "SourceType": "db-instance",
            "SourceIdentifier": "test-instance",
            "EventCategories": [
                "backup"
            ],
            "Message": "Finished DB Instance backup",
            "Date": "2022-03-13T23:15:13.049Z",
            "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance"
        }
    ]
}
```
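Because the two backup events bracket the backup window, you can derive the backup duration from their timestamps. A sketch (plain Python; the message prefixes match the sample output above):

```python
from datetime import datetime

def backup_duration_seconds(events):
    """Seconds between the 'Backing up' and 'Finished' backup events."""
    def event_time(prefix):
        for event in events:
            if event["Message"].startswith(prefix):
                return datetime.strptime(event["Date"], "%Y-%m-%dT%H:%M:%S.%fZ")
        raise ValueError("no event starting with: " + prefix)
    return (event_time("Finished DB Instance backup")
            - event_time("Backing up DB instance")).total_seconds()
```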

## API
<a name="USER_ListEvents.API"></a>

You can view all Amazon RDS instance events for the past 14 days by calling the [DescribeEvents](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeEvents.html) RDS API operation and setting the `Duration` parameter to `20160`.
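`Duration` is expressed in minutes, so 14 days is 14 × 24 × 60 = 20160. A small helper makes the conversion explicit (plain Python; the parameter name matches the `DescribeEvents` request parameter above):

```python
def describe_events_params(days):
    """Build the DescribeEvents lookback parameter for a window in days."""
    if not 0 < days <= 14:
        raise ValueError("RDS retains events for at most 14 days")
    return {"Duration": days * 24 * 60}

print(describe_events_params(14))  # {'Duration': 20160}
```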

# Working with Amazon RDS event notification
<a name="USER_Events"></a>

Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon RDS event occurs. These notifications can be in any notification form supported by Amazon SNS for an AWS Region, such as an email, a text message, or a call to an HTTP endpoint. 

**Topics**
+ [Overview of Amazon RDS event notification](USER_Events.overview.md)
+ [Granting permissions to publish notifications to an Amazon SNS topic](USER_Events.GrantingPermissions.md)
+ [Subscribing to Amazon RDS event notification](USER_Events.Subscribing.md)
+ [Amazon RDS event notification tags and attributes](USER_Events.TagsAttributesForFiltering.md)
+ [Listing Amazon RDS event notification subscriptions](USER_Events.ListSubscription.md)
+ [Modifying an Amazon RDS event notification subscription](USER_Events.Modifying.md)
+ [Adding a source identifier to an Amazon RDS event notification subscription](USER_Events.AddingSource.md)
+ [Removing a source identifier from an Amazon RDS event notification subscription](USER_Events.RemovingSource.md)
+ [Listing the Amazon RDS event notification categories](USER_Events.ListingCategories.md)
+ [Deleting an Amazon RDS event notification subscription](USER_Events.Deleting.md)

# Overview of Amazon RDS event notification
<a name="USER_Events.overview"></a>

Amazon RDS groups events into categories that you can subscribe to so that you can be notified when an event in that category occurs.

**Topics**
+ [RDS resources eligible for event subscription](#USER_Events.overview.resources)
+ [Basic process for subscribing to Amazon RDS event notifications](#USER_Events.overview.process)
+ [Delivery of RDS event notifications](#USER_Events.overview.subscriptions)
+ [Billing for Amazon RDS event notifications](#USER_Events.overview.billing)
+ [Examples of Amazon RDS events using Amazon EventBridge](#events-examples)

## RDS resources eligible for event subscription
<a name="USER_Events.overview.resources"></a>

You can subscribe to an event category for the following resources:
+ DB instance
+ DB snapshot
+ DB parameter group
+ DB security group
+ RDS Proxy
+ Custom engine version

For example, if you subscribe to the backup category for a given DB instance, you're notified whenever a backup-related event occurs that affects the DB instance. If you subscribe to a configuration change category for a DB instance, you're notified when the DB instance is changed. You also receive notification when an event notification subscription changes.

You might want to create several different subscriptions. For example, you might create one subscription that receives all event notifications for all DB instances and another subscription that includes only critical events for a subset of the DB instances. For the second subscription, specify one or more DB instances in the filter.

## Basic process for subscribing to Amazon RDS event notifications
<a name="USER_Events.overview.process"></a>

The process for subscribing to Amazon RDS event notification is as follows:

1. You create an Amazon RDS event notification subscription by using the Amazon RDS console, AWS CLI, or API.

   Amazon RDS uses the ARN of an Amazon SNS topic to identify each subscription. The Amazon RDS console creates the ARN for you when you create the subscription. If you use the AWS CLI or API, you create the ARN by using the Amazon SNS console, the AWS CLI, or the Amazon SNS API.

1. Amazon RDS sends an approval email or SMS message to the addresses you submitted with your subscription.

1. You confirm your subscription by choosing the link in the notification you received.

1. The Amazon RDS console updates the **My Event Subscriptions** section with the status of your subscription.

1. Amazon RDS begins sending the notifications to the addresses that you provided when you created the subscription.

To learn about identity and access management when using Amazon SNS, see [Identity and access management in Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-authentication-and-access-control.html) in the *Amazon Simple Notification Service Developer Guide*.

You can use AWS Lambda to process event notifications from a DB instance. For more information, see [Using AWS Lambda with Amazon RDS](https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html) in the *AWS Lambda Developer Guide*.
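As a sketch of that pattern, an SNS-subscribed Lambda handler receives each notification in the standard SNS record envelope. The function below only extracts the message text; it is an illustration, not a complete integration:

```python
def handler(event, context=None):
    """Collect RDS notification messages from an SNS-triggered invocation."""
    # Each SNS delivery wraps the message in Records[*].Sns.Message.
    return [record["Sns"]["Message"] for record in event.get("Records", [])]
```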

## Delivery of RDS event notifications
<a name="USER_Events.overview.subscriptions"></a>

Amazon RDS sends notifications to the addresses that you provide when you create the subscription. The notification can include message attributes, which provide structured metadata about the message. For more information about message attributes, see [Amazon RDS event categories and event messages](USER_Events.Messages.md).

Event notifications might take up to five minutes to be delivered.

**Important**  
Amazon RDS doesn't guarantee the order of events sent in an event stream. The event order is subject to change.

When Amazon SNS sends a notification to a subscribed HTTP or HTTPS endpoint, the POST message sent to the endpoint has a message body that contains a JSON document. For more information, see [Amazon SNS message and JSON formats](https://docs.aws.amazon.com/sns/latest/dg/sns-message-and-json-formats.html) in the *Amazon Simple Notification Service Developer Guide*.

You can configure SNS to notify you with text messages. For more information, see [ Mobile text messaging (SMS)](https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-phone-number-as-subscriber.html) in the *Amazon Simple Notification Service Developer Guide*.

To turn off notifications without deleting a subscription, choose **No** for **Enabled** in the Amazon RDS console. Or you can set the `Enabled` parameter to `false` using the AWS CLI or Amazon RDS API.

## Billing for Amazon RDS event notifications
<a name="USER_Events.overview.billing"></a>

Billing for Amazon RDS event notification is through Amazon SNS. Amazon SNS fees apply when using event notification. For more information about Amazon SNS billing, see [ Amazon Simple Notification Service pricing](http://aws.amazon.com/sns/#pricing).

## Examples of Amazon RDS events using Amazon EventBridge
<a name="events-examples"></a>

The following examples illustrate different types of Amazon RDS events in JSON format. For a tutorial that shows you how to capture and view events in JSON format, see [Tutorial: Log DB instance state changes using Amazon EventBridge](rds-cloud-watch-events.md#log-rds-instance-state).

**Topics**
+ [Example of a DB instance event](#rds-cloudwatch-events.db-instances)
+ [Example of a DB parameter group event](#rds-cloudwatch-events.db-parameter-groups)
+ [Example of a DB snapshot event](#rds-cloudwatch-events.db-snapshots)

### Example of a DB instance event
<a name="rds-cloudwatch-events.db-instances"></a>

The following is an example of a DB instance event in JSON format. The event shows that RDS performed a multi-AZ failover for the instance named `my-db-instance`. The event ID is RDS-EVENT-0049.

```
{
  "version": "0",
  "id": "68f6e973-1a0c-d37b-f2f2-94a7f62ffd4e",
  "detail-type": "RDS DB Instance Event",
  "source": "aws.rds",
  "account": "123456789012",
  "time": "2018-09-27T22:36:43Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:rds:us-east-1:123456789012:db:my-db-instance"
  ],
  "detail": {
    "EventCategories": [
      "failover"
    ],
    "SourceType": "DB_INSTANCE",
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:db:my-db-instance",
    "Date": "2018-09-27T22:36:43.292Z",
    "Message": "A Multi-AZ failover has completed.",
    "SourceIdentifier": "my-db-instance",
    "EventID": "RDS-EVENT-0049"
  }
}
```
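In an EventBridge target, the RDS-specific fields arrive under `detail`. A sketch that recognizes the failover event above:

```python
def is_multi_az_failover(event):
    """True for the RDS Multi-AZ failover event (RDS-EVENT-0049)."""
    return (event.get("source") == "aws.rds"
            and event.get("detail", {}).get("EventID") == "RDS-EVENT-0049")
```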

### Example of a DB parameter group event
<a name="rds-cloudwatch-events.db-parameter-groups"></a>

The following is an example of a DB parameter group event in JSON format. The event shows that the parameter `time_zone` was updated in parameter group `my-db-param-group`. The event ID is RDS-EVENT-0037.

```
{
  "version": "0",
  "id": "844e2571-85d4-695f-b930-0153b71dcb42",
  "detail-type": "RDS DB Parameter Group Event",
  "source": "aws.rds",
  "account": "123456789012",
  "time": "2018-10-06T12:26:13Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:rds:us-east-1:123456789012:pg:my-db-param-group"
  ],
  "detail": {
    "EventCategories": [
      "configuration change"
    ],
    "SourceType": "DB_PARAM",
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:pg:my-db-param-group",
    "Date": "2018-10-06T12:26:13.882Z",
    "Message": "Updated parameter time_zone to UTC with apply method immediate",
    "SourceIdentifier": "my-db-param-group",
    "EventID": "RDS-EVENT-0037"
  }
}
```

### Example of a DB snapshot event
<a name="rds-cloudwatch-events.db-snapshots"></a>

The following is an example of a DB snapshot event in JSON format. The event shows the deletion of the snapshot named `my-db-snapshot`. The event ID is RDS-EVENT-0041.

```
{
  "version": "0",
  "id": "844e2571-85d4-695f-b930-0153b71dcb42",
  "detail-type": "RDS DB Snapshot Event",
  "source": "aws.rds",
  "account": "123456789012",
  "time": "2018-10-06T12:26:13Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:rds:us-east-1:123456789012:snapshot:rds:my-db-snapshot"
  ],
  "detail": {
    "EventCategories": [
      "deletion"
    ],
    "SourceType": "SNAPSHOT",
    "SourceArn": "arn:aws:rds:us-east-1:123456789012:snapshot:rds:my-db-snapshot",
    "Date": "2018-10-06T12:26:13.882Z",
    "Message": "Deleted manual snapshot",
    "SourceIdentifier": "my-db-snapshot",
    "EventID": "RDS-EVENT-0041"
  }
}
```
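All three examples share the same envelope, so a single consumer can route on `detail-type`. A sketch (the handler return values are placeholders):

```python
def route_rds_event(event):
    """Route an RDS EventBridge event by its detail-type."""
    handlers = {
        "RDS DB Instance Event": lambda d: ("instance", d["SourceIdentifier"]),
        "RDS DB Parameter Group Event": lambda d: ("parameter-group", d["SourceIdentifier"]),
        "RDS DB Snapshot Event": lambda d: ("snapshot", d["SourceIdentifier"]),
    }
    handler = handlers.get(event.get("detail-type"))
    return handler(event["detail"]) if handler else ("unhandled", None)
```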

# Granting permissions to publish notifications to an Amazon SNS topic
<a name="USER_Events.GrantingPermissions"></a>

To grant Amazon RDS permissions to publish notifications to an Amazon Simple Notification Service (Amazon SNS) topic, attach an AWS Identity and Access Management (IAM) policy to the destination topic. For more information about permissions, see [ Example cases for Amazon Simple Notification Service access control](https://docs.aws.amazon.com/sns/latest/dg/sns-access-policy-use-cases.html) in the *Amazon Simple Notification Service Developer Guide*.

By default, an Amazon SNS topic has a policy allowing all Amazon RDS resources within the same account to publish notifications to it. You can attach a custom policy to allow cross-account notifications, or to restrict access to certain resources.

The following is an example of an IAM policy that you attach to the destination Amazon SNS topic. It restricts the topic to DB instances with names that match the specified prefix. To use this policy, specify the following values:
+ `Resource` – The Amazon Resource Name (ARN) for your Amazon SNS topic
+ `SourceARN` – Your RDS resource ARN
+ `SourceAccount` – Your AWS account ID

To see a list of resource types and their ARNs, see [Resources Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-resources-for-iam-policies) in the *Service Authorization Reference*.

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.rds.amazonaws.com"
      },
      "Action": [
        "sns:Publish"
      ],
      "Resource": "arn:aws:sns:us-east-1:123456789012:topic_name",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"
        },
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}
```
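
If you attach the policy programmatically, only three values vary. The following sketch builds the same document (plain Python; the argument values in the usage check are the placeholders from the example above):

```python
import json

def rds_sns_publish_policy(topic_arn, source_arn_pattern, account_id):
    """Build an SNS topic policy that lets Amazon RDS publish notifications."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "events.rds.amazonaws.com"},
            "Action": ["sns:Publish"],
            "Resource": topic_arn,
            "Condition": {
                "ArnLike": {"aws:SourceArn": source_arn_pattern},
                "StringEquals": {"aws:SourceAccount": account_id},
            },
        }],
    }
```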

------

# Subscribing to Amazon RDS event notification
<a name="USER_Events.Subscribing"></a>

The simplest way to create a subscription is with the RDS console. If you choose to create event notification subscriptions using the CLI or API, you must create an Amazon Simple Notification Service (Amazon SNS) topic and subscribe to that topic with the Amazon SNS console or Amazon SNS API. You also need to retain the topic's Amazon Resource Name (ARN), because you supply it when submitting CLI commands or API operations. For information on creating an SNS topic and subscribing to it, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/GettingStarted.html) in the *Amazon Simple Notification Service Developer Guide*.

You can specify the type of source you want to be notified of and the Amazon RDS source that triggers the event:

**Source type**  
The type of source. For example, **Source type** might be **Instances**. You must choose a source type.

***Resources* to include**  
The Amazon RDS resources that are generating the events. For example, you might choose **Select specific instances** and then **myDBInstance1**. 

The following table explains the result when you specify or don't specify ***Resources* to include**.


|  Resources to include  |  Description  |  Example  | 
| --- | --- | --- | 
|  Specified  |  RDS notifies you about all events for the specified resource only.  |  If your **Source type** is **Instances** and your resource is **myDBInstance1**, RDS notifies you about all events for **myDBInstance1** only.  | 
|  Not specified  |  RDS notifies you about the events for the specified source type for all your Amazon RDS resources.   |  If your **Source type** is **Instances**, RDS notifies you about all instance-related events in your account.  | 

An Amazon SNS topic subscriber receives every message published to the topic by default. To receive only a subset of the messages, the subscriber must assign a filter policy to the topic subscription. For more information about SNS message filtering, see [Amazon SNS message filtering](https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html) in the *Amazon Simple Notification Service Developer Guide*.
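Conceptually, a filter policy maps attribute names to allowed values, and SNS delivers only messages whose attributes match. The local sketch below illustrates exact-match string filtering against the `EventID` message attribute described later in this guide (it mirrors a subset of SNS semantics, not the SNS implementation itself):

```python
def matches_filter_policy(policy, attributes):
    """True if every policy key has an attribute value in its allowed list."""
    return all(attributes.get(key) in allowed for key, allowed in policy.items())
```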

## Console
<a name="USER_Events.Subscribing.Console"></a>

**To subscribe to RDS event notification**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Event subscriptions**. 

1. In the **Event subscriptions** pane, choose **Create event subscription**. 

1. Enter your subscription details as follows:

   1. For **Name**, enter a name for the event notification subscription.

   1. For **Send notifications to**, do one of the following:
      + Choose **New email topic**. Enter a name for your email topic and a list of recipients. We recommend that you use the same email address as your primary account contact for your event subscriptions. Recommendations, service events, and personal health messages are sent through different channels; subscribing the same email address consolidates all the messages in one location.
      + Choose **Amazon Resource Name (ARN)**. Then choose the ARN of an existing Amazon SNS topic.

        If you want to use a topic that has been enabled for server-side encryption (SSE), grant Amazon RDS the necessary permissions to access the AWS KMS key. For more information, see [ Enable compatibility between event sources from AWS services and encrypted topics](https://docs.aws.amazon.com/sns/latest/dg/sns-key-management.html#compatibility-with-aws-services) in the *Amazon Simple Notification Service Developer Guide*.

   1. For **Source type**, choose a source type. For example, choose **Instances** or **Parameter groups**.

   1. Choose the event categories and resources that you want to receive event notifications for.

      The following example configures event notifications for the DB instance named `testinst`.  
![\[Enter source type\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/event-source.png)

   1. Choose **Create**.

The Amazon RDS console indicates that the subscription is being created.

![\[List DB event notification subscriptions\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/EventNotification-Create2.png)


## AWS CLI
<a name="USER_Events.Subscribing.CLI"></a>

To subscribe to RDS event notification, use the AWS CLI [create-event-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/create-event-subscription.html) command. Include the following required parameters:
+ `--subscription-name`
+ `--sns-topic-arn`

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-event-subscription \
    --subscription-name myeventsubscription \
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:myawsuser-RDS \
    --enabled
```
For Windows:  

```
aws rds create-event-subscription ^
    --subscription-name myeventsubscription ^
    --sns-topic-arn arn:aws:sns:us-east-1:123456789012:myawsuser-RDS ^
    --enabled
```

## API
<a name="USER_Events.Subscribing.API"></a>

To subscribe to Amazon RDS event notification, call the Amazon RDS API operation [CreateEventSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateEventSubscription.html). Include the following required parameters: 
+ `SubscriptionName`
+ `SnsTopicArn`

# Amazon RDS event notification tags and attributes
<a name="USER_Events.TagsAttributesForFiltering"></a>

When Amazon RDS sends an event notification to Amazon Simple Notification Service (SNS) or Amazon EventBridge, the notification contains message attributes and event tags. RDS sends the message attributes separately, alongside the message, while the event tags are in the body of the message. Use the message attributes and the Amazon RDS tags to add metadata to your resources. You can modify these tags with your own notations about the DB instances. For more information about tagging Amazon RDS resources, see [Tagging Amazon RDS resources](USER_Tagging.md). 

By default, Amazon SNS and Amazon EventBridge receive every message sent to them. SNS and EventBridge can filter the messages and send notifications to the preferred communication mode, such as an email, a text message, or a call to an HTTP endpoint.

**Note**  
The notification sent in an email or a text message doesn't include event tags.

The following table shows the message attributes for RDS events sent to the topic subscriber.


| Amazon RDS event attribute |  Description  | 
| --- | --- | 
| EventID |  Identifier for the RDS event message, for example, RDS-EVENT-0006.  | 
| Resource |  The ARN identifier for the resource emitting the event, for example, `arn:aws:rds:ap-southeast-2:123456789012:db:database-1`.  | 

The RDS tags provide data about the resource that was affected by the service event. RDS adds the current state of the tags in the message body when the notification is sent to SNS or EventBridge.

For more information about filtering message attributes for SNS, see [Amazon SNS message filtering](https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html) in the *Amazon Simple Notification Service Developer Guide*.

For more information about filtering event tags for EventBridge, see [ Comparison operators for use in event patterns in Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns-content-based-filtering.html) in the *Amazon EventBridge User Guide*.

For more information about filtering payload-based tags for SNS, see [Introducing payload-based message filtering for Amazon SNS](https://aws.amazon.com/blogs/compute/introducing-payload-based-message-filtering-for-amazon-sns/).

# Listing Amazon RDS event notification subscriptions
<a name="USER_Events.ListSubscription"></a>

You can list your current Amazon RDS event notification subscriptions.

## Console
<a name="USER_Events.ListSubscription.Console"></a>

**To list your current Amazon RDS event notification subscriptions**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Event subscriptions**. The **Event subscriptions** pane shows all your event notification subscriptions.  
![\[List DB event notification subscriptions\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/EventNotification-ListSubs.png)

   

## AWS CLI
<a name="USER_Events.ListSubscription.CLI"></a>

To list your current Amazon RDS event notification subscriptions, use the AWS CLI [describe-event-subscriptions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-event-subscriptions.html) command. 

**Example**  
The following example describes all event subscriptions.  

```
aws rds describe-event-subscriptions
```
The following example describes the event subscription named `myfirsteventsubscription`.  

```
aws rds describe-event-subscriptions --subscription-name myfirsteventsubscription
```
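The command returns JSON. The following Python sketch parses an abbreviated, illustrative sample of `describe-event-subscriptions` output to list subscription names; the field names follow the Amazon RDS API reference, but the values here are made up:

```python
import json

# Abbreviated, illustrative describe-event-subscriptions output.
# Field names follow the Amazon RDS API reference; values are made up.
sample_output = json.loads("""
{
    "EventSubscriptionsList": [
        {
            "CustSubscriptionId": "myfirsteventsubscription",
            "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:mytopic",
            "SourceType": "db-instance",
            "Enabled": true
        }
    ]
}
""")

names = [sub["CustSubscriptionId"] for sub in sample_output["EventSubscriptionsList"]]
print(names)  # ['myfirsteventsubscription']
```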

## API
<a name="USER_Events.ListSubscription.API"></a>

To list your current Amazon RDS event notification subscriptions, call the Amazon RDS API [DescribeEventSubscriptions](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeEventSubscriptions.html) operation.

# Modifying an Amazon RDS event notification subscription
<a name="USER_Events.Modifying"></a>

After you have created a subscription, you can change the subscription name, source identifier, categories, or topic ARN.

## Console
<a name="USER_Events.Modifying.Console"></a>

**To modify an Amazon RDS event notification subscription**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Event subscriptions**. 

1.  In the **Event subscriptions** pane, choose the subscription that you want to modify and choose **Edit**. 

1.  Make your changes to the subscription in either the **Target** or **Source** section.

1. Choose **Edit**. The Amazon RDS console indicates that the subscription is being modified.  
![\[List DB event notification subscriptions\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/EventNotification-Modify2.png)

   

## AWS CLI
<a name="USER_Events.Modifying.CLI"></a>

To modify an Amazon RDS event notification subscription, use the AWS CLI [modify-event-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-event-subscription.html) command. Include the following required parameter:
+ `--subscription-name`

**Example**  
The following command enables the `myeventsubscription` subscription.  
For Linux, macOS, or Unix:  

```
aws rds modify-event-subscription \
    --subscription-name myeventsubscription \
    --enabled
```
For Windows:  

```
aws rds modify-event-subscription ^
    --subscription-name myeventsubscription ^
    --enabled
```

## API
<a name="USER_Events.Modifying.API"></a>

To modify an Amazon RDS event notification subscription, call the Amazon RDS API [ModifyEventSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyEventSubscription.html) operation. Include the following required parameter:
+ `SubscriptionName`

# Adding a source identifier to an Amazon RDS event notification subscription
<a name="USER_Events.AddingSource"></a>

You can add a source identifier (the Amazon RDS source generating the event) to an existing subscription.

## Console
<a name="USER_Events.AddingSource.Console"></a>

You can easily add or remove source identifiers using the Amazon RDS console by selecting or deselecting them when modifying a subscription. For more information, see [Modifying an Amazon RDS event notification subscription](USER_Events.Modifying.md).

## AWS CLI
<a name="USER_Events.AddingSource.CLI"></a>

To add a source identifier to an Amazon RDS event notification subscription, use the AWS CLI [add-source-identifier-to-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/add-source-identifier-to-subscription.html) command. Include the following required parameters:
+ `--subscription-name`
+ `--source-identifier`

**Example**  
The following example adds the source identifier `mysqldb` to the `myrdseventsubscription` subscription.  
For Linux, macOS, or Unix:  

```
aws rds add-source-identifier-to-subscription \
    --subscription-name myrdseventsubscription \
    --source-identifier mysqldb
```
For Windows:  

```
aws rds add-source-identifier-to-subscription ^
    --subscription-name myrdseventsubscription ^
    --source-identifier mysqldb
```

## API
<a name="USER_Events.AddingSource.API"></a>

To add a source identifier to an Amazon RDS event notification subscription, call the Amazon RDS API [AddSourceIdentifierToSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_AddSourceIdentifierToSubscription.html) operation. Include the following required parameters:
+ `SubscriptionName`
+ `SourceIdentifier`

# Removing a source identifier from an Amazon RDS event notification subscription
<a name="USER_Events.RemovingSource"></a>

You can remove a source identifier (the Amazon RDS source generating the event) from a subscription if you no longer want to be notified of events for that source. 

## Console
<a name="USER_Events.RemovingSource.Console"></a>

You can easily add or remove source identifiers using the Amazon RDS console by selecting or deselecting them when modifying a subscription. For more information, see [Modifying an Amazon RDS event notification subscription](USER_Events.Modifying.md).

## AWS CLI
<a name="USER_Events.RemovingSource.CLI"></a>

To remove a source identifier from an Amazon RDS event notification subscription, use the AWS CLI [remove-source-identifier-from-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/remove-source-identifier-from-subscription.html) command. Include the following required parameters:
+ `--subscription-name`
+ `--source-identifier`

**Example**  
The following example removes the source identifier `mysqldb` from the `myrdseventsubscription` subscription.  
For Linux, macOS, or Unix:  

```
aws rds remove-source-identifier-from-subscription \
    --subscription-name myrdseventsubscription \
    --source-identifier mysqldb
```
For Windows:  

```
aws rds remove-source-identifier-from-subscription ^
    --subscription-name myrdseventsubscription ^
    --source-identifier mysqldb
```

## API
<a name="USER_Events.RemovingSource.API"></a>

To remove a source identifier from an Amazon RDS event notification subscription, call the Amazon RDS API [RemoveSourceIdentifierFromSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RemoveSourceIdentifierFromSubscription.html) operation. Include the following required parameters:
+ `SubscriptionName`
+ `SourceIdentifier`

# Listing the Amazon RDS event notification categories
<a name="USER_Events.ListingCategories"></a>

All events for a resource type are grouped into categories. To view the list of categories available, use the following procedures.

## Console
<a name="USER_Events.ListingCategories.Console"></a>

When you create or modify an event notification subscription, the event categories are displayed in the Amazon RDS console. For more information, see [Modifying an Amazon RDS event notification subscription](USER_Events.Modifying.md). 

![\[List DB event notification categories\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/EventNotification-Categories.png)




## AWS CLI
<a name="USER_Events.ListingCategories.CLI"></a>

To list the Amazon RDS event notification categories, use the AWS CLI [describe-event-categories](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-event-categories.html) command. This command has no required parameters.

**Example**  

```
aws rds describe-event-categories
```
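The output maps each source type to its event categories. The following Python sketch walks an abbreviated, illustrative sample of `describe-event-categories` output; the field names follow the Amazon RDS API reference, but the category lists here are truncated examples:

```python
import json

# Abbreviated, illustrative describe-event-categories output.
# Field names follow the Amazon RDS API reference; category lists are truncated.
sample = json.loads("""
{
    "EventCategoriesMapList": [
        {"SourceType": "db-instance",
         "EventCategories": ["availability", "backup", "configuration change"]},
        {"SourceType": "db-cluster",
         "EventCategories": ["failover", "maintenance"]}
    ]
}
""")

# Print each source type with its categories, one line per source type.
for mapping in sample["EventCategoriesMapList"]:
    print(f'{mapping["SourceType"]}: {", ".join(mapping["EventCategories"])}')
```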

## API
<a name="USER_Events.ListingCategories.API"></a>

To list the Amazon RDS event notification categories, use the Amazon RDS API [DescribeEventCategories](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeEventCategories.html) operation. This operation has no required parameters.

# Deleting an Amazon RDS event notification subscription
<a name="USER_Events.Deleting"></a>

You can delete a subscription when you no longer need it. All subscribers to the topic stop receiving the event notifications specified by the subscription.

## Console
<a name="USER_Events.Deleting.Console"></a>

**To delete an Amazon RDS event notification subscription**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Event subscriptions**. 

1.  In the **Event subscriptions** pane, choose the subscription that you want to delete. 

1. Choose **Delete**.

1. The Amazon RDS console indicates that the subscription is being deleted.  
![\[Delete an event notification subscription\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/EventNotification-Delete.png)

   

## AWS CLI
<a name="USER_Events.Deleting.CLI"></a>

To delete an Amazon RDS event notification subscription, use the AWS CLI [delete-event-subscription](https://docs.aws.amazon.com/cli/latest/reference/rds/delete-event-subscription.html) command. Include the following required parameter:
+ `--subscription-name`

**Example**  
The following example deletes the subscription `myrdssubscription`.  

```
aws rds delete-event-subscription --subscription-name myrdssubscription
```

## API
<a name="USER_Events.Deleting.API"></a>

To delete an Amazon RDS event notification subscription, call the RDS API [DeleteEventSubscription](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DeleteEventSubscription.html) operation. Include the following required parameter:
+ `SubscriptionName`

# Creating a rule that triggers on an Amazon RDS event
<a name="rds-cloud-watch-events"></a>

Using Amazon EventBridge, you can automate AWS services and respond to system events such as application availability issues or resource changes. 

**Topics**
+ [Creating rules to send Amazon RDS events to Amazon EventBridge](#rds-cloudwatch-events.sending-to-cloudwatch-events)
+ [Tutorial: Log DB instance state changes using Amazon EventBridge](#log-rds-instance-state)

## Creating rules to send Amazon RDS events to Amazon EventBridge
<a name="rds-cloudwatch-events.sending-to-cloudwatch-events"></a>

You can write simple rules to indicate which Amazon RDS events interest you and which automated actions to take when an event matches a rule. You can set a variety of targets, such as an AWS Lambda function or an Amazon SNS topic, which receive events in JSON format. For example, you can configure Amazon RDS to send events to Amazon EventBridge whenever a DB instance is created or deleted. For more information, see the [Amazon CloudWatch Events User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/) and the [Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/).
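As a mental model for how an event pattern selects RDS events, the following Python sketch implements only the exact-match subset of EventBridge pattern semantics (real patterns also support prefix, numeric, anything-but, and other operators) and applies a hypothetical pattern to a simplified RDS event:

```python
# Simplified sketch of EventBridge exact-match pattern semantics: a pattern
# matches when, for every key, the event's value appears in the pattern's list
# of allowed values; nested dicts recurse. Real EventBridge patterns support
# more operators than shown here.
def pattern_matches(pattern, event):
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not isinstance(event[key], dict) or not pattern_matches(expected, event[key]):
                return False
        else:  # a list of allowed literal values
            if event[key] not in expected:
                return False
    return True

# Hypothetical rule pattern: match any RDS DB instance event.
rds_instance_pattern = {
    "source": ["aws.rds"],
    "detail-type": ["RDS DB Instance Event"],
}

# Simplified RDS event; field names mirror the sample events in this guide.
sample_event = {
    "source": "aws.rds",
    "detail-type": "RDS DB Instance Event",
    "detail": {"SourceType": "DB_INSTANCE", "EventID": "RDS-EVENT-0087"},
}
print(pattern_matches(rds_instance_pattern, sample_event))  # True
```

Keys absent from the pattern (such as `detail` here) are ignored, which is why a two-key pattern can match a much larger event.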

**To create a rule that triggers on an RDS event:**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. Under **Events** in the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. For **Event Source**, do the following:

   1. Choose **Event Pattern**.

   1. For **Service Name**, choose **Relational Database Service (RDS)**.

   1. For **Event Type**, choose the type of Amazon RDS resource that triggers the event. For example, if a DB instance triggers the event, choose **RDS DB Instance Event**.

1. For **Targets**, choose **Add Target** and choose the AWS service that is to act when an event of the selected type is detected. 

1. In the other fields in this section, enter information specific to this target type, if any is needed. 

1. For many target types, EventBridge needs permissions to send events to the target. In these cases, EventBridge can create the IAM role needed for your event to run: 
   + To create an IAM role automatically, choose **Create a new role for this specific resource**.
   + To use an IAM role that you created before, choose **Use existing role**.

1. Optionally, repeat steps 5-7 to add another target for this rule.

1. Choose **Configure details**. For **Rule definition**, type a name and description for the rule.

   The rule name must be unique within this Region.

1. Choose **Create rule**.

For more information, see [Creating an EventBridge Rule That Triggers on an Event](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html) in the *Amazon CloudWatch User Guide*.

## Tutorial: Log DB instance state changes using Amazon EventBridge
<a name="log-rds-instance-state"></a>

In this tutorial, you create an AWS Lambda function that logs the state changes for an Amazon RDS instance. You then create a rule that runs the function whenever there is a state change of an existing RDS DB instance. The tutorial assumes that you have a small running test instance that you can shut down temporarily.

**Important**  
Don't perform this tutorial on a running production DB instance.

**Topics**
+ [Step 1: Create an AWS Lambda function](#rds-create-lambda-function)
+ [Step 2: Create a rule](#rds-create-rule)
+ [Step 3: Test the rule](#rds-test-rule)

### Step 1: Create an AWS Lambda function
<a name="rds-create-lambda-function"></a>

Create a Lambda function to log the state change events. You specify this function when you create your rule.

**To create a Lambda function**

1. Open the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/).

1. If you're new to Lambda, you see a welcome page. Choose **Get Started Now**. Otherwise, choose **Create function**.

1. Choose **Author from scratch**.

1. On the **Create function** page, do the following:

   1. Enter a name and description for the Lambda function. For example, name the function **RDSInstanceStateChange**. 

   1. In **Runtime**, select **Node.js 16.x**. 

   1. For **Architecture**, choose **x86\_64**.

   1. For **Execution role**, do either of the following:
      + Choose **Create a new role with basic Lambda permissions**.
      + For **Existing role**, choose **Use an existing role**. Choose the role that you want to use. 

   1. Choose **Create function**.

1. On the **RDSInstanceStateChange** page, do the following:

   1. In **Code source**, select **index.js**. 

   1. In the **index.js** pane, delete the existing code.

   1. Enter the following code:

      ```
      console.log('Loading function');
      
      exports.handler = async (event, context) => {
          console.log('Received event:', JSON.stringify(event));
      };
      ```

   1. Choose **Deploy**.

### Step 2: Create a rule
<a name="rds-create-rule"></a>

Create a rule to run your Lambda function whenever an Amazon RDS instance changes state.

**To create the EventBridge rule**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. Enter a name and description for the rule. For example, enter **RDSInstanceStateChangeRule**.

1. Choose **Rule with an event pattern**, and then choose **Next**.

1. For **Event source**, choose **AWS events or EventBridge partner events**.

1. Scroll down to the **Event pattern** section.

1. For **Event source**, choose **AWS services**.

1. For **AWS service**, choose **Relational Database Service (RDS)**.

1. For **Event type**, choose **RDS DB Instance Event**.

1. Leave the default event pattern. Then choose **Next**.

1. For **Target types**, choose **AWS service**.

1. For **Select a target**, choose **Lambda function**.

1. For **Function**, choose the Lambda function that you created. Then choose **Next**.

1. In **Configure tags**, choose **Next**.

1. Review the steps in your rule. Then choose **Create rule**.

### Step 3: Test the rule
<a name="rds-test-rule"></a>

To test your rule, shut down an RDS DB instance. After waiting a few minutes for the instance to shut down, verify that your Lambda function was invoked.

**To test your rule by stopping a DB instance**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Stop an RDS DB instance.

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**, and then choose the name of the rule that you created.

1. In **Rule details**, choose **Monitoring**.

   You are redirected to the Amazon CloudWatch console. If you aren't redirected, choose **View the metrics in CloudWatch**.

1. In **All metrics**, choose the name of the rule that you created.

   The graph should indicate that the rule was invoked.

1. In the navigation pane, choose **Log groups**.

1. Choose the name of the log group for your Lambda function (**/aws/lambda/*function-name***).

1. Choose the name of the log stream to view the data provided by the function for the instance that you stopped. You should see a received event similar to the following:

   ```
   {
       "version": "0",
       "id": "12a345b6-78c9-01d2-34e5-123f4ghi5j6k",
       "detail-type": "RDS DB Instance Event",
       "source": "aws.rds",
       "account": "111111111111",
       "time": "2021-03-19T19:34:09Z",
       "region": "us-east-1",
       "resources": [
           "arn:aws:rds:us-east-1:111111111111:db:testdb"
       ],
       "detail": {
           "EventCategories": [
               "notification"
           ],
           "SourceType": "DB_INSTANCE",
           "SourceArn": "arn:aws:rds:us-east-1:111111111111:db:testdb",
           "Date": "2021-03-19T19:34:09.293Z",
           "Message": "DB instance stopped",
           "SourceIdentifier": "testdb",
           "EventID": "RDS-EVENT-0087"
       }
   }
   ```

   For more examples of RDS events in JSON format, see [Overview of events for Amazon RDS](working-with-events.md#rds-cloudwatch-events.sample).

1. (Optional) When you're finished, you can open the Amazon RDS console and start the instance that you stopped.
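The event body that the Lambda function logs can be unpacked as shown in the following Python sketch, which extracts the fields a handler would typically report from an abbreviated copy of the sample event above (the values are illustrative):

```python
import json

# Abbreviated copy of the sample RDS event above; values are illustrative.
event = json.loads("""
{
    "version": "0",
    "detail-type": "RDS DB Instance Event",
    "source": "aws.rds",
    "region": "us-east-1",
    "detail": {
        "SourceType": "DB_INSTANCE",
        "Message": "DB instance stopped",
        "SourceIdentifier": "testdb",
        "EventID": "RDS-EVENT-0087"
    }
}
""")

# The "detail" object carries the RDS-specific fields.
detail = event["detail"]
print(f'{detail["EventID"]}: {detail["SourceIdentifier"]} - {detail["Message"]}')
# RDS-EVENT-0087: testdb - DB instance stopped
```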

# Amazon RDS event categories and event messages
<a name="USER_Events.Messages"></a>

Amazon RDS generates a significant number of events in categories that you can subscribe to using the Amazon RDS console, AWS CLI, or the API.

**Topics**
+ [DB cluster events](#USER_Events.Messages.cluster)
+ [DB cluster snapshot events](#USER_Events.Messages.cluster-snapshot)
+ [DB instance events](#USER_Events.Messages.instance)
+ [DB parameter group events](#USER_Events.Messages.parameter-group)
+ [DB security group events](#USER_Events.Messages.security-group)
+ [DB snapshot events](#USER_Events.Messages.snapshot)
+ [RDS Proxy events](#USER_Events.Messages.rds-proxy)
+ [Blue/green deployment events](#USER_Events.Messages.BlueGreenDeployments)
+ [Custom engine version events](#USER_Events.Messages.CEV)

## DB cluster events
<a name="USER_Events.Messages.cluster"></a>

The following table shows the event category and a list of events when a DB cluster is the source type.

For more information about Multi-AZ DB cluster deployments, see [Multi-AZ DB cluster deployments for Amazon RDS](multi-az-db-clusters-concepts.md).


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  configuration change  | RDS-EVENT-0016 |  Reset master credentials.  | None | 
| creation | RDS-EVENT-0170 |  DB cluster created.  |  None  | 
|  failover  | RDS-EVENT-0069 |  Cluster failover failed, check the health of your cluster instances and try again.  |  None  | 
|  failover  | RDS-EVENT-0070 |  Promoting previous primary again: *name*.  |  None  | 
|  failover  | RDS-EVENT-0071 |  Completed failover to DB instance: *name*.  |  None  | 
|  failover  | RDS-EVENT-0072 |  Started same AZ failover to DB instance: *name*.  |  None  | 
|  failover  | RDS-EVENT-0073 |  Started cross AZ failover to DB instance: *name*.  |  None  | 
| failure | RDS-EVENT-0354 |  You can't create the DB cluster because of incompatible resources. *message*.  |  The *message* includes details about the failure.  | 
| failure | RDS-EVENT-0355 |  The DB cluster can't be created because of insufficient resource limits. *message*.  |  The *message* includes details about the failure.  | 
|  maintenance  | RDS-EVENT-0156 |  The DB cluster has a DB engine minor version upgrade available.  |  None  | 
|  maintenance  | RDS-EVENT-0173 |  Database cluster engine version has been upgraded.  | Patching of the DB cluster has completed. | 
|  maintenance  | RDS-EVENT-0174 |  Database cluster is in a state that cannot be upgraded.  | None | 
|  maintenance  | RDS-EVENT-0176 |  Database cluster engine major version has been upgraded.  | None | 
|  maintenance  | RDS-EVENT-0177 |  Database cluster upgrade is in progress.  | None | 
|  maintenance  | RDS-EVENT-0286 |  Database cluster engine *version\_number* version upgrade started. Cluster remains online.  | None | 
|  maintenance  | RDS-EVENT-0287 |  Operating system upgrade requirement detected.  | None | 
|  maintenance  | RDS-EVENT-0288 |  Cluster operating system upgrade starting.  | None | 
|  maintenance  | RDS-EVENT-0289 |  Cluster operating system upgrade completed.  | None | 
|  maintenance  | RDS-EVENT-0290 |  Database cluster has been patched: source version *version\_number* => *new\_version\_number*.  | None | 
|  maintenance  | RDS-EVENT-0410 |  The pre-check started for the database cluster engine version upgrade.  | None | 
|  maintenance  | RDS-EVENT-0412 |  The pre-check for the database cluster engine version upgrade failed or timed out.  | None | 
|  maintenance  | RDS-EVENT-0413 |  The DB cluster pre-upgrade tasks are in progress.  | None | 
|  maintenance  | RDS-EVENT-0414 |  The DB cluster post-upgrade tasks are in progress.  | None | 
|  maintenance  | RDS-EVENT-0417 |  Database cluster engine version upgrade started.  | None | 
|  notification  | RDS-EVENT-0172 |  Renamed cluster from *name* to *name*.  |  None  | 
|  read replica  | RDS-EVENT-0411 |  The pre-check finished for the database cluster engine version upgrade.  | None | 

## DB cluster snapshot events
<a name="USER_Events.Messages.cluster-snapshot"></a>

The following table shows the event category and a list of events when a DB cluster snapshot is the source type.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.Messages.html)

## DB instance events
<a name="USER_Events.Messages.instance"></a>

The following table shows the event category and a list of events when a DB instance is the source type.


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  availability  | RDS-EVENT-0004 |  DB instance shutdown.  | None | 
|  availability  | RDS-EVENT-0006 |  DB instance restarted.  | None | 
|  availability  | RDS-EVENT-0022 |  Error restarting mysql: *message*.  | An error has occurred while restarting MySQL. | 
|  availability  | RDS-EVENT-0221 |  DB instance has reached the storage-full threshold, and the database has been shut down. You can increase the allocated storage to address this issue.  | None | 
|  availability  | RDS-EVENT-0222 |  Free storage capacity for DB instance *name* is low at *percentage* of the allocated storage [Allocated storage: *amount*, Free storage: *amount*]. The database will be shut down to prevent corruption if free storage is lower than *amount*. You can increase the allocated storage to address this issue.  | Applies only to RDS for MySQL when a DB instance consumes more than 90% of the allocated storage. Monitor the storage space for a DB instance using the Free Storage Space metric. For more information, see [Amazon RDS DB instance storage](CHAP_Storage.md). | 
|  availability  | RDS-EVENT-0330 |  The free storage capacity of the dedicated transaction log volume is too low for DB instance *name*. The log volume free storage is *percentage* of the allocated storage. [Allocated storage: *amount*, Free storage: *amount*] The database will be shut down to prevent corruption if the free storage is lower than *amount*. You can disable the dedicated transaction log volume to resolve this issue.  |  For more information, see [Dedicated log volume (DLV)](CHAP_Storage.md#CHAP_Storage.dlv).  | 
|  availability  | RDS-EVENT-0331 |  The free storage capacity of the dedicated transaction log volume is too low for DB instance *name*. The log volume free storage is *percentage* of the provisioned storage. [Provisioned Storage: *amount*, Free Storage: *amount*] You can disable the dedicated transaction log volume to resolve this issue.  |  For more information, see [Dedicated log volume (DLV)](CHAP_Storage.md#CHAP_Storage.dlv).  | 
|  availability  | RDS-EVENT-0396 |  Amazon RDS has scheduled a reboot for this read replica in this instance's next maintenance window after internal user password rotation.  |  None  | 
| availability | RDS-EVENT-0419 | Amazon RDS has been unable to access the KMS encryption key for database instance *name*. This database will be placed into an inaccessible state. Please refer to the troubleshooting section in the Amazon RDS documentation for further details. | None | 
|  backup  | RDS-EVENT-0001 |  Backing up DB instance.  | None | 
|  backup  | RDS-EVENT-0002 |  Finished DB instance backup.  | None | 
|  backup  | RDS-EVENT-0086 |  We are unable to associate the option group *name* with the database instance *name*. Confirm that option group *name* is supported on your DB instance class and configuration. If so, verify all option group settings and retry.  |  For more information see [Working with option groups](USER_WorkingWithOptionGroups.md). | 
|  configuration change  | RDS-EVENT-0011 |  Updated to use DBParameterGroup *name*.  | None | 
|  configuration change  | RDS-EVENT-0012 |  Applying modification to database instance class.   | None | 
|  configuration change  | RDS-EVENT-0014 |  Finished applying modification to DB instance class.  | None | 
|  configuration change  | RDS-EVENT-0016 |  Reset master credentials.  | None | 
|  configuration change  | RDS-EVENT-0017 |  Finished applying modification to allocated storage.  | None | 
|  configuration change  | RDS-EVENT-0018 |  Applying modification to allocated storage.  | None | 
|  configuration change  | RDS-EVENT-0024 |  Applying modification to convert to a Multi-AZ DB instance.  | None | 
|  configuration change  | RDS-EVENT-0025 |  Finished applying modification to convert to a Multi-AZ DB instance.  | None | 
|  configuration change  | RDS-EVENT-0028 |  Disabled automated backups.  | None | 
|  configuration change  | RDS-EVENT-0029 |  Finished applying modification to convert to a standard (Single-AZ) DB instance.  | None | 
|  configuration change  | RDS-EVENT-0030 |  Applying modification to convert to a standard (Single-AZ) DB instance.  | None | 
|  configuration change  | RDS-EVENT-0032 |  Enabled automated backups.  | None | 
|  configuration change  | RDS-EVENT-0033 |  There are *number* users matching the master username; only resetting the one not tied to a specific host.  | None | 
|  configuration change  | RDS-EVENT-0067 |  Unable to reset your password. Error information: *message*.  | None | 
|  configuration change  | RDS-EVENT-0078 |  Monitoring Interval changed to *number*.  |  The Enhanced Monitoring configuration has been changed. | 
|  configuration change  | RDS-EVENT-0092 |  Finished updating DB parameter group.  | None | 
|  configuration change  | RDS-EVENT-0217 |  Applying autoscaling-initiated modification to allocated storage.  | None | 
|  configuration change  | RDS-EVENT-0218 |  Finished applying autoscaling-initiated modification to allocated storage.  | None | 
|  configuration change  | RDS-EVENT-0295 |  Storage configuration upgrade started.  | None | 
|  configuration change  | RDS-EVENT-0296 |  Storage configuration upgrade completed.  | None | 
|  configuration change  | RDS-EVENT-0332 |  The dedicated log volume is disabled.  |  For more information, see [Dedicated log volume (DLV)](CHAP_Storage.md#CHAP_Storage.dlv).  | 
|  configuration change  | RDS-EVENT-0333 |  Disabling the dedicated log volume has started.  |  For more information, see [Dedicated log volume (DLV)](CHAP_Storage.md#CHAP_Storage.dlv).  | 
|  configuration change  | RDS-EVENT-0334 |  Enabling the dedicated log volume has started.  |  For more information, see [Dedicated log volume (DLV)](CHAP_Storage.md#CHAP_Storage.dlv).  | 
|  configuration change  | RDS-EVENT-0335 |  The dedicated log volume is enabled.  |  For more information, see [Dedicated log volume (DLV)](CHAP_Storage.md#CHAP_Storage.dlv).  | 
|  configuration change  | RDS-EVENT-0383 |  *engine version* doesn't support the memcached plugin. RDS will continue upgrading your DB instance and remove this plugin.  |  Starting with MySQL 8.3.0, the memcached plugin isn't supported. For more information, see [Changes in MySQL 8.3.0 (2024-01-16, Innovation Release)](https://dev.mysql.com/doc/relnotes/mysql/8.3/en/news-8-3-0.html).  | 
|  creation  | RDS-EVENT-0005 |  DB instance created.  | None | 
|  deletion  | RDS-EVENT-0003 |  DB instance deleted.  | None | 
|  failover  | RDS-EVENT-0013 |  Multi-AZ instance failover started.  | A Multi-AZ failover that resulted in the promotion of a standby DB instance has started. | 
|  failover  | RDS-EVENT-0015 |  Multi-AZ failover to standby complete - DNS propagation may take a few minutes.  | A Multi-AZ failover that resulted in the promotion of a standby DB instance is complete. It may take several minutes for the DNS to transfer to the new primary DB instance. | 
|  failover  | RDS-EVENT-0034 |  Abandoning user requested failover since a failover recently occurred on the database instance.  | Amazon RDS isn't attempting a requested failover because a failover recently occurred on the DB instance. | 
|  failover  | RDS-EVENT-0049 | Multi-AZ instance failover completed. | None | 
|  failover  | RDS-EVENT-0050 |  Multi-AZ instance activation started.  | A Multi-AZ activation has started after a successful DB instance recovery. This event occurs if Amazon RDS promotes the primary DB instance to the same AZ as the previous primary DB instance. | 
|  failover  | RDS-EVENT-0051 |  Multi-AZ instance activation completed.  | A Multi-AZ activation is complete. Your database should be accessible now.  | 
|  failover  | RDS-EVENT-0065 |  Recovered from partial failover.  | None | 
|  failure  | RDS-EVENT-0031 |  DB instance put into *name* state. RDS recommends that you initiate a point-in-time-restore.  | The DB instance has failed due to an incompatible configuration or an underlying storage issue. Begin a point-in-time restore for the DB instance. | 
|  failure  | RDS-EVENT-0035 |  Database instance put into *state*. *message*.  | The DB instance has invalid parameters. For example, if the DB instance could not start because a memory-related parameter is set too high for this instance class, your action would be to modify the memory parameter and reboot the DB instance. | 
|  failure  | RDS-EVENT-0036 |  Database instance in *state*. *message*.  | The DB instance is in an incompatible network. Some of the specified subnet IDs are invalid or do not exist. | 
|  failure  | RDS-EVENT-0058 |  The Statspack installation failed. *message*.  | Error while creating Oracle Statspack user account `PERFSTAT`. Drop the account before you add the `STATSPACK` option. | 
|  failure  | RDS-EVENT-0079 |  Amazon RDS has been unable to create credentials for enhanced monitoring and this feature has been disabled. This is likely due to the rds-monitoring-role not being present and configured correctly in your account. Please refer to the troubleshooting section in the Amazon RDS documentation for further details.  |  Enhanced Monitoring can't be enabled without the Enhanced Monitoring IAM role. For information about creating the IAM role, see [To create an IAM role for Amazon RDS enhanced monitoring](USER_Monitoring.OS.Enabling.md#USER_Monitoring.OS.IAMRole).  | 
|  failure  | RDS-EVENT-0080 |  Amazon RDS has been unable to configure enhanced monitoring on your instance: *name* and this feature has been disabled. This is likely due to the rds-monitoring-role not being present and configured correctly in your account. Please refer to the troubleshooting section in the Amazon RDS documentation for further details.  |  Enhanced Monitoring was disabled because an error occurred during the configuration change. It is likely that the Enhanced Monitoring IAM role is configured incorrectly. For information about creating the enhanced monitoring IAM role, see [To create an IAM role for Amazon RDS enhanced monitoring](USER_Monitoring.OS.Enabling.md#USER_Monitoring.OS.IAMRole).  | 
|  failure  | RDS-EVENT-0081 |  Amazon RDS has been unable to create credentials for *name* option. This is due to the *name* IAM role not being configured correctly in your account. Please refer to the troubleshooting section in the Amazon RDS documentation for further details.  |  The IAM role that you use to access your Amazon S3 bucket for SQL Server native backup and restore is configured incorrectly. For more information, see [Setting up for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md).  | 
|  failure  | RDS-EVENT-0165 |  The RDS Custom DB instance is outside the support perimeter.  |  It's your responsibility to fix configuration issues that put your RDS Custom DB instance into the `unsupported-configuration` state. If the issue is with the AWS infrastructure, you can use the console or the AWS CLI to fix it. If the issue is with the operating system or the database configuration, you can log in to the host to fix it. For more information, see [RDS Custom support perimeter](custom-concept.md#custom-troubleshooting.support-perimeter). | 
|  failure  | RDS-EVENT-0188 |  The DB instance is in a state that can't be upgraded. *message*  |  Amazon RDS was unable to upgrade a MySQL DB instance because of incompatibilities related to the data dictionary. The DB instance was rolled back to MySQL version 5.7 because an attempted upgrade to version 8.0 failed, or rolled back to MySQL version 8.0 because an attempted upgrade to version 8.4 failed. For more information, see [Rollback after failure to upgrade](USER_UpgradeDBInstance.MySQL.Major.md#USER_UpgradeDBInstance.MySQL.Major.RollbackAfterFailure).  | 
|  failure  | RDS-EVENT-0219 |  DB instance is in an invalid state. No actions are necessary. Autoscaling will retry later.  | None | 
|  failure  | RDS-EVENT-0220 |  DB instance is in the cooling-off period for a previous scale storage operation. We're optimizing your DB instance. This takes at least 6 hours. No actions are necessary. Autoscaling will retry after the cooling-off period.  | None | 
|  failure  | RDS-EVENT-0223 |  Storage autoscaling is unable to scale the storage for the reason: *reason*.  | None | 
|  failure  | RDS-EVENT-0224 |  Storage autoscaling has triggered a pending scale storage task that will reach or exceed the maximum storage threshold. Increase the maximum storage threshold.  | None | 
|  failure  | RDS-EVENT-0237 |  DB instance has a storage type that's currently unavailable in the Availability Zone. Autoscaling will retry later.  | None | 
| failure | RDS-EVENT-0254 |  Underlying storage quota for this customer account has exceeded the limit. Please increase the allowed storage quota to let the scaling go through on the instance.  | None | 
|  failure  |  RDS-EVENT-0278  |  The DB instance creation failed. *message*  |  The *message* includes details about the failure.  | 
|  failure  |  RDS-EVENT-0279  |  The promotion of the RDS Custom read replica failed. *message*  |  The *message* includes details about the failure.  | 
|  failure  |  RDS-EVENT-0280  |  RDS Custom couldn't upgrade the DB instance because the pre-check failed. *message*  |  The *message* includes details about the failure.  | 
|  failure  |  RDS-EVENT-0281  |  RDS Custom couldn't modify the DB instance because the pre-check failed. *message*  |  The *message* includes details about the failure.  | 
|  failure  |  RDS-EVENT-0282  |  RDS Custom couldn't modify the DB instance because the Elastic IP permissions aren't correct. Please confirm the Elastic IP address is tagged with `AWSRDSCustom`.  |  None  | 
|  failure  |  RDS-EVENT-0283  |  RDS Custom couldn't modify the DB instance because the Elastic IP limit has been reached in your account. Release unused Elastic IPs or request a quota increase for your Elastic IP address limit.  |  None  | 
|  failure  |  RDS-EVENT-0284  |  RDS Custom couldn't convert the instance to high availability because the pre-check failed. *message*  |  The *message* includes details about the failure.  | 
|  failure  |  RDS-EVENT-0285  |  RDS Custom couldn't create a final snapshot for the DB instance because *message*.  |  The *message* includes details about the failure.  | 
|  failure  |  RDS-EVENT-0421  |  RDS Custom couldn't convert the DB instance to a Multi-AZ deployment: *message*. The instance will remain a Single-AZ deployment. See the RDS User Guide for information about Multi-AZ deployments for RDS Custom for Oracle.  |  The *message* includes details about the failure.  | 
|  failure  | RDS-EVENT-0306 |  Storage configuration upgrade failed. Please retry the upgrade.  | None | 
|  failure  | RDS-EVENT-0315 |  Unable to move incompatible-network database, *name*, to the available status: *message*  |  The database networking configuration is invalid. The database could not be moved from incompatible-network to available.  | 
| failure | RDS-EVENT-0328 |  Failed to join a host to a domain. Domain membership status for instance *instancename* has been set to Failed.  | None | 
| failure | RDS-EVENT-0329 |  Failed to join a host to your domain. During the domain join process, Microsoft Windows returned the error code *message*. Verify your network and permission configurations and issue a `modify-db-instance` request to re-attempt the domain join.  | When using a self-managed Active Directory, see [Troubleshooting self-managed Active Directory](USER_SQLServer_SelfManagedActiveDirectory.TroubleshootingSelfManagedActiveDirectory.md). | 
| failure | RDS-EVENT-0353 |  The DB instance can't be created because of insufficient resource limits. *message*.  |  The *message* includes details about the failure.  | 
| failure | RDS-EVENT-0356 |  RDS was unable to configure the Kerberos endpoint in your domain. This might prevent Kerberos authentication for your DB instance. Verify the network configuration between your DB instance and domain controllers.  | None | 
| failure | RDS-EVENT-0418 | Amazon RDS is unable to access the KMS encryption key for database instance *name*. This is likely due to the key being disabled or Amazon RDS being unable to access it. If this continues the database will be placed into an inaccessible state. Please refer to the troubleshooting section in the Amazon RDS documentation for further details. | None | 
| failure | RDS-EVENT-0420 | Amazon RDS can now successfully access the KMS encryption key for database instance *name*. | None | 
|  low storage  | RDS-EVENT-0007 |  Allocated storage has been exhausted. Allocate additional storage to resolve.  |  The allocated storage for the DB instance has been consumed. To resolve this issue, allocate additional storage for the DB instance. For more information, see the [RDS FAQ](https://aws.amazon.com/rds/faqs). You can monitor the storage space for a DB instance using the **Free Storage Space** metric.  | 
|  low storage  | RDS-EVENT-0089 |  The free storage capacity for DB instance: *name* is low at *percentage* of the provisioned storage [Provisioned Storage: *size*, Free Storage: *size*]. You may want to increase the provisioned storage to address this issue.  |  The DB instance has consumed more than 90% of its allocated storage. You can monitor the storage space for a DB instance using the **Free Storage Space** metric.  | 
|  low storage  | RDS-EVENT-0227 |  Your Aurora cluster's storage is dangerously low with only *amount* terabytes remaining. Please take measures to reduce the storage load on your cluster.  |  The Aurora storage subsystem is running low on space.  | 
|  maintenance  | RDS-EVENT-0026 |  Applying off-line patches to DB instance.  |  Offline maintenance of the DB instance is taking place. The DB instance is currently unavailable.  | 
|  maintenance  | RDS-EVENT-0027 |  Finished applying off-line patches to DB instance.  |  Offline maintenance of the DB instance is complete. The DB instance is now available.  | 
|  maintenance  | RDS-EVENT-0047 |  Database instance patched.  | None | 
|  maintenance  | RDS-EVENT-0155 |  The DB instance has a DB engine minor version upgrade available.  | None | 
|  maintenance  | RDS-EVENT-0178 |  Database instance upgrade is in progress.  | None | 
|  maintenance  | RDS-EVENT-0264 |  The pre-check started for the DB engine version upgrade.  | None | 
|  maintenance  | RDS-EVENT-0265 |  The pre-check finished for the DB engine version upgrade.  | None | 
|  maintenance  | RDS-EVENT-0266 |  The downtime started for the DB instance.  | None | 
|  maintenance  | RDS-EVENT-0267 |  The engine version upgrade started.  | None | 
|  maintenance  | RDS-EVENT-0268 |  The engine version upgrade finished. | None | 
|  maintenance  | RDS-EVENT-0269 |  The post-upgrade tasks are in progress. | None | 
|  maintenance  | RDS-EVENT-0270 |  The DB engine version upgrade failed. The engine version upgrade rollback succeeded. | None | 
|  maintenance  | RDS-EVENT-0398 |  Waiting for the DB engine version upgrade to finish on the primary DB instance. | Emitted on a read replica during a major engine version upgrade. | 
|  maintenance  | RDS-EVENT-0399 |  Waiting for the DB engine version upgrade to finish on the read replicas. | Emitted on source DB engine during a major engine version upgrade. | 
|  maintenance  | RDS-EVENT-0422 |  RDS will replace the host of DB instance *name* due to a pending maintenance action. | None | 
|  maintenance, failure  | RDS-EVENT-0195 |  *message*  |  The update of the Oracle time zone file failed. For more information, see [Oracle time zone file autoupgrade](Appendix.Oracle.Options.Timezone-file-autoupgrade.md).  | 
|  maintenance, notification  | RDS-EVENT-0191 |  A new version of the time zone file is available for update.  |  When you upgrade your RDS for Oracle DB engine, Amazon RDS generates this event if you haven't chosen a time zone file upgrade and the database doesn't use the latest DST time zone file available on the instance. For more information, see [Oracle time zone file autoupgrade](Appendix.Oracle.Options.Timezone-file-autoupgrade.md).  | 
|  maintenance, notification  | RDS-EVENT-0192 |  The update of your time zone file has started.  |  The upgrade of your Oracle time zone file has begun. For more information, see [Oracle time zone file autoupgrade](Appendix.Oracle.Options.Timezone-file-autoupgrade.md).  | 
|  maintenance, notification  | RDS-EVENT-0193 |  No update is available for the current time zone file version.  |  Your Oracle DB instance is using the latest time zone file version, and either of the following statements is true: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.Messages.html) For more information, see [Oracle time zone file autoupgrade](Appendix.Oracle.Options.Timezone-file-autoupgrade.md).  | 
|  maintenance, notification  | RDS-EVENT-0194 |  The update of your time zone file has finished.  |  The update of your Oracle time zone file has completed. For more information, see [Oracle time zone file autoupgrade](Appendix.Oracle.Options.Timezone-file-autoupgrade.md).  | 
|  notification  | RDS-EVENT-0044 |  *message*  | This is an operator-issued notification. For more information, see the event message. | 
|  notification  | RDS-EVENT-0048 |  Delaying database engine upgrade since this instance has read replicas that need to be upgraded first.  | Patching of the DB instance has been delayed. | 
|  notification  | RDS-EVENT-0054 |  *message*  | The MySQL storage engine you are using is not InnoDB, which is the recommended MySQL storage engine for Amazon RDS. For information about MySQL storage engines, see [Supported storage engines for RDS for MySQL](MySQL.Concepts.FeatureSupport.md#MySQL.Concepts.Storage). | 
|  notification  | RDS-EVENT-0055 |  *message*  |  The number of tables you have for your DB instance exceeds the recommended best practices for Amazon RDS. Reduce the number of tables on your DB instance. For information about recommended best practices, see [Amazon RDS basic operational guidelines](CHAP_BestPractices.md#CHAP_BestPractices.DiskPerformance).  | 
|  notification  | RDS-EVENT-0056 |  *message*  |  The number of databases you have for your DB instance exceeds the recommended best practices for Amazon RDS. Reduce the number of databases on your DB instance. For information about recommended best practices, see [Amazon RDS basic operational guidelines](CHAP_BestPractices.md#CHAP_BestPractices.DiskPerformance).  | 
|  notification  | RDS-EVENT-0064 |  The TDE encryption key was rotated successfully.  | For information about recommended best practices, see [Amazon RDS basic operational guidelines](CHAP_BestPractices.md#CHAP_BestPractices.DiskPerformance).  | 
|  notification  | RDS-EVENT-0084 |  Unable to convert the DB instance to Multi-AZ: *message*.  |  You attempted to convert a DB instance to Multi-AZ, but it contains in-memory file groups that are not supported for Multi-AZ. For more information, see [Multi-AZ deployments for Amazon RDS for Microsoft SQL Server](USER_SQLServerMultiAZ.md).   | 
|  notification  | RDS-EVENT-0087 |  DB instance stopped.   | None | 
|  notification  | RDS-EVENT-0088 |  DB instance started.  | None | 
|  notification  | RDS-EVENT-0154 |  DB instance is being started due to it exceeding the maximum allowed time being stopped.  | None | 
|  notification  | RDS-EVENT-0157 |  Unable to modify the DB instance class. *message*.  |  RDS can't modify the DB instance class because the target instance class can't support the number of databases that exist on the source DB instance. The error message appears as: "The instance has *N* databases, but after conversion it would only support *N*". For more information, see [Limitations for Microsoft SQL Server DB instances](CHAP_SQLServer.md#SQLServer.Concepts.General.FeatureSupport.Limits).  | 
|  notification  | RDS-EVENT-0158 |  Database instance is in a state that cannot be upgraded: *message*.  | None | 
|  notification  | RDS-EVENT-0167 |  *message*  |  The RDS Custom support perimeter configuration has changed.  | 
|  notification  | RDS-EVENT-0189 |  The gp2 burst balance credits for the RDS database instance are low. To resolve this issue, reduce IOPS usage or modify your storage settings to enable higher performance.  |  For more information, see [I/O credits and burst performance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#EBSVolumeTypes_gp2) in the *Amazon EC2 User Guide*.  | 
|  notification  | RDS-EVENT-0225 |  Allocated storage size *amount* GB is approaching the maximum storage threshold *amount* GB. Increase the maximum storage threshold.  |  This event is invoked when the allocated storage reaches 80% of the maximum storage threshold. To avoid the event, increase the maximum storage threshold.  | 
|  notification  | RDS-EVENT-0231 |  Your DB instance's storage modification encountered an internal error. The modification request is pending and will be retried later.  |  An internal error occurred during the storage modification. Amazon RDS keeps the modification request pending and retries it later.  | 
|  notification  | RDS-EVENT-0253 |  The database is using the doublewrite buffer. *message*. For more information see the RDS Optimized Writes for *name* documentation.  | RDS Optimized Writes is incompatible with the instance storage configuration. For more information, see [Improving write performance with RDS Optimized Writes for MySQL](rds-optimized-writes.md) and [Improving write performance with Amazon RDS Optimized Writes for MariaDB](rds-optimized-writes-mariadb.md). To enable RDS Optimized Writes, you can upgrade the storage configuration by [creating a blue/green deployment](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-creating.html). | 
|  notification  | RDS-EVENT-0297 |  The storage configuration for DB instance *name* supports a maximum size of 16384 GiB. Perform a storage configuration upgrade to support storage sizes greater than 16384 GiB.  | You cannot increase the allocated storage size of the DB instance beyond 16384 GiB. To overcome this limitation, perform a storage configuration upgrade. For more information, see [Upgrading the storage file system for a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.UpgradeFileSystem).  | 
|  notification  | RDS-EVENT-0298 |  The storage configuration for DB instance *name* supports a maximum table size of 2048 GiB. Perform a storage configuration upgrade to support table sizes greater than 2048 GiB.  | RDS MySQL and MariaDB instances with this limitation cannot have a table size exceeding 2048 GiB. To overcome this limitation, perform a storage configuration upgrade. For more information, see [Upgrading the storage file system for a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.UpgradeFileSystem).  | 
|  notification  | RDS-EVENT-0327 |  Amazon RDS could not find the secret *SECRET ARN*. *message.*  | None | 
|  notification  | RDS-EVENT-0365 |  Timezone files were updated. Restart your RDS instance for the changes to take effect.  | None | 
|  notification  | RDS-EVENT-0385 |  Cluster topology is updated.  |  There are DNS changes to the DB cluster for the DB instance. This includes when new DB instances are added or deleted, or there's a failover.  | 
|  notification  | RDS-EVENT-0403 |  A database workload is causing the system to run critically low on memory. To help mitigate the issue, RDS automatically set the value of `innodb_buffer_pool_size` to *amount*.  |  Applies only to RDS for MySQL and RDS for MariaDB DB instances.  | 
|  notification  | RDS-EVENT-0404 |  A database workload is causing the system to run critically low on memory. To help mitigate the issue, RDS automatically set the value of `shared_buffers` to *amount*.  |  Applies only to RDS for PostgreSQL DB instances.  | 
|  read replica  | RDS-EVENT-0045 |  Replication has stopped.  |  This message appears when there is an error during replication. To determine the type of error, see [ Troubleshooting a MySQL read replica problem](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.Troubleshooting.html).  | 
|  read replica  | RDS-EVENT-0046 |  Replication for the Read Replica resumed.  | This message appears when you first create a read replica, or as a monitoring message confirming that replication is functioning properly. If this message follows an `RDS-EVENT-0045` notification, then replication has resumed following an error or after replication was stopped. | 
|  read replica  | RDS-EVENT-0057 |  Replication streaming has been terminated.  | None | 
|  read replica  | RDS-EVENT-0062 |  Replication for the Read Replica has been manually stopped.  | None | 
|  read replica  | RDS-EVENT-0063 |  Replication from Non RDS instance has been reset.  | None | 
|  read replica  | RDS-EVENT-0202 |  Read replica creation failed.  | None | 
|  read replica  | RDS-EVENT-0233 |  The Switchover to the read replica started.  | None | 
|  read replica  | RDS-EVENT-0357 |  Replication channel *name* started.  | For information about replication channels, see [Configuring multi-source-replication for Amazon RDS for MySQL](mysql-multi-source-replication.md). | 
|  read replica  | RDS-EVENT-0358 |  Replication channel *name* stopped.  | For information about replication channels, see [Configuring multi-source-replication for Amazon RDS for MySQL](mysql-multi-source-replication.md). | 
|  read replica  | RDS-EVENT-0359 |  Replication channel *name* was manually stopped.  | For information about replication channels, see [Configuring multi-source-replication for Amazon RDS for MySQL](mysql-multi-source-replication.md). | 
|  read replica  | RDS-EVENT-0360 |  Replication channel *name* was reset.  | For information about replication channels, see [Configuring multi-source-replication for Amazon RDS for MySQL](mysql-multi-source-replication.md). | 
|  read replica  | RDS-EVENT-0415 |  The upgrade process resumed replication on the read replica.  | None | 
|  read replica  | RDS-EVENT-0416 |  The upgrade process stopped replication on the read replica.  | None | 
|  recovery  | RDS-EVENT-0020 |  Recovery of the DB instance has started. Recovery time will vary with the amount of data to be recovered.  | None | 
|  recovery  | RDS-EVENT-0021 |  Recovery of the DB instance is complete.  | None | 
|  recovery  | RDS-EVENT-0023 |  Emergent Snapshot Request: *message*.  |  A manual backup has been requested but Amazon RDS is currently in the process of creating a DB snapshot. Submit the request again after Amazon RDS has completed the DB snapshot.  | 
|  recovery  | RDS-EVENT-0052 |  Multi-AZ instance recovery started.  | Recovery time will vary with the amount of data to be recovered. | 
|  recovery  | RDS-EVENT-0053 |  Multi-AZ instance recovery completed. Pending failover or activation.  | This message indicates that Amazon RDS has prepared your DB instance to initiate a failover to the secondary instance if necessary. | 
|  recovery  | RDS-EVENT-0066 |  Instance will be degraded while mirroring is reestablished: *message*.  |  The SQL Server DB instance is re-establishing its mirror. Performance will be degraded until the mirror is reestablished. A database was found with a non-FULL recovery model. The recovery model was changed back to FULL and mirroring recovery was started. (*dbname*: *recovery model found*[,...])  | 
|  recovery  | RDS-EVENT-0166 |  *message*  |  The RDS Custom DB instance is inside the support perimeter.  | 
|  recovery  | RDS-EVENT-0361 |  Recovery of standby DB instance has started.  |  The standby DB instance is rebuilt during the recovery process. Database performance is impacted during the recovery process.  | 
|  recovery  | RDS-EVENT-0362 |  Recovery of standby DB instance has completed.  |  The standby DB instance is rebuilt during the recovery process. Database performance is impacted during the recovery process.  | 
|  restoration  | RDS-EVENT-0019 |  Restored from DB instance *name* to *name*.  |  The DB instance has been restored from a point-in-time backup.  | 
|  security  | RDS-EVENT-0068 |  Decrypting hsm partition password to update instance.  |  RDS is decrypting the AWS CloudHSM partition password to make updates to the DB instance. For more information see [Oracle Database Transparent Data Encryption (TDE) with AWS CloudHSM](https://docs.aws.amazon.com/cloudhsm/latest/userguide/oracle-tde.html) in the *AWS CloudHSM User Guide*.  | 
|  security patching  | RDS-EVENT-0230 |  A system update is available for your DB instance. For information about applying updates, see 'Maintaining a DB instance' in the RDS User Guide.  |  A new minor version operating system update is available for your DB instance. For information about applying updates, see [Operating system updates for RDS DB instances](USER_UpgradeDBInstance.Maintenance.md#OS_Updates).  | 
|  maintenance  | RDS-EVENT-0425 |  Amazon RDS can't perform the OS upgrade because there are no available IP addresses in the specified subnets. Choose subnets with available IP addresses and try again.  |  None  | 
|  maintenance  | RDS-EVENT-0429 |  Amazon RDS can't perform the OS upgrade because of insufficient capacity available for the *type* instance type in the *zone* Availability Zone  |  None  | 
|  maintenance  | RDS-EVENT-0501 |  Amazon RDS DB instance's server certificate requires rotation through a pending maintenance action.  |  The DB instance's server certificate requires rotation through a pending maintenance action. Amazon RDS reboots your database during this maintenance to complete the certificate rotation. To schedule this maintenance, go to the **Maintenance & backups** tab and choose **Apply now** or **Schedule for next maintenance window**. If the change is not scheduled, Amazon RDS automatically applies it in your maintenance window on the auto-apply date shown in your maintenance action.  | 
|  maintenance  | RDS-EVENT-0502 |  Amazon RDS has scheduled a server certificate rotation for DB instance during the next maintenance window. This maintenance will require a database reboot.  |  None  | 
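
When Amazon RDS delivers these events to Amazon EventBridge, each event carries its `RDS-EVENT-NNNN` identifier in the `EventID` field of the event detail, along with the message text shown in the table. The sketch below classifies incoming DB instance events for alert routing. The envelope shape (`detail.EventID`, `detail.Message`), the set of "page-worthy" IDs, and the exact percentage format inside the RDS-EVENT-0089 message are illustrative assumptions; verify them against a real event in your account.

```python
import re

# A sample of failure-category IDs from the table above; extend as needed.
FAILURE_EVENT_IDS = {
    "RDS-EVENT-0031",  # instance failed; point-in-time restore recommended
    "RDS-EVENT-0035",  # invalid parameters
    "RDS-EVENT-0079",  # Enhanced Monitoring could not be enabled
    "RDS-EVENT-0418",  # KMS encryption key inaccessible
}

# Matches the low-storage wording of RDS-EVENT-0089, e.g.
# "... is low at 7% of the provisioned storage ...". The exact
# number format is an assumption, so the pattern accepts decimals too.
LOW_STORAGE_RE = re.compile(r"low at ([\d.]+)% of the provisioned storage")

def triage(event: dict) -> str:
    """Classify an RDS event delivered via EventBridge into an action."""
    detail = event.get("detail", {})
    event_id = detail.get("EventID", "")
    message = detail.get("Message", "")

    if event_id in FAILURE_EVENT_IDS:
        return "page"                       # alert an operator immediately
    if event_id == "RDS-EVENT-0089":        # low storage warning
        m = LOW_STORAGE_RE.search(message)
        if m and float(m.group(1)) < 10:
            return "page"
        return "ticket"                     # needs action, but not urgent
    return "log"                            # record and move on

sample = {
    "detail": {
        "EventID": "RDS-EVENT-0089",
        "Message": "The free storage capacity for DB instance: mydb is low "
                   "at 7% of the provisioned storage [Provisioned Storage: "
                   "100 GB, Free Storage: 7 GB]. You may want to increase "
                   "the provisioned storage to address this issue.",
    }
}
print(triage(sample))  # → page
```

Keying logic off the stable `EventID` rather than the message text keeps the handler robust if AWS rewords a message; the regex is only a fallback for extracting the numeric detail.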

## DB parameter group events
<a name="USER_Events.Messages.parameter-group"></a>

The following table shows the event category and a list of events when a DB parameter group is the source type.


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  configuration change  | RDS-EVENT-0037 |  Updated parameter *name* to *value* with apply method *method*.   |  None  | 

## DB security group events
<a name="USER_Events.Messages.security-group"></a>

The following table shows the event category and a list of events when a DB security group is the source type.

**Note**  
DB security groups are resources for EC2-Classic. EC2-Classic was retired on August 15, 2022. If you haven't migrated from EC2-Classic to a VPC, we recommend that you migrate as soon as possible. For more information, see [Migrate from EC2-Classic to a VPC](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-migrate.html) in the *Amazon EC2 User Guide* and the blog [ EC2-Classic Networking is Retiring – Here’s How to Prepare](https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/).


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  configuration change  | RDS-EVENT-0038 |  Applied change to security group.  |  None  | 
|  failure  | RDS-EVENT-0039 |  Revoking authorization as *user*.  |  The security group owned by *user* doesn't exist. The authorization for the security group has been revoked because it is invalid.  | 

## DB snapshot events
<a name="USER_Events.Messages.snapshot"></a>

The following table shows the event category and a list of events when a DB snapshot is the source type.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.Messages.html)

## RDS Proxy events
<a name="USER_Events.Messages.rds-proxy"></a>

The following table shows the event category and a list of events when an RDS Proxy is the source type.


|  Category  | RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
| configuration change | RDS-EVENT-0204 |  RDS modified DB proxy *name*.  | None | 
| configuration change | RDS-EVENT-0207 |  RDS modified the end point of the DB proxy *name*.  | None | 
| configuration change | RDS-EVENT-0213 |  RDS detected the addition of the DB instance and automatically added it to the target group of the DB proxy *name*.  | None | 
|  configuration change  | RDS-EVENT-0214 |  RDS detected deletion of DB instance *name* and automatically removed it from target group *name* of DB proxy *name*.  | None | 
|  configuration change  | RDS-EVENT-0215 |  RDS detected deletion of DB cluster *name* and automatically removed it from target group *name* of DB proxy *name*.  | None | 
|  creation  | RDS-EVENT-0203 |  RDS created DB proxy *name*.  | None | 
|  creation  | RDS-EVENT-0206 |  RDS created endpoint *name* for DB proxy *name*.  | None | 
| deletion | RDS-EVENT-0205 |  RDS deleted DB proxy *name*.  | None | 
|  deletion  | RDS-EVENT-0208 |  RDS deleted endpoint *name* for DB proxy *name*.  | None | 
|  failure  | RDS-EVENT-0243 |  RDS failed to provision capacity for proxy *name* because there aren't enough IP addresses available in your subnets: *name*. To fix the issue, make sure that your subnets have the minimum number of unused IP addresses as recommended in the RDS Proxy documentation.  |  To determine the recommended number for your instance class, see [Planning for IP address capacity](rds-proxy-network-prereqs.md#rds-proxy-network-prereqs.plan-ip-address).  | 
|  failure | RDS-EVENT-0275 |  RDS throttled some connections to DB proxy *name*. The number of simultaneous connection requests from the client to the proxy has exceeded the limit.  | None | 
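
To route only selected proxy events (for example, the two failure events above) to a target, you can filter with an EventBridge event pattern. The sketch below only builds and serializes the pattern; the `source` value `aws.rds` is the documented source for RDS events, while matching on `detail.EventID` for proxy events is an assumption to verify against a real event in your account.

```python
import json

# Hypothetical EventBridge pattern matching only the RDS Proxy failure
# events listed above (IP exhaustion and connection throttling).
proxy_failure_pattern = {
    "source": ["aws.rds"],
    "detail": {
        "EventID": ["RDS-EVENT-0243", "RDS-EVENT-0275"],
    },
}

# Serialized form, suitable as the EventPattern argument of an
# EventBridge put_rule call (the AWS API call itself is omitted so
# this sketch runs without credentials).
print(json.dumps(proxy_failure_pattern, indent=2))
```

A pattern that enumerates specific `EventID` values is narrower, and therefore cheaper to process downstream, than subscribing to the whole `failure` category and filtering in the target.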

## Blue/green deployment events
<a name="USER_Events.Messages.BlueGreenDeployments"></a>

The following table shows the event category and a list of events when a blue/green deployment is the source type.

For more information about blue/green deployments, see [Using Amazon RDS Blue/Green Deployments for database updates](blue-green-deployments.md).


|  Category  | Amazon RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  creation  | RDS-EVENT-0244 |  Blue/green deployment tasks completed. You can make more modifications to the green environment databases or switch over the deployment.  | None | 
|  failure  | RDS-EVENT-0245 |  Creation of blue/green deployment failed because *reason*.  | None | 
|  deletion  | RDS-EVENT-0246 |  Blue/green deployment deleted.  | None | 
|  notification  | RDS-EVENT-0247 |  Switchover from *blue* to *green* started.  | None | 
|  notification  | RDS-EVENT-0248 |  Switchover completed on blue/green deployment.  | None | 
|  failure  | RDS-EVENT-0249 |  Switchover canceled on blue/green deployment.  | None | 
|  notification  | RDS-EVENT-0250  |  Switchover from primary/read replica *blue* to *green* started.  | None | 
|  notification  | RDS-EVENT-0251  |  Switchover from primary/read replica *blue* to *green* completed. Renamed *blue* to *blue-old* and *green* to *blue*.  | None | 
|  failure  | RDS-EVENT-0252  |  Switchover from primary/read replica *blue* to *green* was canceled due to *reason*.  | None | 
|  notification  | RDS-EVENT-0307  |  Sequence sync for switchover of *blue* to *green* has initiated. Switchover when using sequences may lead to extended downtime.  | None | 
|  notification  | RDS-EVENT-0308  |  Sequence sync for switchover of *blue* to *green* has completed.  | None | 
|  failure  | RDS-EVENT-0310  |  Sequence sync for switchover of *blue* to *green* was cancelled because sequences failed to sync.  | None | 
| notification | RDS-EVENT-0405 |  Your storage volumes are being initialized.  |  None  | 
| notification | RDS-EVENT-0406 |  Your storage volumes have been initialized.  |  None  | 
|  notification  | RDS-EVENT-0409  |  *message*  | None | 

## Custom engine version events
<a name="USER_Events.Messages.CEV"></a>

The following table shows the event category and a list of events when a custom engine version is the source type.


|  Category  | Amazon RDS event ID |  Message  |  Notes  | 
| --- | --- | --- | --- | 
|  creation  | RDS-EVENT-0316 |  Preparing to create custom engine version *name*. The entire creation process may take up to four hours to complete.  | None | 
|  creation  | RDS-EVENT-0317 |  Creating custom engine version *name*.  | None | 
|  creation  | RDS-EVENT-0318 |  Validating custom engine version *name*.  | None | 
|  creation  | RDS-EVENT-0319 |  Custom engine version *name* has been created successfully.  | None | 
|  creation  | RDS-EVENT-0320 |  RDS can't create custom engine version *name* because of an internal issue. We are addressing the problem and will contact you if necessary. For further assistance, contact [AWS Premium Support](https://console.aws.amazon.com/support/).  | None | 
|  failure  | RDS-EVENT-0198 |  Creation failed for custom engine version *name*. *message*  | The *message* includes details about the failure, such as missing files. | 
|  failure  | RDS-EVENT-0277 |  Failure during deletion of custom engine version *name*. *message*  | The *message* includes details about the failure. | 
|  restoring  | RDS-EVENT-0352 |  The maximum database count supported for point-in-time restore has changed.  | The *message* includes details about the event. | 

# Monitoring Amazon RDS log files
<a name="USER_LogAccess"></a>

Every RDS database engine generates logs that you can access for auditing and troubleshooting. The type of logs depends on your database engine.

You can access database logs for DB instances using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the Amazon RDS API. You can't view, watch, or download transaction logs.

**Topics**
+ [Viewing and listing database log files](USER_LogAccess.Procedural.Viewing.md)
+ [Downloading a database log file](USER_LogAccess.Procedural.Downloading.md)
+ [Watching a database log file](USER_LogAccess.Procedural.Watching.md)
+ [Publishing database logs to Amazon CloudWatch Logs](USER_LogAccess.Procedural.UploadtoCloudWatch.md)
+ [Reading log file contents using REST](DownloadCompleteDBLogFile.md)
+ [Amazon RDS for Db2 database log files](USER_LogAccess.Concepts.Db2.md)
+ [MariaDB database log files](USER_LogAccess.Concepts.MariaDB.md)
+ [Amazon RDS for Microsoft SQL Server database log files](USER_LogAccess.Concepts.SQLServer.md)
+ [MySQL database log files](USER_LogAccess.Concepts.MySQL.md)
+ [Amazon RDS for Oracle database log files](USER_LogAccess.Concepts.Oracle.md)
+ [RDS for PostgreSQL database log files](USER_LogAccess.Concepts.PostgreSQL.md)

# Viewing and listing database log files
<a name="USER_LogAccess.Procedural.Viewing"></a>

You can view database log files for your Amazon RDS DB engine by using the AWS Management Console. You can list what log files are available for download or monitoring by using the AWS CLI or Amazon RDS API. 

**Note**  
If you can't view the list of log files for an existing RDS for Oracle DB instance, reboot the instance to view the list. 

## Console
<a name="USER_LogAccess.CON"></a>

**To view a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.

1. Scroll down to the **Logs** section.

1. (Optional) Enter a search term to filter your results.

1. Choose the log that you want to view, and then choose **View**.

## AWS CLI
<a name="USER_LogAccess.CLI"></a>

To list the available database log files for a DB instance, use the AWS CLI [describe-db-log-files](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-log-files.html) command.

The following example returns a list of log files for a DB instance named `my-db-instance`.

**Example**  

```
aws rds describe-db-log-files --db-instance-identifier my-db-instance
```

## RDS API
<a name="USER_LogAccess.API"></a>

To list the available database log files for a DB instance, use the Amazon RDS API [DescribeDBLogFiles](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBLogFiles.html) action.
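If you prefer to script the listing, it can be sketched in Python with boto3. This is an illustrative sketch, not part of the RDS documentation: the helper names, instance identifier, and 24-hour cutoff are assumptions; it requires boto3 and AWS credentials to actually call the API.

```python
import time

def recent_log_files(log_files, newer_than_ms):
    """Filter DescribeDBLogFiles entries by their LastWritten epoch-millisecond stamp."""
    return [f["LogFileName"] for f in log_files if f["LastWritten"] >= newer_than_ms]

def list_recent_log_files(instance_id, hours=24):
    """Call the DescribeDBLogFiles action through boto3 (needs AWS credentials)."""
    import boto3  # third-party; imported lazily so the pure helper above stays standalone
    rds = boto3.client("rds")
    files = []
    # DescribeDBLogFiles is paginated; collect every page before filtering.
    for page in rds.get_paginator("describe_db_log_files").paginate(
            DBInstanceIdentifier=instance_id):
        files.extend(page["DescribeDBLogFiles"])
    cutoff_ms = int((time.time() - hours * 3600) * 1000)
    return recent_log_files(files, cutoff_ms)
```

Filtering on `LastWritten` is useful because long-lived instances accumulate many rotated log files.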

# Downloading a database log file
<a name="USER_LogAccess.Procedural.Downloading"></a>

You can use the AWS Management Console, AWS CLI, or API to download a database log file. 

## Console
<a name="USER_LogAccess.Procedural.Downloading.CON"></a>

**To download a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.

1. Scroll down to the **Logs** section. 

1. In the **Logs** section, choose the button next to the log that you want to download, and then choose **Download**.

1. Open the context (right-click) menu for the link provided, and then choose **Save Link As**. Enter the location where you want the log file to be saved, and then choose **Save**.  
![\[viewing log file\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/log_download2.png)

## AWS CLI
<a name="USER_LogAccess.Procedural.Downloading.CLI"></a>

To download a database log file, use the AWS CLI command [download-db-log-file-portion](https://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html). By default, this command downloads only the latest portion of a log file. However, you can download an entire file by specifying the parameter `--starting-token 0`.

The following example shows how to download the entire contents of a log file called *log/ERROR.4* and store it in a local file called *errorlog.txt*.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds download-db-log-file-portion \
    --db-instance-identifier myexampledb \
    --starting-token 0 --output text \
    --log-file-name log/ERROR.4 > errorlog.txt
```
For Windows:  

```
aws rds download-db-log-file-portion ^
    --db-instance-identifier myexampledb ^
    --starting-token 0 --output text ^
    --log-file-name log/ERROR.4 > errorlog.txt
```

## RDS API
<a name="USER_LogAccess.Procedural.Downloading.API"></a>

To download a database log file, use the Amazon RDS API [DownloadDBLogFilePortion](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DownloadDBLogFilePortion.html) action.
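The action returns the file in portions, so a complete download means following the pagination marker until `AdditionalDataPending` is false. The Python helper below is an illustrative sketch of that loop; it works with any object exposing a boto3-style `download_db_log_file_portion` method (a real boto3 RDS client, or a stub for testing).

```python
def download_full_log(client, instance_id, log_file_name):
    """Concatenate every portion of a DB log file by following the pagination Marker."""
    parts = []
    marker = "0"  # Marker "0" requests the file from the beginning
    while True:
        resp = client.download_db_log_file_portion(
            DBInstanceIdentifier=instance_id,
            LogFileName=log_file_name,
            Marker=marker,
        )
        parts.append(resp.get("LogFileData") or "")
        if not resp.get("AdditionalDataPending"):
            return "".join(parts)
        marker = resp["Marker"]  # continue from where the last portion ended
```

With a real client this would be called as `download_full_log(boto3.client("rds"), "myexampledb", "log/ERROR.4")`, which requires AWS credentials.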

# Watching a database log file
<a name="USER_LogAccess.Procedural.Watching"></a>

Watching a database log file is equivalent to tailing the file on a UNIX or Linux system. You can watch a log file by using the AWS Management Console. RDS refreshes the tail of the log every 5 seconds.

**To watch a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.  
![\[Choose the Logs & events tab\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/Monitoring_logsEvents.png)

1. In the **Logs** section, choose a log file, and then choose **Watch**.  
![\[Choose a log\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/Monitoring_LogsEvents_watch.png)

   RDS shows the tail of the log, as in the following MySQL example.  
![\[Tail of a log file\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/Monitoring_LogsEvents_watch_content.png)

# Publishing database logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.Procedural.UploadtoCloudWatch"></a>

In an on-premises database, the database logs reside on the file system. Amazon RDS doesn't provide host access to the database logs on the file system of your DB instance. For this reason, Amazon RDS lets you export database logs to [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html). With CloudWatch Logs, you can perform real-time analysis of the log data. You can also store the data in highly durable storage and manage the data with the CloudWatch Logs Agent. 

**Topics**
+ [Overview of RDS integration with CloudWatch Logs](#rds-integration-cw-logs)
+ [Deciding which logs to publish to CloudWatch Logs](#engine-specific-logs)
+ [Specifying the logs to publish to CloudWatch Logs](#integrating_cloudwatchlogs.configure)
+ [Searching and filtering your logs in CloudWatch Logs](#accessing-logs-in-cloudwatch)

## Overview of RDS integration with CloudWatch Logs
<a name="rds-integration-cw-logs"></a>

In CloudWatch Logs, a *log stream* is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream. A *log group* is a group of log streams that share the same retention, monitoring, and access control settings.

Amazon RDS continuously streams your DB instance log records to a log group. For example, you have a log group `/aws/rds/instance/instance_name/log_type` for each type of log that you publish. This log group is in the same AWS Region as the database instance that generates the log.

AWS retains log data published to CloudWatch Logs for an indefinite time period unless you specify a retention period. For more information, see [Change log data retention in CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention). 
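As a sketch, the log group naming convention and a retention cap might be scripted as follows. The helper names, instance name, and 30-day value are illustrative assumptions; `put_retention_policy` is the CloudWatch Logs API operation for setting a log group's retention, and calling it requires boto3 and AWS credentials.

```python
def rds_log_group(instance_name, log_type):
    """Build the log group name RDS publishes to: /aws/rds/instance/<name>/<log_type>."""
    return f"/aws/rds/instance/{instance_name}/{log_type}"

def set_log_retention(instance_name, log_type, days=30):
    """Cap retention for a published RDS log group (needs AWS credentials to run)."""
    import boto3  # third-party; imported lazily so the pure helper above stays standalone
    boto3.client("logs").put_retention_policy(
        logGroupName=rds_log_group(instance_name, log_type),
        retentionInDays=days,  # without a policy, CloudWatch Logs keeps the data indefinitely
    )
```

Setting an explicit retention period is a common cost control, since published log data otherwise accumulates without bound.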

## Deciding which logs to publish to CloudWatch Logs
<a name="engine-specific-logs"></a>

Each RDS database engine supports its own set of logs. To learn about the options for your database engine, review the following topics:
+ [Publishing Db2 logs to Amazon CloudWatch Logs](USER_LogAccess.Concepts.Db2.md#USER_LogAccess.Db2.PublishtoCloudWatchLogs)
+ [Publishing MariaDB logs to Amazon CloudWatch Logs](USER_LogAccess.MariaDB.PublishtoCloudWatchLogs.md)
+ [Publishing MySQL logs to Amazon CloudWatch Logs](USER_LogAccess.MySQLDB.PublishtoCloudWatchLogs.md)
+ [Publishing Oracle logs to Amazon CloudWatch Logs](USER_LogAccess.Concepts.Oracle.md#USER_LogAccess.Oracle.PublishtoCloudWatchLogs)
+ [Publishing PostgreSQL logs to Amazon CloudWatch Logs](USER_LogAccess.Concepts.PostgreSQL.md#USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs)
+ [Publishing SQL Server logs to Amazon CloudWatch Logs](USER_LogAccess.Concepts.SQLServer.md#USER_LogAccess.SQLServer.PublishtoCloudWatchLogs)

## Specifying the logs to publish to CloudWatch Logs
<a name="integrating_cloudwatchlogs.configure"></a>

You specify which logs to publish in the console. Make sure that you have a service-linked role in AWS Identity and Access Management (IAM). For more information about service-linked roles, see [Using service-linked roles for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md).

**To specify the logs to publish**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Do either of the following:
   + Choose **Create database**.
   + Choose a database from the list, and then choose **Modify**.

1. In **Logs exports**, choose which logs to publish.

   The following example specifies the audit log, error logs, general log, and slow query log for an RDS for MySQL DB instance.  
![\[Choose the logs to publish to CloudWatch Logs\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/AddCWLogs.png)

## Searching and filtering your logs in CloudWatch Logs
<a name="accessing-logs-in-cloudwatch"></a>

You can search for log entries that meet specified criteria by using the CloudWatch Logs console. You can access the logs either through the RDS console, which leads you to the CloudWatch Logs console, or from the CloudWatch Logs console directly.

**To search your RDS logs using the RDS console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose a DB instance.

1. Choose **Configuration**.

1. Under **Published logs**, choose the database log that you want to view.

**To search your RDS logs using the CloudWatch Logs console**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Log groups**.

1. In the filter box, enter **/aws/rds**.

1. For **Log Groups**, choose the name of the log group containing the log stream to search.

1. For **Log Streams**, choose the name of the log stream to search.

1. Under **Log events**, enter the filter syntax to use.

For more information, see [Searching and filtering log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html) in the *Amazon CloudWatch Logs User Guide*. For a blog tutorial explaining how to monitor RDS logs, see [Build proactive database monitoring for Amazon RDS with Amazon CloudWatch Logs, AWS Lambda, and Amazon SNS](https://aws.amazon.com/blogs/database/build-proactive-database-monitoring-for-amazon-rds-with-amazon-cloudwatch-logs-aws-lambda-and-amazon-sns/).

# Reading log file contents using REST
<a name="DownloadCompleteDBLogFile"></a>

Amazon RDS provides a REST endpoint that allows access to DB instance log files. This is useful if you need to write an application to stream Amazon RDS log file contents.

The syntax is:

```
GET /v13/downloadCompleteLogFile/DBInstanceIdentifier/LogFileName HTTP/1.1
Content-type: application/json
host: rds.region.amazonaws.com
```

The following parameters are required:
+ `DBInstanceIdentifier`—the name of the DB instance that contains the log file you want to download.
+ `LogFileName`—the name of the log file to be downloaded.

The response contains the contents of the requested log file, as a stream.

The following example downloads the log file named *log/ERROR.6* for the DB instance named *sample-sql* in the *us-west-2* region.

```
GET /v13/downloadCompleteLogFile/sample-sql/log/ERROR.6 HTTP/1.1
host: rds.us-west-2.amazonaws.com
X-Amz-Security-Token: AQoDYXdzEIH//////////wEa0AIXLhngC5zp9CyB1R6abwKrXHVR5efnAVN3XvR7IwqKYalFSn6UyJuEFTft9nObglx4QJ+GXV9cpACkETq=
X-Amz-Date: 20140903T233749Z
X-Amz-Algorithm: AWS4-HMAC-SHA256
X-Amz-Credential: AKIADQKE4SARGYLE/20140903/us-west-2/rds/aws4_request
X-Amz-SignedHeaders: host
X-Amz-Content-SHA256: e3b0c44298fc1c229afbf4c8996fb92427ae41e4649b934de495991b7852b855
X-Amz-Expires: 86400
X-Amz-Signature: 353a4f14b3f250142d9afc34f9f9948154d46ce7d4ec091d0cdabbcf8b40c558
```

If you specify a nonexistent DB instance, the response consists of the following error:
+ `DBInstanceNotFound`—`DBInstanceIdentifier` does not refer to an existing DB instance. (HTTP status code: 404)
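A client assembling this request first builds the path from the two required parameters. The sketch below is illustrative (the helper names are assumptions, not part of the endpoint); note that the log file name may itself contain slashes, as in `log/ERROR.6`, and those stay unencoded in the path. The request must still be signed with Signature Version 4 for the `rds` service, as in the signed example earlier in this section.

```python
def complete_log_file_path(instance_id, log_file_name):
    """Build the REST path for downloadCompleteLogFile.

    The log file name can contain slashes (for example "log/ERROR.6"),
    which remain literal path segments rather than being percent-encoded.
    """
    return f"/v13/downloadCompleteLogFile/{instance_id}/{log_file_name}"
```

In practice you would hand this path to a SigV4 signer (for example, botocore's signing helpers) together with the Region-specific RDS host.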

# Amazon RDS for Db2 database log files
<a name="USER_LogAccess.Concepts.Db2"></a>

You can access RDS for Db2 diagnostic logs and notify logs by using the Amazon RDS console, AWS CLI, or RDS API. For more information about viewing, downloading, and watching file-based database logs, see [Monitoring Amazon RDS log files](USER_LogAccess.md).

**Topics**
+ [Retention schedule](#USER_LogAccess.Concepts.Db2.Retention)
+ [Publishing Db2 logs to Amazon CloudWatch Logs](#USER_LogAccess.Db2.PublishtoCloudWatchLogs)

## Retention schedule
<a name="USER_LogAccess.Concepts.Db2.Retention"></a>

Log files are rotated each day and whenever your DB instance is restarted. The following is the retention schedule for RDS for Db2 logs on Amazon RDS. 


****  

| Log type | Retention schedule | 
| --- | --- | 
|  Diagnostic logs  |  Db2 deletes logs outside of the retention settings in the instance-level configuration. Amazon RDS sets the `diagsize` parameter to 1000.  | 
|  Notify logs  |  Db2 deletes logs outside of the retention settings in the instance-level configuration. Amazon RDS sets the `diagsize` parameter to 1000.  | 

## Publishing Db2 logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.Db2.PublishtoCloudWatchLogs"></a>

With RDS for Db2, you can publish diagnostic and notify log events directly to Amazon CloudWatch Logs. Analyze the log data with CloudWatch Logs, then use CloudWatch to create alarms and view metrics.

With CloudWatch Logs, you can do the following:
+ Store logs in highly durable storage space with a retention period that you define.
+ Search and filter log data.
+ Share log data between accounts.
+ Export logs to Amazon S3.
+ Stream data to Amazon OpenSearch Service.
+ Process log data in real time with Amazon Kinesis Data Streams. For more information, see [Working with Amazon CloudWatch Logs](https://docs.aws.amazon.com/kinesisanalytics/latest/dev/cloudwatch-logs.html) in the *Amazon Managed Service for Apache Flink for SQL Applications Developer Guide*.

 Amazon RDS publishes each RDS for Db2 database log as a separate database stream in the log group. For example, if you publish the diagnostic logs and notify logs, diagnostic data is stored in a diagnostic log stream in the `/aws/rds/instance/my_instance/diagnostic` log group, and notify log data is stored in the `/aws/rds/instance/my_instance/notify` log group.

**Note**  
Publishing RDS for Db2 logs to CloudWatch Logs isn't enabled by default. Publishing self-tuning memory manager (STMM) and optimizer statistics logs isn't supported. Publishing RDS for Db2 logs to CloudWatch Logs is supported in all Regions, except for Asia Pacific (Hong Kong).

### Console
<a name="USER_LogAccess.Db2.PublishtoCloudWatchLogs.console"></a>

**To publish RDS for Db2 logs to CloudWatch Logs from the AWS Management Console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance that you want to modify.

1. Choose **Modify**.

1. In the **Log exports** section, choose the logs that you want to start publishing to CloudWatch Logs.

   You can choose **diag.log**, **notify.log**, or both.

1. Choose **Continue**, and then choose **Modify DB Instance** on the summary page.

### AWS CLI
<a name="USER_LogAccess.Db2.PublishtoCloudWatchLogs.CLI"></a>

To publish RDS for Db2 logs, you can use the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command with the following parameters:
+ `--db-instance-identifier`
+ `--cloudwatch-logs-export-configuration`

**Note**  
A change to the `--cloudwatch-logs-export-configuration` option is always applied to the DB instance immediately. Therefore, the `--apply-immediately` and `--no-apply-immediately` options have no effect.

You can also publish RDS for Db2 logs using the following commands: 
+ [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)
+ [restore-db-instance-from-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html)
+ [restore-db-instance-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html)

**Example**  
The following example creates an RDS for Db2 DB instance with CloudWatch Logs publishing enabled. The `--enable-cloudwatch-logs-exports` value is a JSON array of strings that can include `diag.log`, `notify.log`, or both.  
For Linux, macOS, or Unix:  

```
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-cloudwatch-logs-exports '["diag.log","notify.log"]' \
    --db-instance-class db.m4.large \
    --engine db2-se
```
For Windows:  

```
aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --enable-cloudwatch-logs-exports "[\"diag.log\",\"notify.log\"]" ^
    --db-instance-class db.m4.large ^
    --engine db2-se
```
When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).

**Example**  
The following example modifies an existing RDS for Db2 DB instance to publish log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `EnableLogTypes`, and its value is an array of strings that can include `diag.log`, `notify.log`, or both.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["diag.log","notify.log"]}'
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"EnableLogTypes\":[\"diag.log\",\"notify.log\"]}"
```
When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).

**Example**  
The following example modifies an existing RDS for Db2 DB instance to disable publishing diagnostic log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `DisableLogTypes`, and its value is an array of strings that can include `diag.log`, `notify.log`, or both.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["diag.log"]}'
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"DisableLogTypes\":[\"diag.log\"]}"
```
When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).
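Rather than hand-writing the JSON and its Windows-escaped form, you can generate both. The Python sketch below is illustrative (the function names are assumptions); it builds the `--cloudwatch-logs-export-configuration` value and applies the backslash escaping that the Windows command prompt requires.

```python
import json

def export_config(enable=None, disable=None):
    """Build the JSON value for --cloudwatch-logs-export-configuration."""
    cfg = {}
    if enable:
        cfg["EnableLogTypes"] = list(enable)
    if disable:
        cfg["DisableLogTypes"] = list(disable)
    return json.dumps(cfg, separators=(",", ":"))

def escape_for_cmd(json_text):
    """Prefix each double quote with a backslash for the Windows command prompt."""
    return json_text.replace('"', '\\"')
```

For example, `escape_for_cmd(export_config(enable=["diag.log"]))` produces the quoted-and-escaped string you would paste after `^` line continuations on Windows.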

# MariaDB database log files
<a name="USER_LogAccess.Concepts.MariaDB"></a>

You can monitor the MariaDB error log, the slow query log, the general log, and the IAM database authentication error log. The MariaDB error log is generated by default; you can generate the slow query and general logs by setting parameters in your DB parameter group. Amazon RDS rotates all of the MariaDB log files; the rotation intervals for each type are described following.

You can monitor the MariaDB logs directly through the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or AWS SDKs. You can also access MariaDB logs by directing the logs to a database table in the main database and querying that table. You can use the mysqlbinlog utility to download a binary log. 

For more information about viewing, downloading, and watching file-based database logs, see [Monitoring Amazon RDS log files](USER_LogAccess.md).

**Topics**
+ [Accessing MariaDB error logs](USER_LogAccess.MariaDB.Errorlog.md)
+ [Accessing the MariaDB slow query and general logs](USER_LogAccess.MariaDB.Generallog.md)
+ [Publishing MariaDB logs to Amazon CloudWatch Logs](USER_LogAccess.MariaDB.PublishtoCloudWatchLogs.md)
+ [Log rotation and retention for MariaDB](USER_LogAccess.MariaDB.LogFileSize.md)
+ [Managing table-based MariaDB logs](Appendix.MariaDB.CommonDBATasks.Logs.md)
+ [Configuring MariaDB binary logging](USER_LogAccess.MariaDB.BinaryFormat.md)
+ [Accessing MariaDB binary logs](USER_LogAccess.MariaDB.Binarylog.md)
+ [Enabling MariaDB binary log annotation](USER_LogAccess.MariaDB.BinarylogAnnotation.md)

# Accessing MariaDB error logs
<a name="USER_LogAccess.MariaDB.Errorlog"></a>

The MariaDB error log is written to the `<host-name>.err` file. You can view this file by using the Amazon RDS console. You can also retrieve the log using the Amazon RDS API, Amazon RDS CLI, or AWS SDKs. The `<host-name>.err` file is flushed every 5 minutes, and its contents are appended to `mysql-error-running.log`. The `mysql-error-running.log` file is then rotated every hour, and the hourly files generated during the last 24 hours are retained. Each log file has the hour it was generated (in UTC) appended to its name. The log files also have a timestamp that helps you determine when the log entries were written.

MariaDB writes to the error log only on startup, shutdown, and when it encounters errors. A DB instance can go hours or days without new entries being written to the error log. If you see no recent entries, it's because the server did not encounter an error that resulted in a log entry.

# Accessing the MariaDB slow query and general logs
<a name="USER_LogAccess.MariaDB.Generallog"></a>

You can write the MariaDB slow query log and general log to a file or database table by setting parameters in your DB parameter group. For information about creating and modifying a DB parameter group, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md). You must set these parameters before you can view the slow query log or general log in the Amazon RDS console or by using the Amazon RDS API, AWS CLI, or AWS SDKs.

You can control MariaDB logging by using the parameters in this list:
+ `slow_query_log` or `log_slow_query`: To create the slow query log, set to 1. The default is 0.
+ `general_log`: To create the general log, set to 1. The default is 0.
+ `long_query_time` or `log_slow_query_time`: To prevent fast-running queries from being logged in the slow query log, specify a value for the shortest query run time to be logged, in seconds. The default is 10 seconds; the minimum is 0. If `log_output = FILE`, you can specify a floating point value with microsecond resolution. If `log_output = TABLE`, you must specify an integer value with second resolution. Only queries whose run time exceeds the `long_query_time` or `log_slow_query_time` value are logged. For example, setting `long_query_time` or `log_slow_query_time` to 0.1 prevents any query that runs for less than 100 milliseconds from being logged.
+ `log_queries_not_using_indexes`: To log all queries that do not use an index to the slow query log, set this parameter to 1. The default is 0. Queries that do not use an index are logged even if their run time is less than the value of the `long_query_time` parameter.
+ `log_output`: You can specify one of the following options for the `log_output` parameter:
  + **TABLE** (default) – Write general queries to the `mysql.general_log` table, and slow queries to the `mysql.slow_log` table. 
  + **FILE** – Write both general and slow query logs to the file system. Log files are rotated hourly. 
  + **NONE** – Disable logging.
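The interaction between `long_query_time` and `log_queries_not_using_indexes` can be sketched as a small decision function. This is an illustration of the documented rules, not RDS or MariaDB source code, and the parameter names below are chosen to mirror the parameters above.

```python
def logged_in_slow_query_log(run_time_s, long_query_time_s,
                             used_index=True,
                             log_queries_not_using_indexes=False):
    """Return True if a query would land in the slow query log under the rules above."""
    # Queries slower than long_query_time (or log_slow_query_time) are always logged.
    if run_time_s > long_query_time_s:
        return True
    # With log_queries_not_using_indexes = 1, index-less queries are logged
    # even when they finish faster than the threshold.
    return log_queries_not_using_indexes and not used_index
```

For example, with `long_query_time = 0.1`, a 50 ms query is skipped unless it used no index and `log_queries_not_using_indexes` is enabled.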

When logging is enabled, Amazon RDS rotates table logs or deletes log files at regular intervals. This measure is a precaution to reduce the possibility of a large log file either blocking database use or affecting performance. `FILE` and `TABLE` logging approach rotation and deletion as follows:
+ When `FILE` logging is enabled, log files are examined every hour and log files older than 24 hours are deleted. In some cases, the remaining combined log file size after the deletion might exceed the threshold of 2 percent of a DB instance's allocated space. In these cases, the largest log files are deleted until the log file size no longer exceeds the threshold. 
+ When `TABLE` logging is enabled, in some cases log tables are rotated every 24 hours. This rotation occurs if the space used by the table logs is more than 20 percent of the allocated storage space. It also occurs if the size of all logs combined is greater than 10 GB. If the amount of space used for a DB instance is greater than 90 percent of the DB instance's allocated storage space, the thresholds for log rotation are reduced. Log tables are then rotated if the space used by the table logs is more than 10 percent of the allocated storage space. They're also rotated if the size of all logs combined is greater than 5 GB.

  When log tables are rotated, the current log table is copied to a backup log table and the entries in the current log table are removed. If the backup log table already exists, then it is deleted before the current log table is copied to the backup. You can query the backup log table if needed. The backup log table for the `mysql.general_log` table is named `mysql.general_log_backup`. The backup log table for the `mysql.slow_log` table is named `mysql.slow_log_backup`.

  You can rotate the `mysql.general_log` table by calling the `mysql.rds_rotate_general_log` procedure. You can rotate the `mysql.slow_log` table by calling the `mysql.rds_rotate_slow_log` procedure.

  Table logs are rotated during a database version upgrade.

Amazon RDS records both `TABLE` and `FILE` log rotation in an Amazon RDS event and sends you a notification.
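The `TABLE` rotation thresholds above can be summarized in a short sketch. This is an illustration of the documented behavior, not RDS source code: the function name and the exact byte arithmetic are assumptions, and RDS evaluates these conditions internally on its own schedule.

```python
GIB = 1024 ** 3

def table_logs_rotate(log_bytes, combined_log_bytes, used_bytes, allocated_bytes):
    """Return True if TABLE logs would rotate under the thresholds described above."""
    # When the instance is more than 90 percent full, the thresholds tighten
    # from 20 % / 10 GB down to 10 % / 5 GB.
    crowded = used_bytes > 0.9 * allocated_bytes
    pct_limit = 0.10 if crowded else 0.20
    size_limit = (5 if crowded else 10) * GIB
    return log_bytes > pct_limit * allocated_bytes or combined_log_bytes > size_limit
```

For instance, on a 100 GiB instance that is half full, table logs occupying 25 GiB exceed the 20 percent threshold and would rotate.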

To work with the logs from the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or AWS SDKs, set the `log_output` parameter to FILE. Like the MariaDB error log, these log files are rotated hourly. The log files that were generated during the previous 24 hours are retained.

For more information about the slow query and general logs, go to the following topics in the MariaDB documentation:
+ [Slow query log](http://mariadb.com/kb/en/mariadb/slow-query-log/)
+ [General query log](http://mariadb.com/kb/en/mariadb/general-query-log/)

# Publishing MariaDB logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.MariaDB.PublishtoCloudWatchLogs"></a>

You can configure your MariaDB DB instance to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage. 

Amazon RDS publishes each MariaDB database log as a separate database stream in the log group. For example, suppose that you configure the export function to include the slow query log. Then slow query data is stored in a slow query log stream in the `/aws/rds/instance/my_instance/slowquery` log group.
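
Based on this naming pattern, you can derive the log group name for a given instance and log type. The helper below is a hypothetical illustration, not an Amazon RDS API:

```python
# Hypothetical helper that builds a CloudWatch Logs log group name from an
# RDS instance identifier and a log type, following the documented
# /aws/rds/instance/<identifier>/<log type> pattern.
def rds_log_group(instance_id: str, log_type: str) -> str:
    return f"/aws/rds/instance/{instance_id}/{log_type}"

print(rds_log_group("my_instance", "slowquery"))
# → /aws/rds/instance/my_instance/slowquery
```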

The error log is enabled by default. The following table summarizes the requirements for the other MariaDB logs.


| Log | Requirement | 
| --- | --- | 
|  Audit log  |  The DB instance must use a custom option group with the `MARIADB_AUDIT_PLUGIN` option.  | 
|  General log  |  The DB instance must use a custom parameter group with the parameter setting `general_log = 1` to enable the general log.  | 
|  Slow query log  |  The DB instance must use a custom parameter group with the parameter setting `slow_query_log = 1` or `log_slow_query = 1` to enable the slow query log.  | 
|  IAM database authentication error log  |  You must enable the log type `iam-db-auth-error` for a DB instance by creating or modifying a DB instance.  | 
|  Log output  |  The DB instance must use a custom parameter group with the parameter setting `log_output = FILE` to write logs to the file system and publish them to CloudWatch Logs.  | 

## Console
<a name="USER_LogAccess.MariaDB.PublishtoCloudWatchLogs.CON"></a>

**To publish MariaDB logs to CloudWatch Logs from the console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance that you want to modify.

1. Choose **Modify**.

1. In the **Log exports** section, choose the logs that you want to start publishing to CloudWatch Logs.

1. Choose **Continue**, and then choose **Modify DB Instance** on the summary page.

## AWS CLI
<a name="USER_LogAccess.MariaDB.PublishtoCloudWatchLogs.CLI"></a>

You can publish MariaDB logs with the AWS CLI. You can call the [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command with the following parameters: 
+ `--db-instance-identifier`
+ `--cloudwatch-logs-export-configuration`

**Note**  
A change to the `--cloudwatch-logs-export-configuration` option is always applied to the DB instance immediately. Therefore, the `--apply-immediately` and `--no-apply-immediately` options have no effect.

You can also publish MariaDB logs by calling the following AWS CLI commands: 
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-s3.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-s3.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html)

Run one of these AWS CLI commands with the following options: 
+ `--db-instance-identifier`
+ `--enable-cloudwatch-logs-exports`
+ `--db-instance-class`
+ `--engine`

Other options might be required depending on the AWS CLI command you run.

**Example**  
The following example modifies an existing MariaDB DB instance to publish log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `EnableLogTypes`, and its value is an array of strings with any combination of `audit`, `error`, `general`, and `slowquery`.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit","error","general","slowquery"]}'
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"EnableLogTypes\":[\"audit\",\"error\",\"general\",\"slowquery\"]}"
```
When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).

**Example**  
The following command creates a MariaDB DB instance and publishes log files to CloudWatch Logs. The `--enable-cloudwatch-logs-exports` value is a JSON array of strings. The strings can be any combination of `audit`, `error`, `general`, and `slowquery`.  
For Linux, macOS, or Unix:  

```
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' \
    --db-instance-class db.m4.large \
    --engine mariadb
```
For Windows:  

```
1. aws rds create-db-instance ^
2.     --db-instance-identifier mydbinstance ^
3.     --enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' ^
4.     --db-instance-class db.m4.large ^
5.     --engine mariadb
```

## RDS API
<a name="USER_LogAccess.MariaDB.PublishtoCloudWatchLogs.API"></a>

You can publish MariaDB logs with the RDS API. You can call the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) operation with the following parameters: 
+ `DBInstanceIdentifier`
+ `CloudwatchLogsExportConfiguration`

**Note**  
A change to the `CloudwatchLogsExportConfiguration` parameter is always applied to the DB instance immediately. Therefore, the `ApplyImmediately` parameter has no effect.

You can also publish MariaDB logs by calling the following RDS API operations: 
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html)
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html)
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromS3.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromS3.html)
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html)

Run one of these RDS API operations with the following parameters: 
+ `DBInstanceIdentifier`
+ `EnableCloudwatchLogsExports`
+ `Engine`
+ `DBInstanceClass`

Other parameters might be required depending on the RDS API operation you run.

# Log rotation and retention for MariaDB
<a name="USER_LogAccess.MariaDB.LogFileSize"></a>

When logging is enabled, Amazon RDS rotates table logs or deletes log files at regular intervals. This measure is a precaution to reduce the possibility of a large log file either blocking database use or affecting performance.

The MariaDB slow query log, error log, and the general log file sizes are constrained to no more than 2 percent of the allocated storage space for a DB instance. To maintain this threshold, logs are automatically rotated every hour and log files older than 24 hours are removed. If the combined log file size exceeds the threshold after removing old log files, then the largest log files are deleted until the log file size no longer exceeds the threshold.
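
The pruning order for these `FILE` logs can be sketched as follows. The helper, file names, and sizes are hypothetical; the actual mechanism is internal to Amazon RDS:

```python
# Sketch of the documented pruning order for FILE logs: remove files older
# than 24 hours first, then delete the largest remaining files until the
# combined size fits within the threshold (2 percent of allocated storage).
def prune_log_files(files, allocated_bytes):
    """files: list of (name, size_bytes, age_hours); returns surviving names."""
    threshold = allocated_bytes * 0.02
    kept = [f for f in files if f[2] < 24]        # drop files older than 24 hours
    kept.sort(key=lambda f: f[1])                 # smallest first
    while kept and sum(f[1] for f in kept) > threshold:
        kept.pop()                                # delete the largest remaining file
    return [f[0] for f in kept]

files = [("a.log", 900, 30), ("b.log", 800, 2), ("c.log", 300, 5)]
print(prune_log_files(files, 50_000))   # 2 percent of 50,000 = 1,000 bytes
# → ['c.log']
```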

Amazon RDS rotates IAM database authentication error log files larger than 10 MB. Amazon RDS removes IAM database authentication error log files that are older than five days or larger than 100 MB.

# Managing table-based MariaDB logs
<a name="Appendix.MariaDB.CommonDBATasks.Logs"></a>

You can direct the general and slow query logs to tables on the DB instance. To do so, create a DB parameter group and set the `log_output` server parameter to `TABLE`. General queries are then logged to the `mysql.general_log` table, and slow queries are logged to the `mysql.slow_log` table. You can query the tables to access the log information. Enabling this logging increases the amount of data written to the database, which can degrade performance.

Both the general log and the slow query logs are disabled by default. To enable logging to tables, you must also set the following server parameters to `1`:
+ `general_log`
+ `slow_query_log` or `log_slow_query`

Log tables keep growing until the respective logging activities are turned off by resetting the appropriate parameter to `0`. A large amount of data often accumulates over time, which can use up a considerable percentage of your allocated storage space. Amazon RDS doesn't allow you to truncate the log tables, but you can move their contents. Rotating a table saves its contents to a backup table and then creates a new empty log table. You can manually rotate the log tables with the following command line procedures, where the command prompt is indicated by `PROMPT>`: 

```
PROMPT> CALL mysql.rds_rotate_slow_log;
PROMPT> CALL mysql.rds_rotate_general_log;
```

 To completely remove the old data and reclaim the disk space, call the appropriate procedure twice in succession. 

# Configuring MariaDB binary logging
<a name="USER_LogAccess.MariaDB.BinaryFormat"></a>

The *binary log* is a set of log files that contain information about data modifications made to a MariaDB server instance. The binary log contains information such as the following:
+ Events that describe database changes such as table creation or row modifications
+ Information about the duration of each statement that updated data
+ Events for statements that could have updated data but didn't

The binary log records statements that are sent during replication. It is also required for some recovery operations. For more information, see [Binary Log](https://mariadb.com/kb/en/binary-log/) in the MariaDB documentation.

The automated backups feature determines whether binary logging is turned on or off for MariaDB. You have the following options:

Turn binary logging on  
Set the backup retention period to a positive nonzero value.

Turn binary logging off  
Set the backup retention period to zero.

For more information, see [Enabling automated backups](USER_WorkingWithAutomatedBackups.Enabling.md).

MariaDB on Amazon RDS supports the *row-based*, *statement-based*, and *mixed* binary logging formats. The default binary logging format is *mixed*. For details on the different MariaDB binary log formats, see [Binary Log Formats](http://mariadb.com/kb/en/mariadb/binary-log-formats/) in the MariaDB documentation.

If you plan to use replication, the binary logging format is important. This is because it determines the record of data changes that is recorded in the source and sent to the replication targets. For information about the advantages and disadvantages of different binary logging formats for replication, see [Advantages and Disadvantages of Statement-Based and Row-Based Replication](https://dev.mysql.com/doc/refman/5.7/en/replication-sbr-rbr.html) in the MySQL documentation.

**Important**  
Setting the binary logging format to row-based can result in very large binary log files. Large binary log files reduce the amount of storage available for a DB instance. They also can increase the amount of time to perform a restore operation of a DB instance.  
Statement-based replication can cause inconsistencies between the source DB instance and a read replica. For more information, see [Unsafe Statements for Statement-based Replication](https://mariadb.com/kb/en/library/unsafe-statements-for-statement-based-replication/) in the MariaDB documentation.  
Enabling binary logging increases the number of write disk I/O operations to the DB instance. You can monitor IOPS usage with the `WriteIOPS` CloudWatch metric.

**To set the MariaDB binary logging format**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the parameter group that is used by the DB instance that you want to modify.

   You can't modify a default parameter group. If the DB instance is using a default parameter group, create a new parameter group and associate it with the DB instance.

   For more information on DB parameter groups, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md).

1. For **Parameter group actions**, choose **Edit**.

1. Set the `binlog_format` parameter to the binary logging format of your choice (**ROW**, **STATEMENT**, or **MIXED**).

   You can turn off binary logging by setting the backup retention period of a DB instance to zero, but this disables daily automated backups. Disabling automated backups disables the `log_bin` session variable, which turns off binary logging on the RDS for MariaDB DB instance and resets the `binlog_format` session variable to its default value of `ROW` in the database. We recommend that you don't disable backups. For more information about the **Backup retention period** setting, see [Settings for DB instances](USER_ModifyInstance.Settings.md).

1. Choose **Save changes** to save the updates to the DB parameter group.

Because the `binlog_format` parameter is dynamic in RDS for MariaDB, you don't need to reboot the DB instance for the changes to apply. 

**Important**  
Changing a DB parameter group affects all DB instances that use that parameter group. If you want to specify different binary logging formats for different MariaDB DB instances in an AWS Region, the DB instances must use different DB parameter groups that specify the different logging formats. Assign the appropriate DB parameter group to each DB instance.

# Accessing MariaDB binary logs
<a name="USER_LogAccess.MariaDB.Binarylog"></a>

You can use the mysqlbinlog utility to download binary logs in text format from MariaDB DB instances. The binary log is downloaded to your local computer. For more information about using the mysqlbinlog utility, go to [Using mysqlbinlog](http://mariadb.com/kb/en/mariadb/using-mysqlbinlog/) in the MariaDB documentation.

 To run the mysqlbinlog utility against an Amazon RDS instance, use the following options: 
+  Specify the `--read-from-remote-server` option. 
+  `--host`: Specify the DNS name from the endpoint of the instance. 
+  `--port`: Specify the port used by the instance. 
+  `--user`: Specify a MariaDB user that has been granted the `REPLICATION SLAVE` privilege. 
+  `--password`: Specify the password for the user, or omit a password value so the utility prompts you for a password. 
+  `--result-file`: Specify the local file that receives the output. 
+ Specify the names of one or more binary log files. To get a list of the available logs, use the `SHOW BINARY LOGS` SQL statement. 

For more information about mysqlbinlog options, go to [mysqlbinlog options](http://mariadb.com/kb/en/mariadb/mysqlbinlog-options/) in the MariaDB documentation. 

 The following is an example: 

For Linux, macOS, or Unix:

```
mysqlbinlog \
    --read-from-remote-server \
    --host=mariadbinstance1.1234abcd.region.rds.amazonaws.com \
    --port=3306  \
    --user ReplUser \
    --password <password> \
    --result-file=/tmp/binlog.txt
```

For Windows:

```
mysqlbinlog ^
    --read-from-remote-server ^
    --host=mariadbinstance1.1234abcd.region.rds.amazonaws.com ^
    --port=3306  ^
    --user ReplUser ^
    --password <password> ^
    --result-file=/tmp/binlog.txt
```

Amazon RDS normally purges a binary log as soon as possible. However, the binary log must still be available on the instance to be accessed by mysqlbinlog. To specify the number of hours for RDS to retain binary logs, use the `mysql.rds_set_configuration` stored procedure. Specify a period with enough time for you to download the logs. After you set the retention period, monitor storage usage for the DB instance to ensure that the retained binary logs don't take up too much storage.

The following example sets the retention period to 1 day.

```
call mysql.rds_set_configuration('binlog retention hours', 24); 
```

To display the current setting, use the `mysql.rds_show_configuration` stored procedure.

```
call mysql.rds_show_configuration; 
```

# Enabling MariaDB binary log annotation
<a name="USER_LogAccess.MariaDB.BinarylogAnnotation"></a>

In a MariaDB DB instance, you can use the `Annotate_rows` event to annotate a row event with a copy of the SQL query that caused the row event. This approach provides similar functionality to enabling the `binlog_rows_query_log_events` parameter on an RDS for MySQL DB instance.

You can enable binary log annotations globally by creating a custom parameter group and setting the `binlog_annotate_row_events` parameter to **1**. You can also enable annotations at the session level by calling `SET SESSION binlog_annotate_row_events = 1`. Use the `replicate_annotate_row_events` parameter to replicate binary log annotations to the replica instance if binary logging is enabled on it. No special privileges are required to use these settings.

The following is an example of a row-based transaction in MariaDB. The use of row-based logging is triggered by setting the transaction isolation level to read-committed.

```
CREATE DATABASE IF NOT EXISTS test;
USE test;
CREATE TABLE square(x INT PRIMARY KEY, y INT NOT NULL) ENGINE = InnoDB;
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
INSERT INTO square(x, y) VALUES(5, 5 * 5);
COMMIT;
```

Without annotations, the binary log entries for the transaction look like the following:

```
BEGIN
/*!*/;
# at 1163
# at 1209
#150922  7:55:57 server id 1855786460  end_log_pos 1209         Table_map: `test`.`square` mapped to number 76
#150922  7:55:57 server id 1855786460  end_log_pos 1247         Write_rows: table id 76 flags: STMT_END_F
### INSERT INTO `test`.`square`
### SET
###   @1=5
###   @2=25
# at 1247
#150922  7:56:01 server id 1855786460  end_log_pos 1274         Xid = 62
COMMIT/*!*/;
```

The following statements enable session-level annotations for this same transaction and disable them after committing the transaction:

```
CREATE DATABASE IF NOT EXISTS test;
USE test;
CREATE TABLE square(x INT PRIMARY KEY, y INT NOT NULL) ENGINE = InnoDB;
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET SESSION binlog_annotate_row_events = 1;
BEGIN;
INSERT INTO square(x, y) VALUES(5, 5 * 5);
COMMIT;
SET SESSION binlog_annotate_row_events = 0;
```

With annotations, the binary log entries for the transaction look like the following:

```
BEGIN
/*!*/;
# at 423
# at 483
# at 529
#150922  8:04:24 server id 1855786460  end_log_pos 483  Annotate_rows:
#Q> INSERT INTO square(x, y) VALUES(5, 5 * 5)
#150922  8:04:24 server id 1855786460  end_log_pos 529  Table_map: `test`.`square` mapped to number 76
#150922  8:04:24 server id 1855786460  end_log_pos 567  Write_rows: table id 76 flags: STMT_END_F
### INSERT INTO `test`.`square`
### SET
###   @1=5
###   @2=25
# at 567
#150922  8:04:26 server id 1855786460  end_log_pos 594  Xid = 88
COMMIT/*!*/;
```

# Amazon RDS for Microsoft SQL Server database log files
<a name="USER_LogAccess.Concepts.SQLServer"></a>

You can access Microsoft SQL Server error logs, agent logs, trace files, and dump files by using the Amazon RDS console, AWS CLI, or RDS API. For more information about viewing, downloading, and watching file-based database logs, see [Monitoring Amazon RDS log files](USER_LogAccess.md).

## Retention schedule
<a name="USER_LogAccess.Concepts.SQLServer.Retention"></a>

Log files are rotated each day and whenever your DB instance is restarted. The following is the retention schedule for Microsoft SQL Server logs on Amazon RDS. 


****  

| Log type | Retention schedule | 
| --- | --- | 
|  Error logs  |  A maximum of 30 error logs are retained. Amazon RDS might delete error logs older than 7 days.   | 
|  Agent logs  |  A maximum of 10 agent logs are retained. Amazon RDS might delete agent logs older than 7 days.   | 
|  Trace files  |  Trace files are retained according to the trace file retention period of your DB instance. The default trace file retention period is 7 days. To modify the trace file retention period for your DB instance, see [Setting the retention period for trace and dump files](Appendix.SQLServer.CommonDBATasks.TraceFiles.md#Appendix.SQLServer.CommonDBATasks.TraceFiles.PurgeTraceFiles).   | 
|  Dump files  |  Dump files are retained according to the dump file retention period of your DB instance. The default dump file retention period is 7 days. To modify the dump file retention period for your DB instance, see [Setting the retention period for trace and dump files](Appendix.SQLServer.CommonDBATasks.TraceFiles.md#Appendix.SQLServer.CommonDBATasks.TraceFiles.PurgeTraceFiles).   | 
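
As an illustration of the error log schedule in the table above, the following sketch models the documented limits. The helper and ages are hypothetical, and actual deletion timing is controlled by Amazon RDS:

```python
# Sketch of the documented SQL Server error log retention: at most 30 logs
# are kept, and logs older than 7 days might be deleted. Ages are in days,
# newest first; the function returns the ages of logs that are still within
# both documented limits.
def retained_error_logs(ages_days):
    recent = [a for a in ages_days if a <= 7]   # older logs might be deleted
    return recent[:30]                          # at most 30 logs are retained

print(retained_error_logs([0, 1, 3, 8, 10]))    # → [0, 1, 3]
```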

## Viewing the SQL Server error log by using the rds\_read\_error\_log procedure
<a name="USER_LogAccess.Concepts.SQLServer.Proc"></a>

You can use the Amazon RDS stored procedure `rds_read_error_log` to view error logs and agent logs. For more information, see [Viewing error and agent logs](Appendix.SQLServer.CommonDBATasks.Logs.md#Appendix.SQLServer.CommonDBATasks.Logs.SP). 

## Publishing SQL Server logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.SQLServer.PublishtoCloudWatchLogs"></a>

With Amazon RDS for SQL Server, you can publish error and agent log events directly to Amazon CloudWatch Logs. Analyze the log data with CloudWatch Logs, then use CloudWatch to create alarms and view metrics.

With CloudWatch Logs, you can do the following:
+ Store logs in highly durable storage space with a retention period that you define.
+ Search and filter log data.
+ Share log data between accounts.
+ Export logs to Amazon S3.
+ Stream data to Amazon OpenSearch Service.
+ Process log data in real time with Amazon Kinesis Data Streams. For more information, see [Working with Amazon CloudWatch Logs](https://docs.aws.amazon.com/kinesisanalytics/latest/dev/cloudwatch-logs.html) in the *Amazon Managed Service for Apache Flink for SQL Applications Developer Guide*.

 Amazon RDS publishes each SQL Server database log as a separate database stream in the log group. For example, if you publish the agent logs and error logs, error data is stored in an error log stream in the `/aws/rds/instance/my_instance.node1/error` log group, and agent log data is stored in the `/aws/rds/instance/my_instance.node1/agent` log group.

For Multi-AZ DB instances, Amazon RDS publishes the database log as two separate streams in the log group. For example, if you publish the error logs, the error data is stored in the `/aws/rds/instance/my_instance.node1/error` and `/aws/rds/instance/my_instance.node2/error` log streams. The log streams don't change during a failover, and the error log stream of each node can contain error logs from either the primary or the secondary instance. With Multi-AZ, a log stream is automatically created for `/aws/rds/instance/my_instance/rds-events` to store event data such as DB instance failovers.
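
Following this per-node pattern, the log group names can be derived from the instance identifier. The helper below is a hypothetical illustration, not an Amazon RDS API:

```python
# Hypothetical helper that lists the per-node log groups for a SQL Server
# log type, following the /aws/rds/instance/<id>.<node>/<log> pattern.
# Multi-AZ DB instances get a stream for each of the two nodes.
def sqlserver_log_groups(instance_id: str, log_type: str, multi_az: bool = False):
    nodes = ["node1", "node2"] if multi_az else ["node1"]
    return [f"/aws/rds/instance/{instance_id}.{node}/{log_type}" for node in nodes]

print(sqlserver_log_groups("my_instance", "error", multi_az=True))
# → ['/aws/rds/instance/my_instance.node1/error',
#    '/aws/rds/instance/my_instance.node2/error']
```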

**Note**  
Publishing SQL Server logs to CloudWatch Logs isn't enabled by default. Publishing trace and dump files isn't supported. Publishing SQL Server logs to CloudWatch Logs is supported in all AWS Regions.

### Console
<a name="USER_LogAccess.SQLServer.PublishtoCloudWatchLogs.console"></a>

**To publish SQL Server DB logs to CloudWatch Logs from the AWS Management Console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance that you want to modify.

1. Choose **Modify**.

1. In the **Log exports** section, choose the logs that you want to start publishing to CloudWatch Logs.

   You can choose **Agent log**, **Error log**, or both.

1. Choose **Continue**, and then choose **Modify DB Instance** on the summary page.

### AWS CLI
<a name="USER_LogAccess.SQLServer.PublishtoCloudWatchLogs.CLI"></a>

To publish SQL Server logs, you can use the [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command with the following parameters: 
+ `--db-instance-identifier`
+ `--cloudwatch-logs-export-configuration`

**Note**  
A change to the `--cloudwatch-logs-export-configuration` option is always applied to the DB instance immediately. Therefore, the `--apply-immediately` and `--no-apply-immediately` options have no effect.

You can also publish SQL Server logs using the following commands: 
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html)

**Example**  
The following example creates a SQL Server DB instance with CloudWatch Logs publishing enabled. The `--enable-cloudwatch-logs-exports` value is a JSON array of strings that can include `error`, `agent`, or both.  
For Linux, macOS, or Unix:  

```
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-cloudwatch-logs-exports '["error","agent"]' \
    --db-instance-class db.m4.large \
    --engine sqlserver-se
```
For Windows:  

```
aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --enable-cloudwatch-logs-exports "[\"error\",\"agent\"]" ^
    --db-instance-class db.m4.large ^
    --engine sqlserver-se
```
When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).

**Example**  
The following example modifies an existing SQL Server DB instance to publish log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `EnableLogTypes`, and its value is an array of strings that can include `error`, `agent`, or both.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error","agent"]}'
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"EnableLogTypes\":[\"error\",\"agent\"]}"
```
When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).

**Example**  
The following example modifies an existing SQL Server DB instance to disable publishing agent log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `DisableLogTypes`, and its value is an array of strings that can include `error`, `agent`, or both.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["agent"]}'
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"DisableLogTypes\":[\"agent\"]}"
```
When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).

# MySQL database log files
<a name="USER_LogAccess.Concepts.MySQL"></a>

You can monitor the MySQL logs directly through the Amazon RDS console, Amazon RDS API, AWS CLI, or AWS SDKs. You can also access MySQL logs by directing the logs to a database table in the main database and querying that table. You can use the mysqlbinlog utility to download a binary log. 

For more information about viewing, downloading, and watching file-based database logs, see [Monitoring Amazon RDS log files](USER_LogAccess.md).

**Topics**
+ [Overview of RDS for MySQL database logs](USER_LogAccess.MySQL.LogFileSize.md)
+ [Publishing MySQL logs to Amazon CloudWatch Logs](USER_LogAccess.MySQLDB.PublishtoCloudWatchLogs.md)
+ [Sending MySQL log output to tables](Appendix.MySQL.CommonDBATasks.Logs.md)
+ [Configuring RDS for MySQL binary logging for Single-AZ databases](USER_LogAccess.MySQL.BinaryFormat.md)
+ [Configuring MySQL binary logging for Multi-AZ DB clusters](USER_Binlog.MultiAZ.md)
+ [Accessing MySQL binary logs](USER_LogAccess.MySQL.Binarylog.md)

# Overview of RDS for MySQL database logs
<a name="USER_LogAccess.MySQL.LogFileSize"></a>

You can monitor the following types of RDS for MySQL log files:
+ Error log
+ Slow query log
+ General log
+ Audit log
+ Instance log
+ IAM database authentication error log

The RDS for MySQL error log is generated by default. You can generate the slow query and general logs by setting parameters in your DB parameter group.

**Topics**
+ [RDS for MySQL error logs](#USER_LogAccess.MySQL.Errorlog)
+ [RDS for MySQL slow query and general logs](#USER_LogAccess.MySQL.Generallog)
+ [MySQL audit log](#USER_LogAccess.MySQL.Auditlog)
+ [Log rotation and retention for RDS for MySQL](#USER_LogAccess.MySQL.LogFileSize.retention)
+ [Size limits on redo logs](#USER_LogAccess.MySQL.LogFileSize.RedoLogs)

## RDS for MySQL error logs
<a name="USER_LogAccess.MySQL.Errorlog"></a>

RDS for MySQL writes errors in the `mysql-error.log` file. Each log file has the hour it was generated (in UTC) appended to its name. The log files also have a timestamp that helps you determine when the log entries were written.

RDS for MySQL writes to the error log only on startup, shutdown, and when it encounters errors. A DB instance can go hours or days without new entries being written to the error log. If you see no recent entries, it's because the server didn't encounter an error that would result in a log entry.

By design, the error logs are filtered so that only unexpected events such as errors are shown. However, the error logs also contain some additional database information, for example query progress, which isn't shown. Therefore, even without any actual errors, the size of the error logs might increase because of ongoing database activity. Also, while you might see a size in bytes or kilobytes for the error logs in the AWS Management Console, they might have 0 bytes when you download them.

RDS for MySQL writes `mysql-error.log` to disk every 5 minutes. It appends the contents of the log to `mysql-error-running.log`.

RDS for MySQL rotates the `mysql-error-running.log` file every hour. It retains the logs generated during the last two weeks.

**Note**  
The log retention period is different between Amazon RDS and Aurora.

## RDS for MySQL slow query and general logs
<a name="USER_LogAccess.MySQL.Generallog"></a>

You can write the RDS for MySQL slow query log and the general log to a file or a database table. To do so, set parameters in your DB parameter group. For information about creating and modifying a DB parameter group, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md). You must set these parameters before you can view the slow query log or general log in the Amazon RDS console or by using the Amazon RDS API, Amazon RDS CLI, or AWS SDKs.

You can control RDS for MySQL logging by using the parameters in this list:
+ `slow_query_log`: To create the slow query log, set to 1. The default is 0.
+ `general_log`: To create the general log, set to 1. The default is 0.
+ `long_query_time`: To prevent fast-running queries from being logged in the slow query log, specify a value for the shortest query runtime to be logged, in seconds. The default is 10 seconds; the minimum is 0. If `log_output = FILE`, you can specify a floating point value that goes to microsecond resolution. If `log_output = TABLE`, you must specify an integer value with second resolution. Only queries whose runtime exceeds the `long_query_time` value are logged. For example, setting `long_query_time` to 0.1 prevents any query that runs for less than 100 milliseconds from being logged.
+ `log_queries_not_using_indexes`: To log all queries that do not use an index to the slow query log, set to 1. Queries that don't use an index are logged even if their runtime is less than the value of the `long_query_time` parameter. The default is 0.
+ `log_output`: You can specify one of the following options for the `log_output` parameter. 
  + **TABLE** (default) – Write general queries to the `mysql.general_log` table, and slow queries to the `mysql.slow_log` table.
  + **FILE** – Write both general and slow query logs to the file system.
  + **NONE** – Disable logging.
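
As an illustration, the following AWS CLI sketch sets these parameters in a custom DB parameter group. The group name and parameter values are placeholders, not recommendations; the parameter group must already be associated with your DB instance for the settings to take effect.

```shell
# Sketch: enable the slow query log in a custom parameter group
# (mydbparametergroup is a placeholder name)
aws rds modify-db-parameter-group \
    --db-parameter-group-name mydbparametergroup \
    --parameters "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
                 "ParameterName=long_query_time,ParameterValue=5,ApplyMethod=immediate" \
                 "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"
```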

For slow query data to appear in Amazon CloudWatch Logs, the following conditions must be met:
+ CloudWatch Logs must be configured to include slow query logs.
+ `slow_query_log` must be enabled.
+ `log_output` must be set to `FILE`.
+ The query must take longer than the time configured for `long_query_time`.
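
The conditions above can be sketched as a simple check. The variable values below represent hypothetical current settings for illustration:

```shell
# Hypothetical current settings
slow_query_log=1          # slow query log enabled
log_output=FILE           # logs written to the file system
long_query_time=10        # threshold in seconds
query_runtime=12          # runtime of a hypothetical query, in seconds

# A query reaches CloudWatch Logs only when all conditions hold
if [ "$slow_query_log" -eq 1 ] && [ "$log_output" = "FILE" ] \
    && [ "$query_runtime" -gt "$long_query_time" ]; then
    echo "query appears in the slow query log stream"
fi
```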

For more information about the slow query and general logs, go to the following topics in the MySQL documentation:
+ [The slow query log](https://dev.mysql.com/doc/refman/8.0/en/slow-query-log.html)
+ [The general query log](https://dev.mysql.com/doc/refman/8.0/en/query-log.html)

## MySQL audit log
<a name="USER_LogAccess.MySQL.Auditlog"></a>

To access the audit log, the DB instance must use a custom option group with the `MARIADB_AUDIT_PLUGIN` option. For more information, see [MariaDB Audit Plugin support for MySQL](Appendix.MySQL.Options.AuditPlugin.md).

## Log rotation and retention for RDS for MySQL
<a name="USER_LogAccess.MySQL.LogFileSize.retention"></a>

When logging is enabled, Amazon RDS rotates table logs or deletes log files at regular intervals. This measure is a precaution to reduce the possibility of a large log file either blocking database use or affecting performance. RDS for MySQL handles rotation and deletion as follows:
+ The MySQL slow query log, error log, and the general log file sizes are constrained to no more than 2 percent of the allocated storage space for a DB instance. To maintain this threshold, logs are automatically rotated every hour. MySQL removes log files more than two weeks old. If the combined log file size exceeds the threshold after removing old log files, then the oldest log files are deleted until the log file size no longer exceeds the threshold.
+ When `FILE` logging is enabled, log files are examined every hour and log files more than two weeks old are deleted. In some cases, the remaining combined log file size after the deletion might exceed the threshold of 2 percent of a DB instance's allocated space. In these cases, the oldest log files are deleted until the log file size no longer exceeds the threshold.
+ When `TABLE` logging is enabled, in some cases log tables are rotated every 24 hours. This rotation occurs if the space used by the table logs is more than 20 percent of the allocated storage space. It also occurs if the size of all logs combined is greater than 10 GB. If the amount of space used for a DB instance is greater than 90 percent of the DB instance's allocated storage space, then the thresholds for log rotation are reduced. Log tables are then rotated if the space used by the table logs is more than 10 percent of the allocated storage space. They're also rotated if the size of all logs combined is greater than 5 GB. You can subscribe to the `low storage` event category to be notified when log tables are rotated to free up space. For more information, see [Working with Amazon RDS event notification](USER_Events.md).

  When log tables are rotated, the current log table is first copied to a backup log table. Then the entries in the current log table are removed. If the backup log table already exists, then it is deleted before the current log table is copied to the backup. You can query the backup log table if needed. The backup log table for the `mysql.general_log` table is named `mysql.general_log_backup`. The backup log table for the `mysql.slow_log` table is named `mysql.slow_log_backup`.

  You can rotate the `mysql.general_log` table by calling the `mysql.rds_rotate_general_log` procedure. You can rotate the `mysql.slow_log` table by calling the `mysql.rds_rotate_slow_log` procedure.

  Table logs are rotated during a database version upgrade.
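
The thresholds above reduce to simple arithmetic. The following sketch computes them for a hypothetical DB instance with 100 GiB of allocated storage (the storage figure is an assumption for illustration):

```shell
allocated_gib=100                                  # hypothetical allocated storage

# FILE logging: combined slow query, error, and general log files
# are held to 2 percent of allocated storage
file_cap_mib=$(( allocated_gib * 1024 * 2 / 100 ))
echo "FILE log cap: ${file_cap_mib} MiB"           # 2048 MiB

# TABLE logging: rotation triggers at 20 percent of storage or 10 GB combined,
# dropping to 10 percent or 5 GB once the instance is over 90 percent full
table_trigger_gib=$(( allocated_gib * 20 / 100 ))
echo "TABLE rotation trigger: ${table_trigger_gib} GiB or 10 GB"
```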

To work with the logs from the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or AWS SDKs, set the `log_output` parameter to FILE. Like the MySQL error log, these log files are rotated hourly. The log files that were generated during the previous two weeks are retained. Note that the retention period is different between Amazon RDS and Aurora.

## Size limits on redo logs
<a name="USER_LogAccess.MySQL.LogFileSize.RedoLogs"></a>

For RDS for MySQL version 8.0.32 and lower, the default combined size of the redo log files is 256 MB. This amount is derived by multiplying the default value of the `innodb_log_file_size` parameter (128 MB) by the default value of the `innodb_log_files_in_group` parameter (2). For more information, see [Best practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performance](https://aws.amazon.com/blogs/database/best-practices-for-configuring-parameters-for-amazon-rds-for-mysql-part-1-parameters-related-to-performance/). 
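
The derivation can be verified with the defaults themselves:

```shell
# RDS for MySQL 8.0.32 and lower: default redo log capacity
innodb_log_file_size_mb=128      # default size of each redo log file, in MB
innodb_log_files_in_group=2      # default number of redo log files
echo $(( innodb_log_file_size_mb * innodb_log_files_in_group ))   # 256 (MB)
```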

For RDS for MySQL version 8.0.33 and higher minor versions, Amazon RDS uses the `innodb_redo_log_capacity` parameter instead of the `innodb_log_file_size` parameter. The Amazon RDS default value of the `innodb_redo_log_capacity` parameter is 2 GB. For more information, see [ Changes in MySQL 8.0.30](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-30.html) in the MySQL documentation.

Starting with MySQL 8.4, Amazon RDS enables the `innodb_dedicated_server` parameter by default. With the `innodb_dedicated_server` parameter, the database engine calculates the `innodb_buffer_pool_size` and `innodb_redo_log_capacity` parameters. For more information, see [Configuring buffer pool size and redo log capacity in MySQL 8.4](Appendix.MySQL.CommonDBATasks.Config.Size.8.4.md).

# Publishing MySQL logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.MySQLDB.PublishtoCloudWatchLogs"></a>

You can configure your MySQL DB instance to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage. 

Amazon RDS publishes each MySQL database log as a separate log stream in the log group. For example, if you configure the export function to include the slow query log, slow query data is stored in a slow query log stream in the `/aws/rds/instance/my_instance/slowquery` log group. 

The error log is enabled by default. The following table summarizes the requirements for the other MySQL logs.


| Log | Requirement | 
| --- | --- | 
|  Audit log  |  The DB instance must use a custom option group with the `MARIADB_AUDIT_PLUGIN` option.  | 
|  General log  |  The DB instance must use a custom parameter group with the parameter setting `general_log = 1` to enable the general log.  | 
|  Slow query log  |  The DB instance must use a custom parameter group with the parameter setting `slow_query_log = 1` to enable the slow query log.  | 
|  IAM database authentication error log  |  You must enable the log type `iam-db-auth-error` for a DB instance by creating or modifying a DB instance.  | 
|  Log output  |  The DB instance must use a custom parameter group with the parameter setting `log_output = FILE` to write logs to the file system and publish them to CloudWatch Logs.  | 

## Console
<a name="USER_LogAccess.MySQL.PublishtoCloudWatchLogs.CON"></a>

**To publish MySQL logs to CloudWatch Logs using the console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance that you want to modify.

1. Choose **Modify**.

1. In the **Log exports** section, choose the logs that you want to start publishing to CloudWatch Logs.

1. Choose **Continue**, and then choose **Modify DB Instance** on the summary page.

## AWS CLI
<a name="USER_LogAccess.MySQL.PublishtoCloudWatchLogs.CLI"></a>

 You can publish MySQL logs with the AWS CLI. You can call the [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command with the following parameters: 
+ `--db-instance-identifier`
+ `--cloudwatch-logs-export-configuration`

**Note**  
A change to the `--cloudwatch-logs-export-configuration` option is always applied to the DB instance immediately. Therefore, the `--apply-immediately` and `--no-apply-immediately` options have no effect.

You can also publish MySQL logs by calling the following AWS CLI commands: 
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-s3.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-s3.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html)

Run one of these AWS CLI commands with the following options: 
+ `--db-instance-identifier`
+ `--enable-cloudwatch-logs-exports`
+ `--db-instance-class`
+ `--engine`

Other options might be required depending on the AWS CLI command you run.

**Example**  
The following example modifies an existing MySQL DB instance to publish log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `EnableLogTypes`, and its value is an array of strings with any combination of `audit`, `error`, `general`, and `slowquery`.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit","error","general","slowquery"]}'
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit","error","general","slowquery"]}'
```

**Example**  
The following example creates a MySQL DB instance and publishes log files to CloudWatch Logs. The `--enable-cloudwatch-logs-exports` value is a JSON array of strings. The strings can be any combination of `audit`, `error`, `general`, and `slowquery`.  
For Linux, macOS, or Unix:  

```
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' \
    --db-instance-class db.m4.large \
    --engine MySQL
```
For Windows:  

```
aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --enable-cloudwatch-logs-exports '["audit","error","general","slowquery"]' ^
    --db-instance-class db.m4.large ^
    --engine MySQL
```

## RDS API
<a name="USER_LogAccess.MySQL.PublishtoCloudWatchLogs.API"></a>

You can publish MySQL logs with the RDS API. You can call the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) action with the following parameters: 
+ `DBInstanceIdentifier`
+ `CloudwatchLogsExportConfiguration`

**Note**  
A change to the `CloudwatchLogsExportConfiguration` parameter is always applied to the DB instance immediately. Therefore, the `ApplyImmediately` parameter has no effect.

You can also publish MySQL logs by calling the following RDS API operations: 
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html)
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html)
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromS3.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromS3.html)
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html)

Run one of these RDS API operations with the following parameters: 
+ `DBInstanceIdentifier`
+ `EnableCloudwatchLogsExports`
+ `Engine`
+ `DBInstanceClass`

Other parameters might be required depending on the RDS API operation you run.

# Sending MySQL log output to tables
<a name="Appendix.MySQL.CommonDBATasks.Logs"></a>

You can direct the general and slow query logs to tables on the DB instance by creating a DB parameter group and setting the `log_output` server parameter to `TABLE`. General queries are then logged to the `mysql.general_log` table, and slow queries are logged to the `mysql.slow_log` table. You can query the tables to access the log information. Enabling this logging increases the amount of data written to the database, which can degrade performance.

Both the general log and the slow query logs are disabled by default. In order to enable logging to tables, you must also set the `general_log` and `slow_query_log` server parameters to `1`.

Log tables keep growing until the respective logging activities are turned off by resetting the appropriate parameter to `0`. A large amount of data often accumulates over time, which can use up a considerable percentage of your allocated storage space. Amazon RDS doesn't allow you to truncate the log tables, but you can move their contents. Rotating a table saves its contents to a backup table and then creates a new empty log table. You can manually rotate the log tables with the following command line procedures, where the command prompt is indicated by `PROMPT>`: 

```
PROMPT> CALL mysql.rds_rotate_slow_log;
PROMPT> CALL mysql.rds_rotate_general_log;
```

To completely remove the old data and reclaim the disk space, call the appropriate procedure twice in succession. 
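
For example, you might script the double rotation with the mysql command-line client. The endpoint and user name below are placeholders:

```shell
# Calling the rotation procedure twice clears both the current
# and the backup slow log tables (host and user are placeholders)
mysql -h mydbinstance.example.us-east-1.rds.amazonaws.com -u admin -p \
    -e "CALL mysql.rds_rotate_slow_log; CALL mysql.rds_rotate_slow_log;"
```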

# Configuring RDS for MySQL binary logging for Single-AZ databases
<a name="USER_LogAccess.MySQL.BinaryFormat"></a>

The *binary log* is a set of log files that contain information about data modifications made to a MySQL server instance. The binary log contains information such as the following:
+ Events that describe database changes such as table creation or row modifications
+ Information about the duration of each statement that updated data
+ Events for statements that could have updated data but didn't

The binary log records statements that are sent during replication. It is also required for some recovery operations. For more information, see [The Binary Log](https://dev.mysql.com/doc/refman/8.0/en/binary-log.html) in the MySQL documentation.

The automated backups feature determines whether binary logging is turned on or off for MySQL. You have the following options:

Turn binary logging on  
Set the backup retention period to a nonzero value.

Turn binary logging off  
Set the backup retention period to zero.

For more information, see [Enabling automated backups](USER_WorkingWithAutomatedBackups.Enabling.md).
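
As a sketch, turning on binary logging from the AWS CLI amounts to setting a nonzero backup retention period (the instance identifier is a placeholder):

```shell
# Enable automated backups (and therefore binary logging) with 7-day retention
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --backup-retention-period 7 \
    --apply-immediately
```

Setting `--backup-retention-period 0` instead turns off automated backups, and with them binary logging.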

MySQL on Amazon RDS supports the *row-based*, *statement-based*, and *mixed* binary logging formats. We recommend mixed unless you need a specific binlog format. For details on the different MySQL binary log formats, see [Binary Logging Formats](https://dev.mysql.com/doc/refman/8.0/en/binary-log-formats.html) in the MySQL documentation.

If you plan to use replication, the binary logging format is important because it determines the record of data changes that is recorded in the source and sent to the replication targets. For information about the advantages and disadvantages of different binary logging formats for replication, see [Advantages and Disadvantages of Statement-Based and Row-Based Replication](https://dev.mysql.com/doc/refman/8.0/en/replication-sbr-rbr.html) in the MySQL documentation.

**Important**  
With MySQL 8.0.34, MySQL deprecated the `binlog_format` parameter. In later MySQL versions, MySQL plans to remove the parameter and only support row-based replication. As a result, we recommend using row-based logging for new MySQL replication setups. For more information, see [binlog_format](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_format) in the MySQL documentation.  
MySQL versions 8.0 and 8.4 accept the parameter `binlog_format`. When using this parameter, MySQL issues a deprecation warning. In a future major release, MySQL will remove the parameter `binlog_format`.  
Statement-based replication can cause inconsistencies between the source DB instance and a read replica. For more information, see [Determination of Safe and Unsafe Statements in Binary Logging](https://dev.mysql.com/doc/refman/8.0/en/replication-rbr-safe-unsafe.html) in the MySQL documentation.  
Enabling binary logging increases the number of write disk I/O operations to the DB instance. You can monitor IOPS usage with the `WriteIOPS` CloudWatch metric.

**To set the MySQL binary logging format**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the DB parameter group associated with the DB instance that you want to modify.

   You can't modify a default parameter group. If the DB instance is using a default parameter group, create a new parameter group and associate it with the DB instance.

   For more information on parameter groups, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md).

1. From **Actions**, choose **Edit**.

1. Set the `binlog_format` parameter to the binary logging format of your choice (`ROW`, `STATEMENT`, or `MIXED`).

   You can turn off binary logging by setting the backup retention period of a DB instance to zero, but this disables daily automated backups. Disabling automated backups turns off or disables the `log_bin` session variable. This disables binary logging on the RDS for MySQL DB instance, which in turn resets the `binlog_format` session variable to the default value of `ROW` in the database. We recommend that you don't disable backups. For more information about the **Backup retention period** setting, see [Settings for DB instances](USER_ModifyInstance.Settings.md).

1. Choose **Save changes** to save the updates to the DB parameter group.

Because the `binlog_format` parameter is dynamic in RDS for MySQL, you don't need to reboot the DB instance for the changes to apply. (Note that in Aurora MySQL, this parameter is static. For more information, see [Configuring Aurora MySQL binary logging](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.MySQL.BinaryFormat.html).)

**Important**  
Changing a DB parameter group affects all DB instances that use that parameter group. If you want to specify different binary logging formats for different MySQL DB instances in an AWS Region, the DB instances must use different DB parameter groups. These parameter groups specify the different logging formats. Assign the appropriate DB parameter group to each DB instance.

# Configuring MySQL binary logging for Multi-AZ DB clusters
<a name="USER_Binlog.MultiAZ"></a>

Binary logging in Amazon RDS for MySQL Multi-AZ DB clusters records all database changes to support replication, point-in-time recovery, and auditing. In Multi-AZ DB clusters, binary logs synchronize secondary nodes with the primary node, ensuring data consistency across Availability Zones and enabling seamless failovers. 

To optimize binary logging, Amazon RDS supports binary log transaction compression, which reduces the storage requirements for binary logs and improves replication efficiency.

**Topics**
+ [Binary log transaction compression for Multi-AZ DB clusters](#USER_Binlog.MultiAZ.compression)
+ [Configuring binary log transaction compression for Multi-AZ DB clusters](#USER_Binlog.MultiAZ.configuring)

## Binary log transaction compression for Multi-AZ DB clusters
<a name="USER_Binlog.MultiAZ.compression"></a>

Binary log transaction compression uses the zstd algorithm to reduce the size of transaction data stored in binary logs. When enabled, the MySQL database engine compresses transaction payloads into a single event, minimizing I/O and storage overhead. This feature improves database performance, reduces binary log size, and optimizes resource use for managing and replicating logs in Multi-AZ DB clusters.

Amazon RDS provides binary log transaction compression for RDS for MySQL Multi-AZ DB clusters through the following parameters:
+ `binlog_transaction_compression` – When enabled (`1`), the database engine compresses transaction payloads and writes them to the binary log as a single event. This reduces storage usage and I/O overhead. The parameter is disabled by default.
+ `binlog_transaction_compression_level_zstd` – Configures the zstd compression level for binary log transactions. Higher values increase the compression ratio, reducing storage requirements further but increasing CPU and memory usage for compression. The default value is 3, with a range of 1-22.

These parameters let you fine-tune binary log compression based on workload characteristics and resource availability. For more information, see [Binary Log Transaction Compression](https://dev.mysql.com/doc/refman/8.4/en/binary-log-transaction-compression.html) in the MySQL documentation.

Binary log transaction compression has the following main benefits:
+ Compression decreases the size of binary logs, particularly for workloads with large transactions or high write volumes.
+ Smaller binary logs reduce network and I/O overhead, enhancing replication performance.
+ The `binlog_transaction_compression_level_zstd` parameter provides control over the trade-off between compression ratio and resource consumption.

## Configuring binary log transaction compression for Multi-AZ DB clusters
<a name="USER_Binlog.MultiAZ.configuring"></a>

To configure binary log transaction compression for an RDS for MySQL Multi-AZ DB cluster, modify the relevant cluster parameter settings to match your workload requirements.

### Console
<a name="USER_Binlog.MultiAZ.configuring-console"></a>

**To enable binary log transaction compression**

1. Modify the DB cluster parameter group to set the `binlog_transaction_compression` parameter to `1`.

1. (Optional) Adjust the value of the `binlog_transaction_compression_level_zstd` parameter based on your workload requirements and resource availability.

For more information, see [Modifying parameters in a DB cluster parameter group](USER_WorkingWithParamGroups.ModifyingCluster.md).

### AWS CLI
<a name="USER_Binlog.MultiAZ.configuring-cli"></a>

To configure binary log transaction compression using the AWS CLI, use the [modify-db-cluster-parameter-group](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster-parameter-group.html) command.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name your-cluster-parameter-group \
  --parameters "ParameterName=binlog_transaction_compression,ParameterValue=1,ApplyMethod=pending-reboot"
```
For Windows:  

```
aws rds modify-db-cluster-parameter-group ^
  --db-cluster-parameter-group-name your-cluster-parameter-group ^
  --parameters "ParameterName=binlog_transaction_compression,ParameterValue=1,ApplyMethod=pending-reboot"
```
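
To also tune the compression level, the same command can target `binlog_transaction_compression_level_zstd`. The level value below is illustrative, not a recommendation:

```shell
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name your-cluster-parameter-group \
  --parameters "ParameterName=binlog_transaction_compression_level_zstd,ParameterValue=5,ApplyMethod=pending-reboot"
```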

### RDS API
<a name="USER_Binlog.MultiAZ.configuring-api"></a>

To configure binary log transaction compression using the Amazon RDS API, use the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBClusterParameterGroup.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBClusterParameterGroup.html) operation.

# Accessing MySQL binary logs
<a name="USER_LogAccess.MySQL.Binarylog"></a>

You can use the mysqlbinlog utility to download or stream binary logs from RDS for MySQL DB instances. The binary log is downloaded to your local computer, where you can perform actions such as replaying the log using the mysql utility. For more information about using the mysqlbinlog utility, see [Using mysqlbinlog to back up binary log files](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog-backup.html) in the MySQL documentation.

To run the mysqlbinlog utility against an Amazon RDS instance, use the following options:
+ `--read-from-remote-server` – Required.
+ `--host` – The DNS name from the endpoint of the instance.
+ `--port` – The port used by the instance.
+ `--user` – A MySQL user that has been granted the `REPLICATION SLAVE` permission.
+ `--password` – The password for the MySQL user, or omit a password value so that the utility prompts you for a password.
+ `--raw` – Download the file in binary format.
+ `--result-file` – The local file to receive the raw output.
+ `--stop-never` – Stream the binary log files.
+ `--verbose` – When you use the `ROW` binlog format, include this option to see the row events as pseudo-SQL statements. For more information on the `--verbose` option, see [mysqlbinlog row event display](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog-row-events.html) in the MySQL documentation.
+ Specify the names of one or more binary log files. To get a list of the available logs, use the SQL command `SHOW BINARY LOGS`.

For more information about mysqlbinlog options, see [mysqlbinlog — Utility for processing binary log files](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog.html) in the MySQL documentation.

The following examples show how to use the mysqlbinlog utility.

For Linux, macOS, or Unix:

```
mysqlbinlog \
    --read-from-remote-server \
    --host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com \
    --port=3306  \
    --user ReplUser \
    --password \
    --raw \
    --verbose \
    --result-file=/tmp/ \
    binlog.00098
```

For Windows:

```
mysqlbinlog ^
    --read-from-remote-server ^
    --host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com ^
    --port=3306  ^
    --user ReplUser ^
    --password ^
    --raw ^
    --verbose ^
    --result-file=/tmp/ ^
    binlog.00098
```

Binary logs must remain available on the DB instance for the mysqlbinlog utility to access them. To ensure their availability, use the [mysql.rds_set_configuration](mysql-stored-proc-configuring.md#mysql_rds_set_configuration) stored procedure and specify a period with enough time for you to download the logs. If this configuration isn't set, Amazon RDS purges the binary logs as soon as possible, leading to gaps in the binary logs that the mysqlbinlog utility retrieves. 

The following example sets the retention period to 1 day.

```
call mysql.rds_set_configuration('binlog retention hours', 24);
```

To display the current setting, use the [mysql.rds_show_configuration](mysql-stored-proc-configuring.md#mysql_rds_show_configuration) stored procedure.

```
call mysql.rds_show_configuration;
```

# Amazon RDS for Oracle database log files
<a name="USER_LogAccess.Concepts.Oracle"></a>

You can access Oracle alert logs, audit files, and trace files by using the Amazon RDS console or API. For more information about viewing, downloading, and watching file-based database logs, see [Monitoring Amazon RDS log files](USER_LogAccess.md). 

The Oracle audit files provided are the standard Oracle auditing files. Amazon RDS supports the Oracle fine-grained auditing (FGA) feature. However, log access doesn't provide access to FGA events that are stored in the `SYS.FGA_LOG$` table and that are accessible through the `DBA_FGA_AUDIT_TRAIL` view. 

The [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBLogFiles.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBLogFiles.html) API operation that lists the Oracle log files that are available for a DB instance ignores the `MaxRecords` parameter and returns up to 1,000 records. The call returns `LastWritten` as a POSIX date in milliseconds.

**Topics**
+ [Retention schedule](#USER_LogAccess.Concepts.Oracle.Retention)
+ [Working with Oracle trace files](#USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles)
+ [Publishing Oracle logs to Amazon CloudWatch Logs](#USER_LogAccess.Oracle.PublishtoCloudWatchLogs)
+ [Accessing alert logs and listener logs](#USER_LogAccess.Concepts.Oracle.AlertLogAndListenerLog)

## Retention schedule
<a name="USER_LogAccess.Concepts.Oracle.Retention"></a>

The Oracle database engine might rotate log files if they get very large. To retain audit or trace files, download them. If you store the files locally, you reduce your Amazon RDS storage costs and make more space available for your data. 

The following table shows the retention schedule for Oracle alert logs, audit files, and trace files on Amazon RDS. 


****  

| Log type | Retention schedule | 
| --- | --- | 
|  Alert logs  |   The text alert log is rotated daily with 30-day retention managed by Amazon RDS. The XML alert log is retained for at least seven days. You can access this log by using the `ALERTLOG` view.   | 
|  Audit files  |   The default retention period for audit files is seven days. Amazon RDS might delete audit files older than seven days.   | 
|  Trace files  |  The default retention period for trace files is seven days. Amazon RDS might delete trace files older than seven days.   | 
|  Listener logs  |   The default retention period for the listener logs is seven days. Amazon RDS might delete listener logs older than seven days.   | 

**Note**  
Audit files and trace files share the same retention configuration.

## Working with Oracle trace files
<a name="USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles"></a>

Following, you can find descriptions of Amazon RDS procedures to create, refresh, access, and delete trace files.

**Topics**
+ [Listing files](#USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles.ViewingBackgroundDumpDest)
+ [Generating trace files and tracing a session](#USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles.Generating)
+ [Retrieving trace files](#USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles.Retrieving)
+ [Purging trace files](#USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles.Purging)

### Listing files
<a name="USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles.ViewingBackgroundDumpDest"></a>

You can use either of two procedures to allow access to any file in the `background_dump_dest` path. The first procedure refreshes a view containing a listing of all files currently in `background_dump_dest`. 

```
EXEC rdsadmin.manage_tracefiles.refresh_tracefile_listing;
```

After the view is refreshed, query the following view to access the results.

```
SELECT * FROM rdsadmin.tracefile_listing;
```

An alternative to the previous process is to query a table function, which streams nonrelational data in a table-like format, to list database directory contents.

```
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP'));
```

The following query shows the text of a log file.

```
SELECT text FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP','alert_dbname.log.date'));
```

On a read replica, get the name of the BDUMP directory by querying `V$DATABASE.DB_UNIQUE_NAME`. If the unique name is `DATABASE_B`, then the BDUMP directory is `BDUMP_B`. The following example queries the BDUMP name on a replica and then uses this name to query the contents of `alert_DATABASE.log.2020-06-23`.

```
SELECT 'BDUMP' || (SELECT regexp_replace(DB_UNIQUE_NAME,'.*(_[A-Z])', '\1') FROM V$DATABASE) AS BDUMP_VARIABLE FROM DUAL;

BDUMP_VARIABLE
--------------
BDUMP_B

SELECT TEXT FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP_B','alert_DATABASE.log.2020-06-23'));
```
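The same naming rule can be mirrored outside the database. The following Python sketch is a hypothetical helper, not an RDS API; it derives the BDUMP directory name from a `DB_UNIQUE_NAME` value and, unlike the Oracle expression above, deliberately treats a name without a trailing `_<letter>` suffix as the primary:

```python
import re

def bdump_dir(db_unique_name: str) -> str:
    """Derive the BDUMP directory name for a DB_UNIQUE_NAME.

    A unique name ending in a suffix such as _B (a read replica)
    maps to BDUMP_B; a name without that suffix (the primary, in
    this simplified sketch) maps to plain BDUMP.
    """
    match = re.search(r'(_[A-Z])$', db_unique_name)
    return 'BDUMP' + (match.group(1) if match else '')

print(bdump_dir('DATABASE_B'))  # BDUMP_B
print(bdump_dir('DATABASE'))    # BDUMP
```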

### Generating trace files and tracing a session
<a name="USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles.Generating"></a>

Because there are no restrictions on `ALTER SESSION`, many standard methods to generate trace files in Oracle remain available to an Amazon RDS DB instance. The following procedures are provided for trace files that require greater access. 


****  

|  Oracle method  |  Amazon RDS method | 
| --- | --- | 
|  `oradebug hanganalyze 3 `  |  `EXEC rdsadmin.manage_tracefiles.hanganalyze; `  | 
|  `oradebug dump systemstate 266 `  |  `EXEC rdsadmin.manage_tracefiles.dump_systemstate;`  | 

You can use many standard methods to trace individual sessions connected to an Oracle DB instance in Amazon RDS. To enable tracing for a session, you can run subprograms in PL/SQL packages supplied by Oracle, such as `DBMS_SESSION` and `DBMS_MONITOR`. For more information, see [ Enabling tracing for a session](https://docs.oracle.com/database/121/TGSQL/tgsql_trace.htm#GUID-F872D6F9-E015-481F-80F6-8A7036A6AD29) in the Oracle documentation. 

### Retrieving trace files
<a name="USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles.Retrieving"></a>

You can retrieve any trace file in `background_dump_dest` using a standard SQL query on an Amazon RDS–managed external table. To use this method, you must execute the procedure to set the location for this table to the specific trace file. 

For example, you can use the `rdsadmin.tracefile_listing` view mentioned previously to list all of the trace files on the system. You can then set the `tracefile_table` view to point to the intended trace file using the following procedure. 

```
EXEC rdsadmin.manage_tracefiles.set_tracefile_table_location('CUST01_ora_3260_SYSTEMSTATE.trc');
```

The following example creates an external table in the current schema with the location set to the file provided. You can retrieve the contents into a local file using a SQL query. 

```
SPOOL /tmp/tracefile.txt
SELECT * FROM tracefile_table;
SPOOL OFF;
```

### Purging trace files
<a name="USER_LogAccess.Concepts.Oracle.WorkingWithTracefiles.Purging"></a>

Trace files can accumulate and consume disk space. By default, Amazon RDS purges trace files and log files that are older than seven days. You can view and set the trace file retention period using the `show_configuration` procedure. Run the command `SET SERVEROUTPUT ON` so that you can view the configuration results. 

The following example shows the current trace file retention period, and then sets a new trace file retention period. 

```
# Show the current tracefile retention
SQL> EXEC rdsadmin.rdsadmin_util.show_configuration;
NAME:tracefile retention
VALUE:10080
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in bdump are automatically deleted.

# Set the tracefile retention to 24 hours:
SQL> EXEC rdsadmin.rdsadmin_util.set_configuration('tracefile retention',1440);
SQL> commit;

# Show the new tracefile retention
SQL> EXEC rdsadmin.rdsadmin_util.show_configuration;
NAME:tracefile retention
VALUE:1440
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in bdump are automatically deleted.
```
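Note that `set_configuration` expects the retention period in minutes: the transcript above shows 10080 minutes (seven days) as the default and 1440 minutes (24 hours) as the new value. A small hypothetical Python helper for the conversion:

```python
def retention_minutes(*, days: int = 0, hours: int = 0, minutes: int = 0) -> int:
    """Convert a human-friendly duration into the minutes value expected by
    rdsadmin.rdsadmin_util.set_configuration('tracefile retention', ...)."""
    return days * 24 * 60 + hours * 60 + minutes

print(retention_minutes(hours=24))  # 1440, the value set in the example above
print(retention_minutes(days=7))    # 10080, the default shown as VALUE:10080
```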

In addition to the periodic purge process, you can manually remove files from the `background_dump_dest`. The following example shows how to purge all files older than five minutes. 

```
EXEC rdsadmin.manage_tracefiles.purge_tracefiles(5);
```

You can also purge all files that match a specific pattern (if you do, don't include the file extension, such as .trc). The following example shows how to purge all files that start with `SCHPOC1_ora_5935`. 

```
EXEC rdsadmin.manage_tracefiles.purge_tracefiles('SCHPOC1_ora_5935');
```
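To illustrate the two purge modes side by side, the following Python sketch filters a local list of (file name, age-in-minutes) pairs the way the two `purge_tracefiles` calls select files. It is a hypothetical illustration, not the rdsadmin implementation.

```python
def select_for_purge(files, *, older_than_minutes=None, prefix=None):
    """Return file names matching the purge criteria.

    files: iterable of (name, age_minutes) pairs.
    Mimics the two calling modes of purge_tracefiles: purge by age
    in minutes, or purge by filename prefix (no extension).
    """
    selected = []
    for name, age in files:
        if older_than_minutes is not None and age > older_than_minutes:
            selected.append(name)
        elif prefix is not None and name.startswith(prefix):
            selected.append(name)
    return selected

files = [('SCHPOC1_ora_5935_1.trc', 3), ('SCHPOC1_ora_7001_1.trc', 10)]
print(select_for_purge(files, older_than_minutes=5))       # ['SCHPOC1_ora_7001_1.trc']
print(select_for_purge(files, prefix='SCHPOC1_ora_5935'))  # ['SCHPOC1_ora_5935_1.trc']
```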

## Publishing Oracle logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.Oracle.PublishtoCloudWatchLogs"></a>

You can configure your RDS for Oracle DB instance to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can analyze the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage. 

Amazon RDS publishes each Oracle database log as a separate database stream in the log group. For example, if you configure the export function to include the audit log, audit data is stored in an audit log stream in the `/aws/rds/instance/my_instance/audit` log group. The following table summarizes the requirements for RDS for Oracle to publish logs to Amazon CloudWatch Logs.


| Log name | Requirement | Default | 
| --- | --- | --- | 
|  Alert log  |  None. You can't disable this log.  |  Enabled  | 
|  Trace log  |  Set the `trace_enabled` parameter to `TRUE` or leave it set at the default.  |  `TRUE`  | 
|  Audit log  |  Set the `audit_trail` parameter to any of the following allowed values: <pre>{ none | os | db [, extended] | xml [, extended] }</pre>  |  `none`  | 
|  Listener log  |  None. You can't disable this log.  |  Enabled  | 
|  Oracle Management Agent log  |  None. You can't disable this log.  |  Enabled  | 

The Oracle Management Agent log consists of the log groups shown in the following table.


****  

| Log name | CloudWatch log group | 
| --- | --- | 
| emctl.log | oemagent-emctl | 
| emdctlj.log | oemagent-emdctlj | 
| gcagent.log | oemagent-gcagent | 
| gcagent\_errors.log | oemagent-gcagent-errors | 
| emagent.nohup | oemagent-emagent-nohup | 
| secure.log | oemagent-secure | 

For more information, see [Locating Management Agent Log and Trace Files](https://docs.oracle.com/en/enterprise-manager/cloud-control/enterprise-manager-cloud-control/13.4/emadm/locating-management-agent-log-and-trace-files1.html#GUID-9C710D78-6AA4-42E4-83CD-47B5FF4892DF) in the Oracle documentation.
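The log group naming convention described earlier (`/aws/rds/instance/my_instance/audit` for the audit log, for example) can be sketched as a small helper; the function name is an assumption for illustration:

```python
def oracle_log_group(instance_id: str, log_type: str) -> str:
    """Build the CloudWatch Logs log group name that RDS uses for an
    Oracle log type such as alert, audit, listener, or trace."""
    return f"/aws/rds/instance/{instance_id}/{log_type}"

print(oracle_log_group("my_instance", "audit"))  # /aws/rds/instance/my_instance/audit
```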

### Console
<a name="USER_LogAccess.Oracle.PublishtoCloudWatchLogs.console"></a>

**To publish Oracle DB logs to CloudWatch Logs from the AWS Management Console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance that you want to modify.

1. Choose **Modify**.

1. In the **Log exports** section, choose the logs that you want to start publishing to CloudWatch Logs.

1. Choose **Continue**, and then choose **Modify DB Instance** on the summary page.

### AWS CLI
<a name="USER_LogAccess.Oracle.PublishtoCloudWatchLogs.CLI"></a>

To publish Oracle logs, you can use the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command with the following parameters: 
+ `--db-instance-identifier`
+ `--cloudwatch-logs-export-configuration`

**Note**  
A change to the `--cloudwatch-logs-export-configuration` option is always applied to the DB instance immediately. Therefore, the `--apply-immediately` and `--no-apply-immediately` options have no effect.

You can also publish Oracle logs using the following commands: 
+ [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)
+ [restore-db-instance-from-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html)
+ [restore-db-instance-from-s3](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-s3.html)
+ [restore-db-instance-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html)

**Example**  
The following example creates an Oracle DB instance with CloudWatch Logs publishing enabled. The `--cloudwatch-logs-export-configuration` value is a JSON array of strings. The strings can be any combination of `alert`, `audit`, `listener`, and `trace`.  
For Linux, macOS, or Unix:  

```
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '["trace","audit","alert","listener","oemagent"]' \
    --db-instance-class db.m5.large \
    --allocated-storage 20 \
    --engine oracle-ee \
    --engine-version 19.0.0.0.ru-2024-04.rur-2024-04.r1 \
    --license-model bring-your-own-license \
    --master-username myadmin \
    --manage-master-user-password
```
For Windows:  

```
aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration trace alert audit listener oemagent ^
    --db-instance-class db.m5.large ^
    --allocated-storage 20 ^
    --engine oracle-ee ^
    --engine-version 19.0.0.0.ru-2024-04.rur-2024-04.r1 ^
    --license-model bring-your-own-license ^
    --master-username myadmin ^
    --manage-master-user-password
```

**Example**  
The following example modifies an existing Oracle DB instance to publish log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `EnableLogTypes`, and its value is an array of strings with any combination of `alert`, `audit`, `listener`, and `trace`.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["trace","alert","audit","listener","oemagent"]}'
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration EnableLogTypes=\"trace\",\"alert\",\"audit\",\"listener\",\"oemagent\"
```

**Example**  
The following example modifies an existing Oracle DB instance to disable publishing audit and listener log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `DisableLogTypes`, and its value is an array of strings with any combination of `alert`, `audit`, `listener`, and `trace`.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["audit","listener"]}'
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration DisableLogTypes=\"audit\",\"listener\"
```
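If you generate the `--cloudwatch-logs-export-configuration` value programmatically, serializing a dictionary avoids shell-quoting mistakes. A minimal Python sketch (the helper name is an assumption, not an AWS SDK function):

```python
import json

def export_configuration(enable=None, disable=None) -> str:
    """Build the JSON value for --cloudwatch-logs-export-configuration.

    enable/disable: lists drawn from alert, audit, listener, trace, oemagent.
    """
    config = {}
    if enable:
        config["EnableLogTypes"] = enable
    if disable:
        config["DisableLogTypes"] = disable
    return json.dumps(config)

print(export_configuration(enable=["trace", "alert"]))
# {"EnableLogTypes": ["trace", "alert"]}
print(export_configuration(disable=["audit", "listener"]))
# {"DisableLogTypes": ["audit", "listener"]}
```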

### RDS API
<a name="USER_LogAccess.Oracle.PublishtoCloudWatchLogs.API"></a>

You can publish Oracle DB logs with the RDS API. You can call the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) action with the following parameters: 
+ `DBInstanceIdentifier`
+ `CloudwatchLogsExportConfiguration`

**Note**  
A change to the `CloudwatchLogsExportConfiguration` parameter is always applied to the DB instance immediately. Therefore, the `ApplyImmediately` parameter has no effect.

You can also publish Oracle logs by calling the following RDS API operations: 
+ [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html)
+ [RestoreDBInstanceFromDBSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html)
+ [RestoreDBInstanceFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromS3.html)
+ [RestoreDBInstanceToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html)

Run one of these RDS API operations with the following parameters: 
+ `DBInstanceIdentifier`
+ `EnableCloudwatchLogsExports`
+ `Engine`
+ `DBInstanceClass`

Other parameters might be required depending on the RDS operation that you run.

## Accessing alert logs and listener logs
<a name="USER_LogAccess.Concepts.Oracle.AlertLogAndListenerLog"></a>

You can view the alert log using the Amazon RDS console. You can also use the following SQL statement.

```
SELECT message_text FROM alertlog;
```

Access the listener log using Amazon CloudWatch Logs.

**Note**  
Oracle rotates the alert and listener logs when they exceed 10 MB, at which point they are unavailable from Amazon RDS views.

# RDS for PostgreSQL database log files
<a name="USER_LogAccess.Concepts.PostgreSQL"></a>

You can monitor the following types of log files:
+ PostgreSQL log
+ Upgrade log
+ IAM database authentication error log
**Note**  
To enable IAM database authentication error logs, you must first enable IAM database authentication for your RDS for PostgreSQL DB instance. For more information about enabling IAM database authentication, see [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md).

RDS for PostgreSQL logs database activities to the default PostgreSQL log file. For an on-premises PostgreSQL DB instance, these messages are stored locally in `log/postgresql.log`. For an RDS for PostgreSQL DB instance, the log file is available on the Amazon RDS instance. These logs are also accessible via the AWS Management Console, where you can view or download them. The default logging level captures login failures, fatal server errors, deadlocks, and query failures.

For more information about how you can view, download, and watch file-based database logs, see [Monitoring Amazon RDS log files](USER_LogAccess.md). To learn more about PostgreSQL logs, see [Working with Amazon RDS and Aurora PostgreSQL logs: Part 1](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/) and [ Working with Amazon RDS and Aurora PostgreSQL logs: Part 2](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-2/). 

In addition to the standard PostgreSQL logs discussed in this topic, RDS for PostgreSQL also supports the PostgreSQL Audit extension (`pgAudit`). Most regulated industries and government agencies need to maintain an audit log or audit trail of changes made to data to comply with legal requirements. For information about installing and using pgAudit, see [Using pgAudit to log database activity](Appendix.PostgreSQL.CommonDBATasks.pgaudit.md).

**Topics**
+ [Parameters for logging in RDS for PostgreSQL](USER_LogAccess.Concepts.PostgreSQL.overview.parameter-groups.md)
+ [Turning on query logging for your RDS for PostgreSQL DB instance](USER_LogAccess.Concepts.PostgreSQL.Query_Logging.md)
+ [Publishing PostgreSQL logs to Amazon CloudWatch Logs](#USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs)

# Parameters for logging in RDS for PostgreSQL
<a name="USER_LogAccess.Concepts.PostgreSQL.overview.parameter-groups"></a>

You can customize the logging behavior for your RDS for PostgreSQL DB instance by modifying various parameters. In the following table, you can find the parameters that affect how long the logs are stored, when to rotate the log, and whether to output the log in CSV (comma-separated values) format. You can also find the text output sent to STDERR, among other settings. To change settings for parameters that are modifiable, use a custom DB parameter group for your RDS for PostgreSQL instance. For more information, see [DB parameter groups for Amazon RDS DB instances](USER_WorkingWithDBInstanceParamGroups.md).


| Parameter | Default | Description | 
| --- | --- | --- | 
| log\_destination | stderr | Sets the output format for the log. The default is `stderr`, but you can also specify comma-separated values (CSV) by adding `csvlog` to the setting. For more information, see [Setting the log destination (`stderr`, `csvlog`)](#USER_LogAccess.Concepts.PostgreSQL.Log_Format).  | 
| log\_filename |  postgresql.log.%Y-%m-%d-%H  | Specifies the pattern for the log file name. In addition to the default, this parameter supports `postgresql.log.%Y-%m-%d` and `postgresql.log.%Y-%m-%d-%H%M` for the filename pattern.  | 
| log\_line\_prefix | %t:%r:%u@%d:[%p]: | Defines the prefix for each log line that gets written to `stderr`, to note the time (%t), remote host (%r), user (%u), database (%d), and process ID (%p). | 
| log\_rotation\_age | 60 | The number of minutes after which the log file is automatically rotated. You can change this value within the range of 1 to 1,440 minutes. For more information, see [Setting log file rotation](#USER_LogAccess.Concepts.PostgreSQL.log_rotation).  | 
| log\_rotation\_size | – | The size (kB) at which the log is automatically rotated. By default, this parameter isn't used because logs are rotated based on the `log_rotation_age` parameter. To learn more, see [Setting log file rotation](#USER_LogAccess.Concepts.PostgreSQL.log_rotation). | 
| rds.log\_retention\_period | 4320 | PostgreSQL logs that are older than the specified number of minutes are deleted. The default value of 4320 minutes deletes log files after 3 days. For more information, see [Setting the log retention period](#USER_LogAccess.Concepts.PostgreSQL.log_retention_period). | 

To identify application issues, you can look for query failures, login failures, deadlocks, and fatal server errors in the log. For example, suppose that you converted a legacy application from Oracle to Amazon RDS PostgreSQL, but not all queries converted correctly. These incorrectly formatted queries generate error messages that you can find in the logs to help identify problems. For more information about logging queries, see [Turning on query logging for your RDS for PostgreSQL DB instance](USER_LogAccess.Concepts.PostgreSQL.Query_Logging.md). 

In the following topics, you can find information about how to set various parameters that control the basic details for your PostgreSQL logs. 

**Topics**
+ [Setting the log retention period](#USER_LogAccess.Concepts.PostgreSQL.log_retention_period)
+ [Setting log file rotation](#USER_LogAccess.Concepts.PostgreSQL.log_rotation)
+ [Setting the log destination (`stderr`, `csvlog`)](#USER_LogAccess.Concepts.PostgreSQL.Log_Format)
+ [Understanding the log\_line\_prefix parameter](#USER_LogAccess.Concepts.PostgreSQL.Log_Format.log-line-prefix)

## Setting the log retention period
<a name="USER_LogAccess.Concepts.PostgreSQL.log_retention_period"></a>

The `rds.log_retention_period` parameter specifies how long your RDS for PostgreSQL DB instance keeps its log files. The default setting is 3 days (4,320 minutes), but you can set this value to anywhere from 1 day (1,440 minutes) to 7 days (10,080 minutes). Be sure that your RDS for PostgreSQL DB instance has sufficient storage to hold the log files for the period of time.
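A quick sanity check for a candidate value, expressed as a hypothetical Python helper that enforces the documented range of 1,440 to 10,080 minutes:

```python
def validate_retention_period(minutes: int) -> int:
    """Check a candidate rds.log_retention_period value against the
    documented range: 1 day (1,440) to 7 days (10,080) in minutes."""
    if not 1440 <= minutes <= 10080:
        raise ValueError("rds.log_retention_period must be between 1440 and 10080 minutes")
    return minutes

print(validate_retention_period(4320))  # 4320, the 3-day default
```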

We recommend that you have your logs routinely published to Amazon CloudWatch Logs so that you can view and analyze system data long after the logs have been removed from your RDS for PostgreSQL DB instance. For more information, see [Publishing PostgreSQL logs to Amazon CloudWatch Logs](USER_LogAccess.Concepts.PostgreSQL.md#USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs).  

## Setting log file rotation
<a name="USER_LogAccess.Concepts.PostgreSQL.log_rotation"></a>

Amazon RDS creates new log files every hour by default. The timing is controlled by the `log_rotation_age` parameter. This parameter has a default value of 60 (minutes), but you can set it to anywhere from 1 minute to 24 hours (1,440 minutes). When it's time for rotation, a new distinct log file is created. The file is named according to the pattern specified by the `log_filename` parameter. 

Log files can also be rotated according to their size, as specified in the `log_rotation_size` parameter. This parameter specifies that the log should be rotated when it reaches the specified size (in kilobytes). For an RDS for PostgreSQL DB instance, `log_rotation_size` is unset, that is, there is no value specified. However, you can set the parameter from 0-2097151 kB (kilobytes).  

The log file names are based on the file name pattern specified in the `log_filename` parameter. The available settings for this parameter are as follows:
+ `postgresql.log.%Y-%m-%d-%H` – Default format for the log file name. Includes the year, month, date, and hour.
+ `postgresql.log.%Y-%m-%d` – Omits the hour from the log file name.
+ `postgresql.log.%Y-%m-%d-%H%M` – Includes the minute in the log file name.
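These patterns use standard `strftime` escapes, so you can predict the name of a file rotated at a given time. A short Python sketch:

```python
from datetime import datetime

def log_file_name(pattern: str, when: datetime) -> str:
    """Render a log_filename pattern for a given rotation time."""
    return when.strftime(pattern)

rotated_at = datetime(2024, 3, 15, 13, 0)
print(log_file_name("postgresql.log.%Y-%m-%d-%H", rotated_at))  # postgresql.log.2024-03-15-13
print(log_file_name("postgresql.log.%Y-%m-%d", rotated_at))     # postgresql.log.2024-03-15
```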

For more information, see [https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-AGE](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-AGE) and [https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-SIZE](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-SIZE) in the PostgreSQL documentation.

## Setting the log destination (`stderr`, `csvlog`)
<a name="USER_LogAccess.Concepts.PostgreSQL.Log_Format"></a>

By default, Amazon RDS PostgreSQL generates logs in standard error (stderr) format. This format is the default setting for the `log_destination` parameter. Each message is prefixed using the pattern specified in the `log_line_prefix` parameter. For more information, see [Understanding the log\_line\_prefix parameter](#USER_LogAccess.Concepts.PostgreSQL.Log_Format.log-line-prefix). 

RDS for PostgreSQL can also generate the logs in `csvlog` format. The `csvlog` is useful for analyzing the log data as comma-separated values (CSV) data. For example, suppose that you use the `log_fdw` extension to work with your logs as foreign tables. The foreign table created on `stderr` log files contains a single column with log event data. By adding `csvlog` to the `log_destination` parameter, you get the log file in the CSV format with demarcations for the multiple columns of the foreign table. You can now sort and analyze your logs more easily. To learn how to use the `log_fdw` with `csvlog`, see [Using the log\_fdw extension to access the DB log using SQL](CHAP_PostgreSQL.Extensions.log_fdw.md).

If you specify `csvlog` for this parameter, be aware that both `stderr` and `csvlog` files are generated. Be sure to monitor the storage consumed by the logs, taking into account the `rds.log_retention_period` and other settings that affect log storage and turnover. Using both `stderr` and `csvlog` more than doubles the storage consumed by the logs.

If you add `csvlog` to `log_destination` and you want to revert to `stderr` alone, you need to reset the parameter. To do so, open the Amazon RDS console and then open the custom DB parameter group for your instance. Choose the `log_destination` parameter, choose **Edit parameter**, and then choose **Reset**. 

For more information about configuring logging, see [ Working with Amazon RDS and Aurora PostgreSQL logs: Part 1](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/).

## Understanding the log\_line\_prefix parameter
<a name="USER_LogAccess.Concepts.PostgreSQL.Log_Format.log-line-prefix"></a>

The `stderr` log format prefixes each log message with the details specified by the `log_line_prefix` parameter. The default value is:

```
%t:%r:%u@%d:[%p]:
```

Starting with RDS for PostgreSQL version 16, you can also choose:

```
%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a
```

Each log entry sent to stderr includes the following information based on the selected value:
+ `%t` – Time of log entry without milliseconds
+ `%m` – Time of log entry with milliseconds
+  `%r` – Remote host address
+  `%u@%d` – User name @ database name
+  `[%p]` – Process ID if available
+  `%l` – Log line number per session 
+  `%e` – SQL error code 
+  `%s` – Process start timestamp 
+  `%v` – Virtual transaction id 
+  `%x` – Transaction ID 
+  `%c` – Session ID 
+  `%q` – Non-session terminator 
+  `%a` – Application name 
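To show how these escapes map onto a real log line, the following Python sketch parses one line produced with the default prefix `%t:%r:%u@%d:[%p]:`. The regular expression is a simplified assumption for illustration, not an official grammar of the format:

```python
import re

# A sample stderr log line using the default log_line_prefix
LINE = "2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: statement: SELECT 1;"

# Groups correspond to %t (time), %r (remote host), %u (user),
# %d (database), and %p (process ID); the rest is the message.
PATTERN = re.compile(
    r"^(?P<time>.+? UTC):(?P<remote>[^:]*):(?P<user>[^@]*)@(?P<db>[^:]*):\[(?P<pid>\d+)\]:(?P<rest>.*)$"
)

match = PATTERN.match(LINE)
assert match is not None
print(match.group("user"), match.group("db"), match.group("pid"))  # postgres labdb 3639
```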

# Turning on query logging for your RDS for PostgreSQL DB instance
<a name="USER_LogAccess.Concepts.PostgreSQL.Query_Logging"></a>

You can collect more detailed information about your database activities, including queries, queries waiting for locks, checkpoints, and many other details by setting some of the parameters listed in the following table. This topic focuses on logging queries.


| Parameter | Default | Description | 
| --- | --- | --- | 
| log\_connections | – | Logs each successful connection.  | 
| log\_disconnections | – | Logs the end of each session and its duration.  | 
| log\_checkpoints | 1 | Logs each checkpoint.  | 
| log\_lock\_waits | – | Logs long lock waits. By default, this parameter isn't set. | 
| log\_min\_duration\_sample | – | (ms) Sets the minimum execution time above which a sample of statements is logged. Sample size is set using the `log_statement_sample_rate` parameter. | 
| log\_min\_duration\_statement | – | Any SQL statement that runs for at least the specified amount of time gets logged. By default, this parameter isn't set. Turning on this parameter can help you find unoptimized queries. | 
| log\_statement | – | Sets the type of statements logged. By default, this parameter isn't set, but you can change it to `all`, `ddl`, or `mod` to specify the types of SQL statements that you want logged. If you specify anything other than `none` for this parameter, you should also take additional steps to prevent the exposure of passwords in the log files. For more information, see [Mitigating the risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk).  | 
| log\_statement\_sample\_rate | – | The percentage of statements exceeding the time specified in `log_min_duration_sample` to be logged, expressed as a floating point value between 0.0 and 1.0.  | 
| log\_statement\_stats | – | Writes cumulative performance statistics to the server log. | 

## Using logging to find slow performing queries
<a name="USER_LogAccess.Concepts.PostgreSQL.Query_Logging.using"></a>

You can log SQL statements and queries to help find slow performing queries. You turn on this capability by modifying the settings in the `log_statement` and `log_min_duration_statement` parameters as outlined in this section. Before turning on query logging for your RDS for PostgreSQL DB instance, be aware of possible password exposure in the logs and how to mitigate the risks. For more information, see [Mitigating the risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk). 

Following, you can find reference information about the `log_statement` and `log_min_duration_statement` parameters.

**log\_statement**

This parameter specifies the type of SQL statements that should get sent to the log. The default value is `none`. If you change this parameter to `all`, `ddl`, or `mod`, be sure to apply recommended actions to mitigate the risk of exposing passwords in the logs. For more information, see [Mitigating the risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk). 

**all**  
Logs all statements. This setting is recommended for debugging purposes.

**ddl**  
Logs all data definition language (DDL) statements, such as CREATE, ALTER, DROP, and so on.

**mod**  
Logs all DDL statements and data manipulation language (DML) statements, such as INSERT, UPDATE, and DELETE, which modify the data.

**none**  
No SQL statements get logged. We recommend this setting to avoid the risk of exposing passwords in the logs.

**log\_min\_duration\_statement**

Any SQL statement that runs for at least the specified amount of time gets logged. By default, this parameter isn't set. Turning on this parameter can help you find unoptimized queries.

**–1–2147483647**  
The number of milliseconds (ms) of runtime over which a statement gets logged.

**To set up query logging**

These steps assume that your RDS for PostgreSQL DB instance uses a custom DB parameter group. 

1. Set the `log_statement` parameter to `all`. The following example shows the information that is written to the `postgresql.log` file with this parameter setting.

   ```
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: statement: SELECT feedback, s.sentiment,s.confidence
   FROM support,aws_comprehend.detect_sentiment(feedback, 'en') s
   ORDER BY s.confidence DESC;
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: QUERY STATISTICS
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:DETAIL: ! system usage stats:
   ! 0.017355 s user, 0.000000 s system, 0.168593 s elapsed
   ! [0.025146 s user, 0.000000 s system total]
   ! 36644 kB max resident size
   ! 0/8 [0/8] filesystem blocks in/out
   ! 0/733 [0/1364] page faults/reclaims, 0 [0] swaps
   ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
   ! 19/0 [27/0] voluntary/involuntary context switches
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:STATEMENT: SELECT feedback, s.sentiment,s.confidence
   FROM support,aws_comprehend.detect_sentiment(feedback, 'en') s
   ORDER BY s.confidence DESC;
   2022-10-05 22:05:56 UTC:52.95.4.1(11335):postgres@labdb:[3639]:ERROR: syntax error at or near "ORDER" at character 1
   2022-10-05 22:05:56 UTC:52.95.4.1(11335):postgres@labdb:[3639]:STATEMENT: ORDER BY s.confidence DESC;
   ----------------------- END OF LOG ----------------------
   ```

1. Set the `log_min_duration_statement` parameter. The following example shows the information that is written to the `postgresql.log` file when the parameter is set to `1`.

   Queries that exceed the duration specified in the `log_min_duration_statement` parameter are logged. You can view the log file for your RDS for PostgreSQL DB instance in the Amazon RDS console. 

   ```
   2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: statement: DROP table comments;
   2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: duration: 167.754 ms
   2022-10-05 19:08:07 UTC::@:[355]:LOG: checkpoint starting: time
   2022-10-05 19:08:08 UTC::@:[355]:LOG: checkpoint complete: wrote 11 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.013 s, sync=0.006 s, total=1.033 s; sync files=8, longest=0.004 s, average=0.001 s; distance=131028 kB, estimate=131028 kB
   ----------------------- END OF LOG ----------------------
   ```
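After you download a log file, a small script can pull out the logged durations. The following is a minimal sketch; it assumes the default log line format shown in these examples, which varies with your `log_line_prefix` setting.

```python
import re

# Sketch: extract statement durations from downloaded postgresql.log text.
# The sample lines mirror the example above; real logs vary by log_line_prefix.
DURATION_RE = re.compile(r"LOG:\s+duration:\s+([\d.]+)\s+ms")

def durations_ms(log_text):
    """Return all logged statement durations, in milliseconds."""
    return [float(m.group(1)) for m in DURATION_RE.finditer(log_text)]

sample = (
    "2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: "
    "statement: DROP table comments;\n"
    "2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: "
    "duration: 167.754 ms\n"
)
print(durations_ms(sample))  # -> [167.754]
```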

### Mitigating risk of password exposure when using query logging
<a name="USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk"></a>

We recommend that you keep `log_statement` set to `none` to avoid exposing passwords. If you set `log_statement` to `all`, `ddl`, or `mod`, we recommend that you take one or more of the following steps.
+ For the client, encrypt sensitive information. For more information, see [Encryption Options](https://www.postgresql.org/docs/current/encryption-options.html) in the PostgreSQL documentation. Use the `ENCRYPTED` and `UNENCRYPTED` options of the `CREATE USER` and `ALTER USER` statements. For more information, see [CREATE USER](https://www.postgresql.org/docs/current/sql-createuser.html) in the PostgreSQL documentation.
+ For your RDS for PostgreSQL DB instance, set up and use the PostgreSQL Auditing (pgAudit) extension. This extension redacts sensitive information in CREATE and ALTER statements sent to the log. For more information, see [Using pgAudit to log database activity](Appendix.PostgreSQL.CommonDBATasks.pgaudit.md). 
+ Restrict access to the CloudWatch logs.
+ Use stronger authentication mechanisms, such as IAM database authentication.
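As an additional defense-in-depth measure, you might redact password literals from log text before forwarding it to other systems. The following sketch is illustrative only; the regular expression is a simplified example and is not a substitute for keeping `log_statement` set to `none` or using pgAudit.

```python
import re

# Illustrative only: mask PASSWORD '...' literals in log text before it is
# forwarded or stored outside the database. This pattern is not exhaustive;
# prefer keeping log_statement set to none, or using pgAudit, which redacts
# these statements for you.
PASSWORD_RE = re.compile(r"(PASSWORD\s+')[^']*(')", re.IGNORECASE)

def redact_passwords(log_text):
    """Replace password literals with a placeholder."""
    return PASSWORD_RE.sub(r"\1****\2", log_text)

line = "LOG: statement: ALTER USER app_user PASSWORD 'p@ssw0rd';"
print(redact_passwords(line))
```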

## Publishing PostgreSQL logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs"></a>

To store your PostgreSQL log records in highly durable storage, you can use Amazon CloudWatch Logs. With CloudWatch Logs, you can also perform real-time analysis of log data and use CloudWatch to view metrics and create alarms. For example, if you set `log_statement` to `ddl`, you can set up an alarm to alert you whenever a DDL statement is executed.

You can choose to have your PostgreSQL logs uploaded to CloudWatch Logs during the process of creating your RDS for PostgreSQL DB instance. If you chose not to upload logs at that time, you can later modify your instance to start uploading logs from that point forward. In other words, existing logs aren't uploaded. Only new logs are uploaded as they're created on your modified RDS for PostgreSQL DB instance.

All currently available RDS for PostgreSQL versions support publishing log files to CloudWatch Logs. For more information, see [Amazon RDS for PostgreSQL updates](https://docs.aws.amazon.com/AmazonRDS/latest/PostgreSQLReleaseNotes/postgresql-versions.html) in the *Amazon RDS for PostgreSQL Release Notes*. 

To work with CloudWatch Logs, configure your RDS for PostgreSQL DB instance to publish log data to a log group.

You can publish the following log types to CloudWatch Logs for RDS for PostgreSQL: 
+ PostgreSQL log
+ Upgrade log 
+ IAM database authentication error log

After you complete the configuration, Amazon RDS publishes the log events to log streams within a CloudWatch log group. For example, the PostgreSQL log data is stored within the log group `/aws/rds/instance/my_instance/postgresql`. To view your logs, open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).
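Because the log group name follows this predictable pattern, you can construct it from the DB instance identifier. The following sketch uses a hypothetical identifier; the commented boto3 call shows one way you might read events once credentials are configured.

```python
# Sketch: build the CloudWatch Logs log group name for an RDS for PostgreSQL
# instance. The instance identifier is illustrative.

def rds_log_group(instance_id, log_type="postgresql"):
    """Return the log group that RDS publishes the given log type to."""
    return f"/aws/rds/instance/{instance_id}/{log_type}"

print(rds_log_group("my_instance"))  # -> /aws/rds/instance/my_instance/postgresql

# With AWS credentials configured, you could read recent events with boto3:
# import boto3
# logs = boto3.client("logs")
# resp = logs.filter_log_events(logGroupName=rds_log_group("my_instance"),
#                               filterPattern="ERROR")
```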

### Console
<a name="USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs.CON"></a>

**To publish PostgreSQL logs to CloudWatch Logs using the console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB instance that you want to modify, and then choose **Modify**.

1. In the **Log exports** section, choose the logs that you want to start publishing to CloudWatch Logs.

   The **Log exports** section is available only for PostgreSQL versions that support publishing to CloudWatch Logs. 

1. Choose **Continue**, and then choose **Modify DB Instance** on the summary page.

### AWS CLI
<a name="USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs.CLI"></a>

You can publish PostgreSQL logs with the AWS CLI. You can call the [https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command with the following parameters.
+ `--db-instance-identifier`
+ `--cloudwatch-logs-export-configuration`

**Note**  
A change to the `--cloudwatch-logs-export-configuration` option is always applied to the DB instance immediately. Therefore, the `--apply-immediately` and `--no-apply-immediately` options have no effect.

You can also publish PostgreSQL logs by calling the following CLI commands:
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html)
+ [https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html)

Run one of these CLI commands with the following options: 
+ `--db-instance-identifier`
+ `--enable-cloudwatch-logs-exports`
+ `--db-instance-class`
+ `--engine`

Other options might be required depending on the CLI command you run.

**Example Modify an instance to publish logs to CloudWatch Logs**  
The following example modifies an existing PostgreSQL DB instance to publish log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `EnableLogTypes`, and its value is an array of strings with any combination of `postgresql` and `upgrade`.  
For Linux, macOS, or Unix:  

```
1. aws rds modify-db-instance \
2.     --db-instance-identifier mydbinstance \
3.     --cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql", "upgrade"]}'
```
For Windows:  

```
1. aws rds modify-db-instance ^
2.     --db-instance-identifier mydbinstance ^
3.     --cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql","upgrade"]}'
```

**Example Create an instance to publish logs to CloudWatch Logs**  
The following example creates a PostgreSQL DB instance and publishes log files to CloudWatch Logs. The `--enable-cloudwatch-logs-exports` value is a JSON array of strings. The strings can be any combination of `postgresql` and `upgrade`.  
For Linux, macOS, or Unix:  

```
1. aws rds create-db-instance \
2.     --db-instance-identifier mydbinstance \
3.     --enable-cloudwatch-logs-exports '["postgresql","upgrade"]' \
4.     --db-instance-class db.m4.large \
5.     --engine postgres
```
For Windows:  

```
1. aws rds create-db-instance ^
2.     --db-instance-identifier mydbinstance ^
3.     --enable-cloudwatch-logs-exports '["postgresql","upgrade"]' ^
4.     --db-instance-class db.m4.large ^
5.     --engine postgres
```

### RDS API
<a name="USER_LogAccess.Concepts.PostgreSQL.PublishtoCloudWatchLogs.API"></a>

You can publish PostgreSQL logs with the RDS API. You can call the [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) action with the following parameters: 
+ `DBInstanceIdentifier`
+ `CloudwatchLogsExportConfiguration`

**Note**  
A change to the `CloudwatchLogsExportConfiguration` parameter is always applied to the DB instance immediately. Therefore, the `ApplyImmediately` parameter has no effect.

You can also publish PostgreSQL logs by calling the following RDS API operations: 
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html)
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html)
+ [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html)

Run one of these RDS API operations with the following parameters: 
+ `DBInstanceIdentifier`
+ `EnableCloudwatchLogsExports`
+ `Engine`
+ `DBInstanceClass`

Other parameters might be required depending on the operation that you run.
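As a sketch, the equivalent call in boto3 looks like the following. The instance identifier is illustrative, and the `modify_db_instance` call is left commented out because it requires AWS credentials.

```python
# Sketch: the boto3 equivalent of the ModifyDBInstance call described above.
# The instance identifier is illustrative.

def export_configuration(enable=("postgresql", "upgrade")):
    """Build the CloudwatchLogsExportConfiguration parameter."""
    return {"EnableLogTypes": list(enable)}

config = export_configuration()
print(config)

# With AWS credentials configured:
# import boto3
# boto3.client("rds").modify_db_instance(
#     DBInstanceIdentifier="mydbinstance",
#     CloudwatchLogsExportConfiguration=config)
```

As with the CLI, a change to this configuration is applied immediately.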

 

# Monitoring Amazon RDS API calls in AWS CloudTrail
<a name="logging-using-cloudtrail"></a>

AWS CloudTrail is an AWS service that helps you audit your AWS account. AWS CloudTrail is turned on for your AWS account when you create it. For more information about CloudTrail, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/).

**Topics**
+ [CloudTrail integration with Amazon RDS](#service-name-info-in-cloudtrail)
+ [Amazon RDS log file entries](#understanding-service-name-entries)

## CloudTrail integration with Amazon RDS
<a name="service-name-info-in-cloudtrail"></a>

All Amazon RDS actions are logged by CloudTrail. CloudTrail provides a record of actions taken by a user, role, or an AWS service in Amazon RDS.

### CloudTrail events
<a name="service-name-info-in-cloudtrail.events"></a>

CloudTrail captures API calls for Amazon RDS as events. An event represents a single request from any source and includes information about the requested action, the date and time of the action, request parameters, and so on. Events include calls from the Amazon RDS console and from code calls to the Amazon RDS API operations. 

Amazon RDS activity is recorded in a CloudTrail event in **Event history**. You can use the CloudTrail console to view the last 90 days of recorded API activity and events in an AWS Region. For more information, see [Viewing events with CloudTrail event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html). 

### CloudTrail trails
<a name="service-name-info-in-cloudtrail.trails"></a>

For an ongoing record of events in your AWS account, including events for Amazon RDS, create a trail. A trail is a configuration that enables delivery of events to a specified Amazon S3 bucket. CloudTrail typically delivers log files within 15 minutes of account activity.

**Note**  
If you don't configure a trail, you can still view the most recent events in the CloudTrail console in **Event history**.

You can create two types of trails for an AWS account: a trail that applies to all Regions, or a trail that applies to one Region. By default, when you create a trail in the console, the trail applies to all Regions. 

Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see: 
+ [Overview for creating a trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
+ [CloudTrail supported services and integrations](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html#cloudtrail-aws-service-specific-topics-integrations)
+ [Configuring Amazon SNS notifications for CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/getting_notifications_top_level.html)
+ [Receiving CloudTrail log files from multiple Regions](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html) and [Receiving CloudTrail log files from multiple accounts](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html)

## Amazon RDS log file entries
<a name="understanding-service-name-entries"></a>

CloudTrail log files contain one or more log entries. CloudTrail log files are not an ordered stack trace of the public API calls, so they do not appear in any specific order. 

The following example shows a CloudTrail log entry that demonstrates the `CreateDBInstance` action.

```
{
    "eventVersion": "1.04",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "AKIAIOSFODNN7EXAMPLE",
        "arn": "arn:aws:iam::123456789012:user/johndoe",
        "accountId": "123456789012",
        "accessKeyId": "AKIAI44QH8DHBEXAMPLE",
        "userName": "johndoe"
    },
    "eventTime": "2018-07-30T22:14:06Z",
    "eventSource": "rds.amazonaws.com",
    "eventName": "CreateDBInstance",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "192.0.2.0",
    "userAgent": "aws-cli/1.15.42 Python/3.6.1 Darwin/17.7.0 botocore/1.10.42",
    "requestParameters": {
        "enableCloudwatchLogsExports": [
            "audit",
            "error",
            "general",
            "slowquery"
        ],
        "dBInstanceIdentifier": "test-instance",
        "engine": "mysql",
        "masterUsername": "myawsuser",
        "allocatedStorage": 20,
        "dBInstanceClass": "db.m1.small",
        "masterUserPassword": "****"
    },
    "responseElements": {
        "dBInstanceArn": "arn:aws:rds:us-east-1:123456789012:db:test-instance",
        "storageEncrypted": false,
        "preferredBackupWindow": "10:27-10:57",
        "preferredMaintenanceWindow": "sat:05:47-sat:06:17",
        "backupRetentionPeriod": 1,
        "allocatedStorage": 20,
        "storageType": "standard",
        "engineVersion": "8.0.28",
        "dbInstancePort": 0,
        "optionGroupMemberships": [
            {
                "status": "in-sync",
                "optionGroupName": "default:mysql-8-0"
            }
        ],
        "dBParameterGroups": [
            {
                "dBParameterGroupName": "default.mysql8.0",
                "parameterApplyStatus": "in-sync"
            }
        ],
        "monitoringInterval": 0,
        "dBInstanceClass": "db.m1.small",
        "readReplicaDBInstanceIdentifiers": [],
        "dBSubnetGroup": {
            "dBSubnetGroupName": "default",
            "dBSubnetGroupDescription": "default",
            "subnets": [
                {
                    "subnetAvailabilityZone": {"name": "us-east-1b"},
                    "subnetIdentifier": "subnet-cbfff283",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1e"},
                    "subnetIdentifier": "subnet-d7c825e8",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1f"},
                    "subnetIdentifier": "subnet-6746046b",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1c"},
                    "subnetIdentifier": "subnet-bac383e0",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1d"},
                    "subnetIdentifier": "subnet-42599426",
                    "subnetStatus": "Active"
                },
                {
                    "subnetAvailabilityZone": {"name": "us-east-1a"},
                    "subnetIdentifier": "subnet-da327bf6",
                    "subnetStatus": "Active"
                }
            ],
            "vpcId": "vpc-136a4c6a",
            "subnetGroupStatus": "Complete"
        },
        "masterUsername": "myawsuser",
        "multiAZ": false,
        "autoMinorVersionUpgrade": true,
        "engine": "mysql",
        "cACertificateIdentifier": "rds-ca-2015",
        "dbiResourceId": "db-ETDZIIXHEWY5N7GXVC4SH7H5IA",
        "dBSecurityGroups": [],
        "pendingModifiedValues": {
            "masterUserPassword": "****",
            "pendingCloudwatchLogsExports": {
                "logTypesToEnable": [
                    "audit",
                    "error",
                    "general",
                    "slowquery"
                ]
            }
        },
        "dBInstanceStatus": "creating",
        "publiclyAccessible": true,
        "domainMemberships": [],
        "copyTagsToSnapshot": false,
        "dBInstanceIdentifier": "test-instance",
        "licenseModel": "general-public-license",
        "iAMDatabaseAuthenticationEnabled": false,
        "performanceInsightsEnabled": false,
        "vpcSecurityGroups": [
            {
                "status": "active",
                "vpcSecurityGroupId": "sg-f839b688"
            }
        ]
    },
    "requestID": "daf2e3f5-96a3-4df7-a026-863f96db793e",
    "eventID": "797163d3-5726-441d-80a7-6eeb7464acd4",
    "eventType": "AwsApiCall",
    "recipientAccountId": "123456789012"
}
```

As shown in the `userIdentity` element in the preceding example, every event or log entry contains information about who generated the request. The identity information helps you determine the following: 
+ Whether the request was made with root or IAM user credentials.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another AWS service.

For more information about the `userIdentity` element, see the [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html). For more information about `CreateDBInstance` and other Amazon RDS actions, see the [Amazon RDS API Reference](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/).
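A monitoring script can summarize entries like the one above. The following sketch parses an abridged version of that `CreateDBInstance` event; the field names match the entry shown, and the summary keys are arbitrary.

```python
import json

# Sketch: pull the fields you typically audit from a CloudTrail log entry.
# The entry is an abridged version of the CreateDBInstance example above.
entry = json.loads("""
{
  "userIdentity": {"type": "IAMUser", "userName": "johndoe"},
  "eventTime": "2018-07-30T22:14:06Z",
  "eventSource": "rds.amazonaws.com",
  "eventName": "CreateDBInstance",
  "sourceIPAddress": "192.0.2.0",
  "requestParameters": {"dBInstanceIdentifier": "test-instance",
                        "masterUserPassword": "****"}
}
""")

summary = {
    "who": entry["userIdentity"].get("userName"),
    "what": entry["eventName"],
    "when": entry["eventTime"],
    "from": entry["sourceIPAddress"],
}
print(summary)
```

Note that CloudTrail masks sensitive request parameters such as `masterUserPassword` before the entry is delivered.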

# Monitoring Amazon RDS with Database Activity Streams
<a name="DBActivityStreams"></a><a name="das"></a>

By using Database Activity Streams, you can monitor near real-time streams of database activity.

**Topics**
+ [Overview of Database Activity Streams](#DBActivityStreams.Overview)
+ [Configuring unified auditing for Oracle Database](DBActivityStreams.configuring-auditing.md)
+ [Configuring auditing policy for Amazon RDS for Microsoft SQL Server](DBActivityStreams.configuring-auditing-SQLServer.md)
+ [Starting a database activity stream](DBActivityStreams.Enabling.md)
+ [Modifying a database activity stream for Amazon RDS](DBActivityStreams.Modifying.md)
+ [Getting the status of a database activity stream](DBActivityStreams.Status.md)
+ [Stopping a database activity stream](DBActivityStreams.Disabling.md)
+ [Monitoring database activity streams](DBActivityStreams.Monitoring.md)
+ [IAM policy examples for database activity streams](DBActivityStreams.ManagingAccess.md)

## Overview of Database Activity Streams
<a name="DBActivityStreams.Overview"></a>

As an Amazon RDS database administrator, you need to safeguard your database and meet compliance and regulatory requirements. One strategy is to integrate database activity streams with your monitoring tools. In this way, you can monitor and set alarms for audit activity in your database.

Security threats are both external and internal. To protect against internal threats, you can control administrator access to data streams by configuring the Database Activity Streams feature. Amazon RDS DBAs don't have access to the collection, transmission, storage, and processing of the streams.

**Contents**
+ [How database activity streams work](#DBActivityStreams.Overview.how-they-work)
+ [Auditing in Oracle Database and Microsoft SQL Server Database](#DBActivityStreams.Overview.auditing)
  + [Unified auditing in Oracle Database](#DBActivityStreams.Overview.unified-auditing)
  + [Auditing in Microsoft SQL Server](#DBActivityStreams.Overview.SQLServer-auditing)
  + [Non-native audit fields for Oracle Database and SQL Server](#DBActivityStreams.Overview.unified-auditing.non-native)
  + [DB parameter group override](#DBActivityStreams.Overview.unified-auditing.parameter-group)
+ [Asynchronous mode for database activity streams](#DBActivityStreams.Overview.sync-mode)
+ [Requirements and limitations for database activity streams](#DBActivityStreams.Overview.requirements)
+ [Region and version availability](#DBActivityStreams.RegionVersionAvailability)
+ [Supported DB instance classes for database activity streams](#DBActivityStreams.Overview.requirements.classes)

### How database activity streams work
<a name="DBActivityStreams.Overview.how-they-work"></a>

Amazon RDS pushes activities to an Amazon Kinesis data stream in near real time. The Kinesis stream is created automatically. From Kinesis, you can configure AWS services such as Amazon Data Firehose and AWS Lambda to consume the stream and store the data.

**Important**  
Use of the database activity streams feature in Amazon RDS is free, but Amazon Kinesis charges for a data stream. For more information, see [Amazon Kinesis Data Streams pricing](https://aws.amazon.com/kinesis/data-streams/pricing/).

You can configure applications for compliance management to consume database activity streams. These applications can use the stream to generate alerts and audit activity on your database.

Amazon RDS supports database activity streams in Multi-AZ deployments. In this case, database activity streams audit both the primary and standby instances.

### Auditing in Oracle Database and Microsoft SQL Server Database
<a name="DBActivityStreams.Overview.auditing"></a>

Auditing is the monitoring and recording of configured database actions. Amazon RDS doesn't capture database activity by default. You create and manage audit policies in your database yourself.

**Topics**
+ [Unified auditing in Oracle Database](#DBActivityStreams.Overview.unified-auditing)
+ [Auditing in Microsoft SQL Server](#DBActivityStreams.Overview.SQLServer-auditing)
+ [Non-native audit fields for Oracle Database and SQL Server](#DBActivityStreams.Overview.unified-auditing.non-native)
+ [DB parameter group override](#DBActivityStreams.Overview.unified-auditing.parameter-group)

#### Unified auditing in Oracle Database
<a name="DBActivityStreams.Overview.unified-auditing"></a>

In an Oracle database, a *unified audit policy* is a named group of audit settings that you can use to audit an aspect of user behavior. A policy can be as simple as auditing the activities of a single user. You can also create complex audit policies that use conditions.

An Oracle database writes audit records, including `SYS` audit records, to the *unified audit trail*. For example, if an error occurs during an `INSERT` statement, standard auditing indicates the error number and the SQL that was run. The audit trail resides in a read-only table in the `AUDSYS` schema. To access these records, query the `UNIFIED_AUDIT_TRAIL` data dictionary view.

Typically, you configure database activity streams as follows:

1. Create an Oracle Database audit policy by using the `CREATE AUDIT POLICY` command.

   The Oracle Database generates audit records.

1. Activate the audit policy by using the `AUDIT POLICY` command.

1. Configure database activity streams.

   Only activities that match the Oracle Database audit policies are captured and sent to the Amazon Kinesis data stream. When database activity streams are enabled, an Oracle database administrator can't alter the audit policy or remove audit logs.

To learn more about unified audit policies, see [About Auditing Activities with Unified Audit Policies and AUDIT](https://docs.oracle.com/en/database/oracle/oracle-database/19/dbseg/configuring-audit-policies.html#GUID-2435D929-10AD-43C7-8A6C-5133170074D0) in the *Oracle Database Security Guide*.

#### Auditing in Microsoft SQL Server
<a name="DBActivityStreams.Overview.SQLServer-auditing"></a>

Database activity streams use the SQL Server Audit (SQLAudit) feature to audit the SQL Server database.

An RDS for SQL Server instance contains the following:
+ Server audit – The SQL Server audit collects a single instance of server-level or database-level actions and groups of actions to monitor. The server-level audits `RDS_DAS_AUDIT` and `RDS_DAS_AUDIT_CHANGES` are managed by RDS.
+ Server audit specification – The server audit specification records server-level events. You can modify the `RDS_DAS_SERVER_AUDIT_SPEC` specification. This specification is linked to the server audit `RDS_DAS_AUDIT`. The `RDS_DAS_CHANGES_AUDIT_SPEC` specification is managed by RDS.
+ Database audit specification – The database audit specification records database-level events. You can create a database audit specification `RDS_DAS_DB_<name>` and link it to the `RDS_DAS_AUDIT` server audit.

You can configure database activity streams by using the console or CLI. Typically, you configure database activity streams as follows:

1. (Optional) Create a database audit specification with the `CREATE DATABASE AUDIT SPECIFICATION` command and link it to the `RDS_DAS_AUDIT` server audit. 

1. (Optional) Modify the server audit specification with the `ALTER SERVER AUDIT SPECIFICATION` command and define the policies. 

1. Activate the database and server audit policies. For example:

   `ALTER DATABASE AUDIT SPECIFICATION [<Your database specification>] WITH (STATE=ON)`

   `ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC] WITH (STATE=ON)`

1. Configure database activity streams.

   Only activities that match the server and database audit policies are captured and sent to the Amazon Kinesis data stream. When database activity streams are enabled and the policies are locked, a database administrator can't alter the audit policy or remove audit logs. 
**Important**  
If the database audit specification for a specific database is enabled and the policy is in a locked state, then the database can't be dropped.

For more information about SQL Server auditing, see [SQL Server Audit Components](https://learn.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-database-engine?view=sql-server-ver16) in the *Microsoft SQL Server documentation*.



#### Non-native audit fields for Oracle Database and SQL Server
<a name="DBActivityStreams.Overview.unified-auditing.non-native"></a>

When you start a database activity stream, every database event generates a corresponding activity stream event. For example, a database user might run `SELECT` and `INSERT` statements. The database audits these events and sends them to an Amazon Kinesis data stream.

The events are represented in the stream as JSON objects. A JSON object contains a `DatabaseActivityMonitoringRecord`, which contains a `databaseActivityEventList` array. Predefined fields in the array include `class`, `clientApplication`, and `command`.

By default, an activity stream doesn't include engine-native audit fields. You can configure Amazon RDS for Oracle and SQL Server so that it includes these extra fields in the `engineNativeAuditFields` JSON object.

In Oracle Database, most events in the unified audit trail map to fields in the RDS database activity stream. For example, the `UNIFIED_AUDIT_TRAIL.SQL_TEXT` field in unified auditing maps to the `commandText` field in a database activity stream. However, Oracle Database audit fields such as `OS_USERNAME` don't map to predefined fields in a database activity stream.

In SQL Server, most event fields recorded by SQLAudit map to fields in the RDS database activity stream. For example, the `code` field from `sys.fn_get_audit_file` in the audit maps to the `commandText` field in a database activity stream. However, SQL Server audit fields, such as `permission_bitmask`, don't map to predefined fields in a database activity stream.

For more information about `databaseActivityEventList`, see [databaseActivityEventList JSON array for database activity streams](DBActivityStreams.AuditLog.databaseActivityEventList.md).
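As a sketch, a consumer might read these fields from a decoded record as follows. The sample record is abridged and illustrative; real records arrive encrypted in the Kinesis stream, must be decrypted and decompressed first, and are wrapped in a `DatabaseActivityMonitoringRecord` object.

```python
import json

# Illustrative sketch: read the fields described above from a decoded,
# abridged activity stream record. Field values here are invented examples.
record = json.loads("""
{
  "databaseActivityEventList": [
    {"class": "QUERY",
     "clientApplication": "psql",
     "command": "SELECT",
     "commandText": "SELECT * FROM orders",
     "engineNativeAuditFields": {"OS_USERNAME": "appsrv"}}
  ]
}
""")

for event in record["databaseActivityEventList"]:
    # engineNativeAuditFields is present only when you enable it.
    print(event["command"], event.get("engineNativeAuditFields", {}))
```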

#### DB parameter group override
<a name="DBActivityStreams.Overview.unified-auditing.parameter-group"></a>

Typically, you turn on unified auditing in RDS for Oracle by attaching a parameter group. However, Database Activity Streams require additional configuration. To improve your customer experience, Amazon RDS performs the following:
+ If you activate an activity stream, RDS for Oracle ignores the auditing parameters in the parameter group.
+ If you deactivate an activity stream, RDS for Oracle stops ignoring the auditing parameters.

The database activity stream for SQL Server is independent of any parameters you set in the SQL Audit option.

### Asynchronous mode for database activity streams
<a name="DBActivityStreams.Overview.sync-mode"></a>

Activity streams in Amazon RDS are always asynchronous. When a database session generates an activity stream event, the session returns to normal activities immediately. In the background, Amazon RDS makes the activity stream event into a durable record.

If an error occurs in the background task, Amazon RDS generates an event. This event indicates the beginning and end of any time windows where activity stream event records might have been lost. Asynchronous mode favors database performance over the accuracy of the activity stream.

### Requirements and limitations for database activity streams
<a name="DBActivityStreams.Overview.requirements"></a>

In RDS, database activity streams have the following requirements and limitations:
+ Amazon Kinesis is required for database activity streams.
+ AWS Key Management Service (AWS KMS) is required for database activity streams because they are always encrypted.
+ Applying additional encryption to your Amazon Kinesis data stream is incompatible with database activity streams, which are already encrypted with your AWS KMS key.
+ You create and manage audit policies or specifications yourself. Unlike Amazon Aurora, Amazon RDS doesn't capture database activities by default.
+ In a Multi-AZ deployment, start the database activity stream only on the primary DB instance. The activity stream audits both the primary and standby DB instances automatically. No additional steps are required during a failover.
+ Renaming a DB instance doesn't create a new Kinesis stream.
+ CDBs aren't supported for RDS for Oracle.
+ Read replicas aren't supported.

### Region and version availability
<a name="DBActivityStreams.RegionVersionAvailability"></a>

Feature availability and support varies across specific versions of each database engine, and across AWS Regions. For more information on version and Region availability with database activity streams, see [Supported Regions and DB engines for database activity streams in Amazon RDS](Concepts.RDS_Fea_Regions_DB-eng.Feature.DBActivityStreams.md).

### Supported DB instance classes for database activity streams
<a name="DBActivityStreams.Overview.requirements.classes"></a>

For RDS for Oracle you can use database activity streams with the following DB instance classes:
+ db.m4.\*large
+ db.m5.\*large
+ db.m5d.\*large
+ db.m6i.\*large
+ db.r4.\*large
+ db.r5.\*large
+ db.r5.\*large.tpc\*.mem\*x
+ db.r5b.\*large
+ db.r5b.\*large.tpc\*.mem\*x
+ db.r5d.\*large
+ db.r6i.\*large
+ db.r6i.\*large.tpc\*.mem\*x
+ db.x2idn.\*large
+ db.x2iedn.\*large
+ db.x2iezn.\*large
+ db.z1d.\*large

For RDS for SQL Server you can use database activity streams with the following DB instance classes:
+ db.m4.\*large
+ db.m5.\*large
+ db.m5d.\*large
+ db.m6i.\*large
+ db.r4.\*large
+ db.r5.\*large
+ db.r5b.\*large
+ db.r5d.\*large
+ db.r6i.\*large
+ db.x1e.\*large
+ db.x2iedn.\*large
+ db.z1d.\*large

For more information about instance class types, see [DB instance classes](Concepts.DBInstanceClass.md).

# Configuring unified auditing for Oracle Database
<a name="DBActivityStreams.configuring-auditing"></a>

When you configure unified auditing for use with database activity streams, the following situations are possible:
+ Unified auditing isn't configured for your Oracle database.

  In this case, create new policies with the `CREATE AUDIT POLICY` command, then activate them with the `AUDIT POLICY` command. The following example creates and activates a policy to monitor users with specific privileges and roles.

  ```
  CREATE AUDIT POLICY table_pol
  PRIVILEGES CREATE ANY TABLE, DROP ANY TABLE
  ROLES emp_admin, sales_admin;
  
  AUDIT POLICY table_pol;
  ```

  For complete instructions, see [Configuring Audit Policies](https://docs.oracle.com/en/database/oracle/oracle-database/19/dbseg/configuring-audit-policies.html#GUID-22CDB667-5AA2-4051-A262-FBD0236763CB) in the Oracle Database documentation.
+ Unified auditing is configured for your Oracle database.

  When you activate a database activity stream, RDS for Oracle automatically clears existing audit data. It also revokes audit trail privileges. RDS for Oracle can no longer do the following:
  + Purge unified audit trail records.
  + Add, delete, or modify the unified audit policy.
  + Update the last archived timestamp.
**Important**  
We strongly recommend that you back up your audit data before activating a database activity stream.

  For a description of the `UNIFIED_AUDIT_TRAIL` view, see [UNIFIED\_AUDIT\_TRAIL](https://docs.oracle.com/database/121/REFRN/GUID-B7CE1C02-2FD4-47D6-80AA-CF74A60CDD1D.htm#REFRN29162). If you have an account with Oracle Support, see [How To Purge The UNIFIED AUDIT TRAIL](https://support.oracle.com/knowledge/Oracle%20Database%20Products/1582627_1.html).

# Configuring auditing policy for Amazon RDS for Microsoft SQL Server
<a name="DBActivityStreams.configuring-auditing-SQLServer"></a>

A SQL Server database instance has the server audit `RDS_DAS_AUDIT`, which is managed by Amazon RDS. You can define the policies to record server events in the server audit specification `RDS_DAS_SERVER_AUDIT_SPEC`. You can create a database audit specification, such as `RDS_DAS_DB_<name>`, and define the policies to record database events. For the list of server and database level audit action groups, see [SQL Server Audit Action Groups and Actions](https://learn.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-action-groups-and-actions) in the *Microsoft SQL Server documentation*.

The default server policy monitors only failed logins and changes to any database or server audit specifications for database activity streams.

Limitations for the audit and audit specifications include the following:
+ You can't modify the server or database audit specifications when the database activity stream is in a *locked* state.
+ You can't modify the server audit `RDS_DAS_AUDIT` specification.
+ You can't modify the SQL Server audit `RDS_DAS_CHANGES` or its related server audit specification `RDS_DAS_CHANGES_AUDIT_SPEC`.
+ When creating a database audit specification, you must use the format `RDS_DAS_DB_<name>`, for example, `RDS_DAS_DB_databaseActions`.

**Important**  
For smaller instance classes, we recommend that you audit only the data that you require rather than all data. Doing so helps reduce the performance impact of database activity streams on these instance classes.

The following sample code modifies the server audit specification `RDS_DAS_SERVER_AUDIT_SPEC` and audits any logout and successful login actions:

```
ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC]
      WITH (STATE=OFF);
ALTER SERVER AUDIT SPECIFICATION [RDS_DAS_SERVER_AUDIT_SPEC]
      ADD (LOGOUT_GROUP),
      ADD (SUCCESSFUL_LOGIN_GROUP)
      WITH (STATE = ON );
```

The following sample code creates a database audit specification `RDS_DAS_DB_database_spec` and attaches it to the server audit `RDS_DAS_AUDIT`:

```
USE testDB;
CREATE DATABASE AUDIT SPECIFICATION [RDS_DAS_DB_database_spec]
     FOR SERVER AUDIT [RDS_DAS_AUDIT]
     ADD ( INSERT, UPDATE, DELETE  
          ON testTable BY testUser )  
     WITH (STATE = ON);
```

After the audit specifications are configured, make sure that the specifications `RDS_DAS_SERVER_AUDIT_SPEC` and `RDS_DAS_DB_<name>` are set to a state of `ON`. Now they can send the audit data to your database activity stream.

# Starting a database activity stream
<a name="DBActivityStreams.Enabling"></a>

When you start an activity stream for the DB instance, each database activity event that you configured in the audit policy generates an activity stream event. SQL commands such as `CONNECT` and `SELECT` generate access events. SQL commands such as `CREATE` and `INSERT` generate change events.

**Important**  
Turning on an activity stream for an Oracle DB instance clears existing audit data. It also revokes audit trail privileges. When the stream is enabled, RDS for Oracle can no longer do the following:  
Purge unified audit trail records.
Add, delete, or modify the unified audit policy.
Update the last archived time stamp.

------
#### [ Console ]

**To start a database activity stream**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the Amazon RDS database instance on which you want to start an activity stream. In a Multi-AZ deployment, start the stream on only the primary instance. The activity stream audits both the primary and the standby instances.

1. For **Actions**, choose **Start activity stream**. 

   The **Start database activity stream: *name*** window appears, where *name* is your RDS instance.

1. Enter the following settings:
   + For **AWS KMS key**, choose a key from the list of AWS KMS keys.

     Amazon RDS uses the KMS key to encrypt the key that in turn encrypts database activity. Choose a KMS key other than the default key. For more information about encryption keys and AWS KMS, see [What is AWS Key Management Service?](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) in the *AWS Key Management Service Developer Guide.*
   + For **Database activity events**, choose **Enable engine-native audit fields** to include the engine-specific audit fields.
   + Choose **Immediately**.

     When you choose **Immediately**, the RDS instance restarts right away. If you choose **During the next maintenance window**, the RDS instance doesn't restart right away. In this case, the database activity stream doesn't start until the next maintenance window.

1. Choose **Start database activity stream**.

   The status for the database shows that the activity stream is starting.
**Note**  
If you get the error `You can't start a database activity stream in this configuration`, check [Supported DB instance classes for database activity streams](DBActivityStreams.md#DBActivityStreams.Overview.requirements.classes) to see whether your RDS instance is using a supported instance class.

------
#### [ AWS CLI ]

To start database activity streams for a DB instance, configure the database using the [start-activity-stream](https://docs.aws.amazon.com/cli/latest/reference/rds/start-activity-stream.html) AWS CLI command.
+ `--resource-arn arn` – Specifies the Amazon Resource Name (ARN) of the DB instance.
+ `--kms-key-id key` – Specifies the KMS key identifier for encrypting messages in the database activity stream. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the AWS KMS key.
+ `--engine-native-audit-fields-included` – Includes engine-specific auditing fields in the data stream. To exclude these fields, specify `--no-engine-native-audit-fields-included` (default).

The following example starts a database activity stream for a DB instance in asynchronous mode.

For Linux, macOS, or Unix:

```
aws rds start-activity-stream \
    --mode async \
    --kms-key-id my-kms-key-arn \
    --resource-arn my-instance-arn \
    --engine-native-audit-fields-included \
    --apply-immediately
```

For Windows:

```
aws rds start-activity-stream ^
    --mode async ^
    --kms-key-id my-kms-key-arn ^
    --resource-arn my-instance-arn ^
    --engine-native-audit-fields-included ^
    --apply-immediately
```

------
#### [ Amazon RDS API ]

To start database activity streams for a DB instance, configure the instance using the [StartActivityStream](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_StartActivityStream.html) operation.

Call the operation with the following parameters:
+ `Region`
+ `KmsKeyId`
+ `ResourceArn`
+ `Mode`
+ `EngineNativeAuditFieldsIncluded`
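As a rough sketch of the same call from an AWS SDK, the following Python helper assembles the `StartActivityStream` parameters. The ARN and key ID are placeholder values, not real resources, and the boto3 call itself is shown only as a commented hint.

```python
# Build the keyword arguments for the RDS StartActivityStream operation.
# The ResourceArn and KmsKeyId values below are placeholders.

def start_activity_stream_params(resource_arn, kms_key_id,
                                 native_fields=True, apply_immediately=True):
    """Assemble StartActivityStream parameters for an RDS DB instance."""
    return {
        "ResourceArn": resource_arn,
        "KmsKeyId": kms_key_id,
        "Mode": "async",  # RDS activity streams are always asynchronous
        "EngineNativeAuditFieldsIncluded": native_fields,
        "ApplyImmediately": apply_immediately,
    }

params = start_activity_stream_params(
    "arn:aws:rds:us-east-1:123456789012:db:my-instance",
    "my-kms-key-arn",
)

# To issue the call (requires credentials and a real instance):
# import boto3
# rds = boto3.client("rds", region_name="us-east-1")
# response = rds.start_activity_stream(**params)
```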

------

# Modifying a database activity stream for Amazon RDS
<a name="DBActivityStreams.Modifying"></a>

You might want to customize your Amazon RDS audit policy when your activity stream is started. If you don't want to lose time and data by stopping your activity stream, you can change the *audit policy state* to either of the following settings:

**Locked (default)**  
The audit policies in your database are read-only.

**Unlocked**  
The audit policies in your database are read/write.

The basic steps are as follows:

1. Modify the audit policy state to unlocked.

1. Customize your audit policy.

1. Modify the audit policy state to locked.

## Console
<a name="DBActivityStreams.Modifying-collapsible-section-E1"></a>

**To modify the audit policy state of your activity stream**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. For **Actions**, choose **Modify database activity stream**. 

   The **Modify database activity stream: *name*** window appears, where *name* is your RDS instance.

1. Choose either of the following options:  
**Locked**  
When you lock your audit policy, it becomes read-only. You can't edit your audit policy unless you unlock the policy or stop the activity stream.  
**Unlocked**  
When you unlock your audit policy, it becomes read/write. You can edit your audit policy while the activity stream is started.

1. Choose **Modify DB activity stream**.

   The status for the Amazon RDS database shows **Configuring activity stream**.

1. (Optional) Choose the DB instance link. Then choose the **Configuration** tab.

   The **Audit policy status** field shows one of the following values:
   + **Locked**
   + **Unlocked**
   + **Locking policy**
   + **Unlocking policy**

## AWS CLI
<a name="DBActivityStreams.Modifying-collapsible-section-E2"></a>

To modify the activity stream state for the database instance, use the [modify-activity-stream](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-activity-stream.html) AWS CLI command.


****  

| Option | Required? | Description | 
| --- | --- | --- | 
|  `--resource-arn my-instance-ARN`  |  Yes  |  The Amazon Resource Name (ARN) of your RDS database instance.  | 
|  `--audit-policy-state`  |  No  |  The new state of the audit policy for the database activity stream on your instance: `locked` or `unlocked`.  | 

The following example unlocks the audit policy for the activity stream started on *my-instance-ARN*.

For Linux, macOS, or Unix:

```
aws rds modify-activity-stream \
    --resource-arn my-instance-ARN \
    --audit-policy-state unlocked
```

For Windows:

```
aws rds modify-activity-stream ^
    --resource-arn my-instance-ARN ^
    --audit-policy-state unlocked
```

The following example describes the instance *my-instance*. The partial sample output shows that the audit policy is unlocked.

```
aws rds describe-db-instances --db-instance-identifier my-instance

{
    "DBInstances": [
        {
            ...
            "Engine": "oracle-ee",
            ...
            "ActivityStreamStatus": "started",
            "ActivityStreamKmsKeyId": "ab12345e-1111-2bc3-12a3-ab1cd12345e",
            "ActivityStreamKinesisStreamName": "aws-rds-das-db-AB1CDEFG23GHIJK4LMNOPQRST",
            "ActivityStreamMode": "async",
            "ActivityStreamEngineNativeAuditFieldsIncluded": true, 
            "ActivityStreamPolicyStatus": "unlocked",
            ...
        }
    ]
}
```

## RDS API
<a name="DBActivityStreams.Modifying-collapsible-section-E3"></a>

To modify the policy state of your database activity stream, use the [ModifyActivityStream](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyActivityStream.html) operation.

Call the operation with the following parameters:
+ `AuditPolicyState`
+ `ResourceArn`

# Getting the status of a database activity stream
<a name="DBActivityStreams.Status"></a>

You can get the status of an activity stream for your Amazon RDS database instance using the console or AWS CLI.

## Console
<a name="DBActivityStreams.Status-collapsible-section-S1"></a>

**To get the status of a database activity stream**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance link.

1. Choose the **Configuration** tab, and check **Database activity stream** for status.

## AWS CLI
<a name="DBActivityStreams.Status-collapsible-section-S2"></a>

You can get the activity stream configuration for a database instance as the response to a [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) CLI request.

The following example describes *my-instance*.

```
aws rds --region my-region describe-db-instances --db-instance-identifier my-instance
```

The following JSON response shows these fields:
+ `ActivityStreamKinesisStreamName`
+ `ActivityStreamKmsKeyId`
+ `ActivityStreamStatus`
+ `ActivityStreamMode`
+ `ActivityStreamPolicyStatus`



```
{
    "DBInstances": [
        {
            ...
            "Engine": "oracle-ee",
            ...
            "ActivityStreamStatus": "starting",
            "ActivityStreamKmsKeyId": "ab12345e-1111-2bc3-12a3-ab1cd12345e",
            "ActivityStreamKinesisStreamName": "aws-rds-das-db-AB1CDEFG23GHIJK4LMNOPQRST",
            "ActivityStreamMode": "async",
            "ActivityStreamEngineNativeAuditFieldsIncluded": true, 
            "ActivityStreamPolicyStatus": "locked",
            ...
        }
    ]
}
```
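When automating status checks, the response can be read programmatically. The following sketch parses a trimmed, hypothetical `describe-db-instances` response (the stream name and field values mirror the example above) and collects the activity stream fields per instance.

```python
import json

# Trimmed, hypothetical describe-db-instances response for illustration.
response = json.loads("""
{
    "DBInstances": [
        {
            "Engine": "oracle-ee",
            "ActivityStreamStatus": "starting",
            "ActivityStreamMode": "async",
            "ActivityStreamKinesisStreamName": "aws-rds-das-db-AB1CDEFG23GHIJK4LMNOPQRST",
            "ActivityStreamPolicyStatus": "locked"
        }
    ]
}
""")

def activity_stream_config(resp):
    """Map each Kinesis stream name to its activity stream settings."""
    return {
        inst.get("ActivityStreamKinesisStreamName"): {
            "status": inst.get("ActivityStreamStatus"),
            "mode": inst.get("ActivityStreamMode"),
            "policy": inst.get("ActivityStreamPolicyStatus"),
        }
        for inst in resp["DBInstances"]
    }

config = activity_stream_config(response)
```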

## RDS API
<a name="DBActivityStreams.Status-collapsible-section-S3"></a>

You can get the activity stream configuration for a database as the response to a [DescribeDBInstances](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) operation.

# Stopping a database activity stream
<a name="DBActivityStreams.Disabling"></a>

You can stop an activity stream using the console or AWS CLI.

If you delete your Amazon RDS database instance, the activity stream is stopped and the underlying Amazon Kinesis stream is deleted automatically.

## Console
<a name="DBActivityStreams.Disabling-collapsible-section-D1"></a>

**To turn off an activity stream**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose a database that you want to stop the database activity stream for.

1. For **Actions**, choose **Stop activity stream**. The **Database Activity Stream** window appears.

   1. Choose **Immediately**.

      When you choose **Immediately**, the RDS instance restarts right away. If you choose **During the next maintenance window**, the RDS instance doesn't restart right away. In this case, the database activity stream doesn't stop until the next maintenance window.

   1. Choose **Continue**.

## AWS CLI
<a name="DBActivityStreams.Disabling-collapsible-section-D2"></a>

To stop database activity streams for your database, configure the DB instance using the AWS CLI command [stop-activity-stream](https://docs.aws.amazon.com/cli/latest/reference/rds/stop-activity-stream.html). Identify the AWS Region for the DB instance using the `--region` parameter. The `--apply-immediately` parameter is optional.

For Linux, macOS, or Unix:

```
aws rds --region MY_REGION \
    stop-activity-stream \
    --resource-arn MY_DB_ARN \
    --apply-immediately
```

For Windows:

```
aws rds --region MY_REGION ^
    stop-activity-stream ^
    --resource-arn MY_DB_ARN ^
    --apply-immediately
```

## RDS API
<a name="DBActivityStreams.Disabling-collapsible-section-D3"></a>

To stop database activity streams for your database, configure the DB instance using the [StopActivityStream](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_StopActivityStream.html) operation. Identify the AWS Region for the DB instance using the `Region` parameter. The `ApplyImmediately` parameter is optional.

# Monitoring database activity streams
<a name="DBActivityStreams.Monitoring"></a>

Database activity streams monitor and report activities. The stream of activity is collected and transmitted to Amazon Kinesis. From Kinesis, you can monitor the activity stream, or other services and applications can consume the activity stream for further analysis. You can find the underlying Kinesis stream name by using the AWS CLI command `describe-db-instances` or the RDS API `DescribeDBInstances` operation.

Amazon RDS manages the Kinesis stream for you as follows:
+ Amazon RDS creates the Kinesis stream automatically with a 24-hour retention period.
+ Amazon RDS scales the Kinesis stream if necessary.
+ If you stop the database activity stream or delete the DB instance, Amazon RDS deletes the Kinesis stream.

The following categories of activity are monitored and put in the activity stream audit log:
+ **SQL commands** – All SQL commands are audited, including prepared statements, built-in functions, and functions in PL/SQL. Calls to stored procedures are audited, as are any SQL statements issued inside stored procedures or functions.
+ **Other database information** – Activity monitored includes the full SQL statement, the row count of affected rows from DML commands, accessed objects, and the unique database name. Database activity streams also monitor the bind variables and stored procedure parameters. 
**Important**  
The full SQL text of each statement is visible in the activity stream audit log, including any sensitive data. However, database user passwords are redacted if Oracle can determine them from the context, such as in the following SQL statement.   

  ```
  ALTER ROLE role-name WITH password
  ```
+ **Connection information** – Activity monitored includes session and network information, the server process ID, and exit codes.

If an activity stream has a failure while monitoring your DB instance, you are notified through RDS events.

The following sections describe how to access, audit, and process database activity streams.

**Topics**
+ [Accessing an activity stream from Amazon Kinesis](DBActivityStreams.KinesisAccess.md)
+ [Audit log contents and examples for database activity streams](DBActivityStreams.AuditLog.md)
+ [databaseActivityEventList JSON array for database activity streams](DBActivityStreams.AuditLog.databaseActivityEventList.md)
+ [Processing a database activity stream using the AWS SDK](DBActivityStreams.CodeExample.md)

# Accessing an activity stream from Amazon Kinesis
<a name="DBActivityStreams.KinesisAccess"></a>

When you enable an activity stream for a database, a Kinesis stream is created for you. From Kinesis, you can monitor your database activity in real time. To further analyze database activity, you can connect your Kinesis stream to consumer applications. You can also connect the stream to compliance management applications such as IBM's Security Guardium or Imperva's SecureSphere Database Audit and Protection.

You can access your Kinesis stream either from the RDS console or the Kinesis console.

**To access an activity stream from Kinesis using the RDS console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the Amazon RDS database instance on which you started an activity stream.

1. Choose **Configuration**.

1. Under **Database activity stream**, choose the link under **Kinesis stream**.

1. In the Kinesis console, choose **Monitoring** to begin observing the database activity.

**To access an activity stream from Kinesis using the Kinesis console**

1. Open the Kinesis console at [https://console.aws.amazon.com/kinesis](https://console.aws.amazon.com/kinesis).

1. Choose your activity stream from the list of Kinesis streams.

   An activity stream's name includes the prefix `aws-rds-das-db-` followed by the resource ID of the database. The following is an example. 

   ```
   aws-rds-das-db-NHVOV4PCLWHGF52NP
   ```

   To use the Amazon RDS console to find the resource ID for the database, choose your DB instance from the list of databases, and then choose the **Configuration** tab.

   To use the AWS CLI to find the full Kinesis stream name for an activity stream, use a [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) CLI request and note the value of `ActivityStreamKinesisStreamName` in the response.

1. Choose **Monitoring** to begin observing the database activity.

For more information about using Amazon Kinesis, see [What Is Amazon Kinesis Data Streams?](https://docs.aws.amazon.com/streams/latest/dev/introduction.html).
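The naming convention above can be expressed in code when wiring up a consumer. The following sketch derives the stream name from a DB resource ID, assuming the resource ID carries its `db-` prefix as shown in the console; the resource ID is the example value from this section, not a real instance.

```python
# An activity stream name is "aws-rds-das-" followed by the DB resource ID
# (for example "db-NHVOV4PCLWHGF52NP"). The ID below is illustrative only.
PREFIX = "aws-rds-das-"

def kinesis_stream_name(resource_id):
    """Derive the Kinesis stream name from a DB resource ID."""
    if not resource_id.startswith("db-"):
        raise ValueError("expected a DB resource ID such as db-ABC123")
    return PREFIX + resource_id

name = kinesis_stream_name("db-NHVOV4PCLWHGF52NP")
```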

# Audit log contents and examples for database activity streams
<a name="DBActivityStreams.AuditLog"></a>

Monitored events are represented in the database activity stream as JSON strings. The structure consists of a JSON object containing a `DatabaseActivityMonitoringRecord`, which in turn contains a `databaseActivityEventList` array of activity events. 

**Note**  
For database activity streams, the `paramList` JSON array doesn't include null values from Hibernate applications.
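Once a record is decrypted and decompressed (the linked processing example covers that step), it can be walked as plain JSON. The following sketch iterates a trimmed, hypothetical `databaseActivityEventList` to pick out failed logins; real records contain many more fields, as the examples below show.

```python
import json

# Trimmed, hypothetical decrypted record for illustration only.
record = json.loads("""
{
    "type": "DatabaseActivityMonitoringRecord",
    "instanceId": "db-EXAMPLE",
    "databaseActivityEventList": [
        {"class": "LOGIN", "command": "LOGIN FAILED", "dbUserName": "test"},
        {"class": "Standard", "command": "CREATE TABLE", "dbUserName": "admin"}
    ]
}
""")

def failed_logins(rec):
    """Return the user names behind any failed-login events in one record."""
    return [event["dbUserName"]
            for event in rec.get("databaseActivityEventList", [])
            if event.get("command") == "LOGIN FAILED"]

users = failed_logins(record)
```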

**Topics**
+ [Examples of an audit log for an activity stream](#DBActivityStreams.AuditLog.Examples)
+ [DatabaseActivityMonitoringRecords JSON object](#DBActivityStreams.AuditLog.DatabaseActivityMonitoringRecords)
+ [databaseActivityEvents JSON Object](#DBActivityStreams.AuditLog.databaseActivityEvents)

## Examples of an audit log for an activity stream
<a name="DBActivityStreams.AuditLog.Examples"></a>

Following are sample decrypted JSON audit logs of activity event records.

**Example Activity event record of a CONNECT SQL statement**  
The following activity event record shows a login with the use of a `CONNECT` SQL statement (`command`) by a JDBC Thin Client (`clientApplication`) for your Oracle DB.  

```
{
    "class": "Standard",
    "clientApplication": "JDBC Thin Client",
    "command": "LOGON",
    "commandText": null,
    "dbid": "0123456789",
    "databaseName": "ORCL",
    "dbProtocol": "oracle",
    "dbUserName": "TEST",
    "endTime": null,
    "errorMessage": null,
    "exitCode": 0,
    "logTime": "2021-01-15 00:15:36.233787",
    "netProtocol": "tcp",
    "objectName": null,
    "objectType": null,
    "paramList": [],
    "pid": 17904,
    "remoteHost": "123.456.789.012",
    "remotePort": "25440",
    "rowCount": null,
    "serverHost": "987.654.321.098",
    "serverType": "oracle",
    "serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",
    "serviceName": "oracle-ee",
    "sessionId": 987654321,
    "startTime": null,
    "statementId": 1,
    "substatementId": null,
    "transactionId": "0000000000000000",
    "engineNativeAuditFields": {
        "UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",
        "FGA_POLICY_NAME": null,
        "DV_OBJECT_STATUS": null,
        "SYSTEM_PRIVILEGE_USED": "CREATE SESSION",
        "OLS_LABEL_COMPONENT_TYPE": null,
        "XS_SESSIONID": null,
        "ADDITIONAL_INFO": null,
        "INSTANCE_ID": 1,
        "DBID": 123456789,
        "DV_COMMENT": null,
        "RMAN_SESSION_STAMP": null,
        "NEW_NAME": null,
        "DV_ACTION_NAME": null,
        "OLS_PROGRAM_UNIT_NAME": null,
        "OLS_STRING_LABEL": null,
        "RMAN_SESSION_RECID": null,
        "OBJECT_PRIVILEGES": null,
        "OLS_OLD_VALUE": null,
        "XS_TARGET_PRINCIPAL_NAME": null,
        "XS_NS_ATTRIBUTE": null,
        "XS_NS_NAME": null,
        "DBLINK_INFO": null,
        "AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((ADDRESS\u003d(PROTOCOL\u003dtcp)(HOST\u003d205.251.233.183)(PORT\u003d25440))));",
        "OBJECT_EDITION": null,
        "OLS_PRIVILEGES_GRANTED": null,
        "EXCLUDED_USER": null,
        "DV_ACTION_OBJECT_NAME": null,
        "OLS_LABEL_COMPONENT_NAME": null,
        "EXCLUDED_SCHEMA": null,
        "DP_TEXT_PARAMETERS1": null,
        "XS_USER_NAME": null,
        "XS_ENABLED_ROLE": null,
        "XS_NS_ATTRIBUTE_NEW_VAL": null,
        "DIRECT_PATH_NUM_COLUMNS_LOADED": null,
        "AUDIT_OPTION": null,
        "DV_EXTENDED_ACTION_CODE": null,
        "XS_PACKAGE_NAME": null,
        "OLS_NEW_VALUE": null,
        "DV_RETURN_CODE": null,
        "XS_CALLBACK_EVENT_TYPE": null,
        "USERHOST": "a1b2c3d4e5f6.amazon.com",
        "GLOBAL_USERID": null,
        "CLIENT_IDENTIFIER": null,
        "RMAN_OPERATION": null,
        "TERMINAL": "unknown",
        "OS_USERNAME": "sumepate",
        "OLS_MAX_READ_LABEL": null,
        "XS_PROXY_USER_NAME": null,
        "XS_DATASEC_POLICY_NAME": null,
        "DV_FACTOR_CONTEXT": null,
        "OLS_MAX_WRITE_LABEL": null,
        "OLS_PARENT_GROUP_NAME": null,
        "EXCLUDED_OBJECT": null,
        "DV_RULE_SET_NAME": null,
        "EXTERNAL_USERID": null,
        "EXECUTION_ID": null,
        "ROLE": null,
        "PROXY_SESSIONID": 0,
        "DP_BOOLEAN_PARAMETERS1": null,
        "OLS_POLICY_NAME": null,
        "OLS_GRANTEE": null,
        "OLS_MIN_WRITE_LABEL": null,
        "APPLICATION_CONTEXTS": null,
        "XS_SCHEMA_NAME": null,
        "DV_GRANTEE": null,
        "XS_COOKIE": null,
        "DBPROXY_USERNAME": null,
        "DV_ACTION_CODE": null,
        "OLS_PRIVILEGES_USED": null,
        "RMAN_DEVICE_TYPE": null,
        "XS_NS_ATTRIBUTE_OLD_VAL": null,
        "TARGET_USER": null,
        "XS_ENTITY_TYPE": null,
        "ENTRY_ID": 1,
        "XS_PROCEDURE_NAME": null,
        "XS_INACTIVITY_TIMEOUT": null,
        "RMAN_OBJECT_TYPE": null,
        "SYSTEM_PRIVILEGE": null,
        "NEW_SCHEMA": null,
        "SCN": 5124715
    }
}
```
The following activity event record shows a login failure for your SQL Server DB.  

```
{
    "type": "DatabaseActivityMonitoringRecord",
    "clusterId": "",
    "instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
    "databaseActivityEventList": [
        {
            "class": "LOGIN",
            "clientApplication": "Microsoft SQL Server Management Studio",
            "command": "LOGIN FAILED",
            "commandText": "Login failed for user 'test'. Reason: Password did not match that for the login provided. [CLIENT: local-machine]",
            "databaseName": "",
            "dbProtocol": "SQLSERVER",
            "dbUserName": "test",
            "endTime": null,
            "errorMessage": null,
            "exitCode": 0,
            "logTime": "2022-10-06 21:34:42.7113072+00",
            "netProtocol": null,
            "objectName": "",
            "objectType": "LOGIN",
            "paramList": null,
            "pid": null,
            "remoteHost": "local machine",
            "remotePort": null,
            "rowCount": 0,
            "serverHost": "172.31.30.159",
            "serverType": "SQLSERVER",
            "serverVersion": "15.00.4073.23.v1.R1",
            "serviceName": "sqlserver-ee",
            "sessionId": 0,
            "startTime": null,
            "statementId": "0x1eb0d1808d34a94b9d3dcf5432750f02",
            "substatementId": 1,
            "transactionId": "0",
            "type": "record",
            "engineNativeAuditFields": {
                "target_database_principal_id": 0,
                "target_server_principal_id": 0,
                "target_database_principal_name": "",
                "server_principal_id": 0,
                "user_defined_information": "",
                "response_rows": 0,
                "database_principal_name": "",
                "target_server_principal_name": "",
                "schema_name": "",
                "is_column_permission": false,
                "object_id": 0,
                "server_instance_name": "EC2AMAZ-NFUJJNO",
                "target_server_principal_sid": null,
                "additional_information": "<action_info xmlns=\"http://schemas.microsoft.com/sqlserver/2008/sqlaudit_data\"><pooled_connection>0</pooled_connection><error>0x00004818</error><state>8</state><address>local machine</address><PasswordFirstNibbleHash>B</PasswordFirstNibbleHash></action_info>",
                "duration_milliseconds": 0,
                "permission_bitmask": "0x00000000000000000000000000000000",
                "data_sensitivity_information": "",
                "session_server_principal_name": "",
                "connection_id": "98B4F537-0F82-49E3-AB08-B9D33B5893EF",
                "audit_schema_version": 1,
                "database_principal_id": 0,
                "server_principal_sid": null,
                "user_defined_event_id": 0,
                "host_name": "EC2AMAZ-NFUJJNO"
            }
        }
    ]
}
```
If you don't start the activity stream with the `--engine-native-audit-fields-included` option, the last field in the JSON document is `"engineNativeAuditFields": { }`.

**Example Activity event record of a CREATE TABLE statement**  
The following example shows a `CREATE TABLE` event for your Oracle database.  

```
{
    "class": "Standard",
    "clientApplication": "sqlplus@ip-12-34-5-678 (TNS V1-V3)",
    "command": "CREATE TABLE",
    "commandText": "CREATE TABLE persons(\n    person_id NUMBER GENERATED BY DEFAULT AS IDENTITY,\n    first_name VARCHAR2(50) NOT NULL,\n    last_name VARCHAR2(50) NOT NULL,\n    PRIMARY KEY(person_id)\n)",
    "dbid": "0123456789",
    "databaseName": "ORCL",
    "dbProtocol": "oracle",
    "dbUserName": "TEST",
    "endTime": null,
    "errorMessage": null,
    "exitCode": 0,
    "logTime": "2021-01-15 00:22:49.535239",
    "netProtocol": "beq",
    "objectName": "PERSONS",
    "objectType": "TEST",
    "paramList": [],
    "pid": 17687,
    "remoteHost": "123.456.789.0",
    "remotePort": null,
    "rowCount": null,
    "serverHost": "987.654.321.01",
    "serverType": "oracle",
    "serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",
    "serviceName": "oracle-ee",
    "sessionId": 1234567890,
    "startTime": null,
    "statementId": 43,
    "substatementId": null,
    "transactionId": "090011007F0D0000",
    "engineNativeAuditFields": {
        "UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",
        "FGA_POLICY_NAME": null,
        "DV_OBJECT_STATUS": null,
        "SYSTEM_PRIVILEGE_USED": "CREATE SEQUENCE, CREATE TABLE",
        "OLS_LABEL_COMPONENT_TYPE": null,
        "XS_SESSIONID": null,
        "ADDITIONAL_INFO": null,
        "INSTANCE_ID": 1,
        "DV_COMMENT": null,
        "RMAN_SESSION_STAMP": null,
        "NEW_NAME": null,
        "DV_ACTION_NAME": null,
        "OLS_PROGRAM_UNIT_NAME": null,
        "OLS_STRING_LABEL": null,
        "RMAN_SESSION_RECID": null,
        "OBJECT_PRIVILEGES": null,
        "OLS_OLD_VALUE": null,
        "XS_TARGET_PRINCIPAL_NAME": null,
        "XS_NS_ATTRIBUTE": null,
        "XS_NS_NAME": null,
        "DBLINK_INFO": null,
        "AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((PROTOCOL\u003dbeq)(HOST\u003d123.456.789.0)));",
        "OBJECT_EDITION": null,
        "OLS_PRIVILEGES_GRANTED": null,
        "EXCLUDED_USER": null,
        "DV_ACTION_OBJECT_NAME": null,
        "OLS_LABEL_COMPONENT_NAME": null,
        "EXCLUDED_SCHEMA": null,
        "DP_TEXT_PARAMETERS1": null,
        "XS_USER_NAME": null,
        "XS_ENABLED_ROLE": null,
        "XS_NS_ATTRIBUTE_NEW_VAL": null,
        "DIRECT_PATH_NUM_COLUMNS_LOADED": null,
        "AUDIT_OPTION": null,
        "DV_EXTENDED_ACTION_CODE": null,
        "XS_PACKAGE_NAME": null,
        "OLS_NEW_VALUE": null,
        "DV_RETURN_CODE": null,
        "XS_CALLBACK_EVENT_TYPE": null,
        "USERHOST": "ip-10-13-0-122",
        "GLOBAL_USERID": null,
        "CLIENT_IDENTIFIER": null,
        "RMAN_OPERATION": null,
        "TERMINAL": "pts/1",
        "OS_USERNAME": "rdsdb",
        "OLS_MAX_READ_LABEL": null,
        "XS_PROXY_USER_NAME": null,
        "XS_DATASEC_POLICY_NAME": null,
        "DV_FACTOR_CONTEXT": null,
        "OLS_MAX_WRITE_LABEL": null,
        "OLS_PARENT_GROUP_NAME": null,
        "EXCLUDED_OBJECT": null,
        "DV_RULE_SET_NAME": null,
        "EXTERNAL_USERID": null,
        "EXECUTION_ID": null,
        "ROLE": null,
        "PROXY_SESSIONID": 0,
        "DP_BOOLEAN_PARAMETERS1": null,
        "OLS_POLICY_NAME": null,
        "OLS_GRANTEE": null,
        "OLS_MIN_WRITE_LABEL": null,
        "APPLICATION_CONTEXTS": null,
        "XS_SCHEMA_NAME": null,
        "DV_GRANTEE": null,
        "XS_COOKIE": null,
        "DBPROXY_USERNAME": null,
        "DV_ACTION_CODE": null,
        "OLS_PRIVILEGES_USED": null,
        "RMAN_DEVICE_TYPE": null,
        "XS_NS_ATTRIBUTE_OLD_VAL": null,
        "TARGET_USER": null,
        "XS_ENTITY_TYPE": null,
        "ENTRY_ID": 12,
        "XS_PROCEDURE_NAME": null,
        "XS_INACTIVITY_TIMEOUT": null,
        "RMAN_OBJECT_TYPE": null,
        "SYSTEM_PRIVILEGE": null,
        "NEW_SCHEMA": null,
        "SCN": 5133083
    }
}
```
The following example shows a `CREATE TABLE` event for your SQL Server database.  

```
{
    "type": "DatabaseActivityMonitoringRecord",
    "clusterId": "",
    "instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
    "databaseActivityEventList": [
        {
            "class": "SCHEMA",
            "clientApplication": "Microsoft SQL Server Management Studio - Query",
            "command": "ALTER",
            "commandText": "Create table [testDB].[dbo].[TestTable2](\r\ntextA varchar(6000),\r\n    textB varchar(6000)\r\n)",
            "databaseName": "testDB",
            "dbProtocol": "SQLSERVER",
            "dbUserName": "test",
            "endTime": null,
            "errorMessage": null,
            "exitCode": 1,
            "logTime": "2022-10-06 21:44:38.4120677+00",
            "netProtocol": null,
            "objectName": "dbo",
            "objectType": "SCHEMA",
            "paramList": null,
            "pid": null,
            "remoteHost": "local machine",
            "remotePort": null,
            "rowCount": 0,
            "serverHost": "172.31.30.159",
            "serverType": "SQLSERVER",
            "serverVersion": "15.00.4073.23.v1.R1",
            "serviceName": "sqlserver-ee",
            "sessionId": 84,
            "startTime": null,
            "statementId": "0x5178d33d56e95e419558b9607158a5bd",
            "substatementId": 1,
            "transactionId": "4561864",
            "type": "record",
            "engineNativeAuditFields": {
                "target_database_principal_id": 0,
                "target_server_principal_id": 0,
                "target_database_principal_name": "",
                "server_principal_id": 2,
                "user_defined_information": "",
                "response_rows": 0,
                "database_principal_name": "dbo",
                "target_server_principal_name": "",
                "schema_name": "",
                "is_column_permission": false,
                "object_id": 1,
                "server_instance_name": "EC2AMAZ-NFUJJNO",
                "target_server_principal_sid": null,
                "additional_information": "",
                "duration_milliseconds": 0,
                "permission_bitmask": "0x00000000000000000000000000000000",
                "data_sensitivity_information": "",
                "session_server_principal_name": "test",
                "connection_id": "EE1FE3FD-EF2C-41FD-AF45-9051E0CD983A",
                "audit_schema_version": 1,
                "database_principal_id": 1,
                "server_principal_sid": "0x010500000000000515000000bdc2795e2d0717901ba6998cf4010000",
                "user_defined_event_id": 0,
                "host_name": "EC2AMAZ-NFUJJNO"
            }
        }
    ]
}
```

**Example Activity event record of a SELECT statement**  
The following example shows a `SELECT` event for your Oracle database.  

```
{
    "class": "Standard",
    "clientApplication": "sqlplus@ip-12-34-5-678 (TNS V1-V3)",
    "command": "SELECT",
    "commandText": "select count(*) from persons",
    "databaseName": "1234567890",
    "dbProtocol": "oracle",
    "dbUserName": "TEST",
    "endTime": null,
    "errorMessage": null,
    "exitCode": 0,
    "logTime": "2021-01-15 00:25:18.850375",
    "netProtocol": "beq",
    "objectName": "PERSONS",
    "objectType": "TEST",
    "paramList": [],
    "pid": 17687,
    "remoteHost": "123.456.789.0",
    "remotePort": null,
    "rowCount": null,
    "serverHost": "987.654.321.09",
    "serverType": "oracle",
    "serverVersion": "19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3",
    "serviceName": "oracle-ee",
    "sessionId": 1080639707,
    "startTime": null,
    "statementId": 44,
    "substatementId": null,
    "transactionId": null,
    "engineNativeAuditFields": {
        "UNIFIED_AUDIT_POLICIES": "TEST_POL_EVERYTHING",
        "FGA_POLICY_NAME": null,
        "DV_OBJECT_STATUS": null,
        "SYSTEM_PRIVILEGE_USED": null,
        "OLS_LABEL_COMPONENT_TYPE": null,
        "XS_SESSIONID": null,
        "ADDITIONAL_INFO": null,
        "INSTANCE_ID": 1,
        "DV_COMMENT": null,
        "RMAN_SESSION_STAMP": null,
        "NEW_NAME": null,
        "DV_ACTION_NAME": null,
        "OLS_PROGRAM_UNIT_NAME": null,
        "OLS_STRING_LABEL": null,
        "RMAN_SESSION_RECID": null,
        "OBJECT_PRIVILEGES": null,
        "OLS_OLD_VALUE": null,
        "XS_TARGET_PRINCIPAL_NAME": null,
        "XS_NS_ATTRIBUTE": null,
        "XS_NS_NAME": null,
        "DBLINK_INFO": null,
        "AUTHENTICATION_TYPE": "(TYPE\u003d(DATABASE));(CLIENT ADDRESS\u003d((PROTOCOL\u003dbeq)(HOST\u003d123.456.789.0)));",
        "OBJECT_EDITION": null,
        "OLS_PRIVILEGES_GRANTED": null,
        "EXCLUDED_USER": null,
        "DV_ACTION_OBJECT_NAME": null,
        "OLS_LABEL_COMPONENT_NAME": null,
        "EXCLUDED_SCHEMA": null,
        "DP_TEXT_PARAMETERS1": null,
        "XS_USER_NAME": null,
        "XS_ENABLED_ROLE": null,
        "XS_NS_ATTRIBUTE_NEW_VAL": null,
        "DIRECT_PATH_NUM_COLUMNS_LOADED": null,
        "AUDIT_OPTION": null,
        "DV_EXTENDED_ACTION_CODE": null,
        "XS_PACKAGE_NAME": null,
        "OLS_NEW_VALUE": null,
        "DV_RETURN_CODE": null,
        "XS_CALLBACK_EVENT_TYPE": null,
        "USERHOST": "ip-12-34-5-678",
        "GLOBAL_USERID": null,
        "CLIENT_IDENTIFIER": null,
        "RMAN_OPERATION": null,
        "TERMINAL": "pts/1",
        "OS_USERNAME": "rdsdb",
        "OLS_MAX_READ_LABEL": null,
        "XS_PROXY_USER_NAME": null,
        "XS_DATASEC_POLICY_NAME": null,
        "DV_FACTOR_CONTEXT": null,
        "OLS_MAX_WRITE_LABEL": null,
        "OLS_PARENT_GROUP_NAME": null,
        "EXCLUDED_OBJECT": null,
        "DV_RULE_SET_NAME": null,
        "EXTERNAL_USERID": null,
        "EXECUTION_ID": null,
        "ROLE": null,
        "PROXY_SESSIONID": 0,
        "DP_BOOLEAN_PARAMETERS1": null,
        "OLS_POLICY_NAME": null,
        "OLS_GRANTEE": null,
        "OLS_MIN_WRITE_LABEL": null,
        "APPLICATION_CONTEXTS": null,
        "XS_SCHEMA_NAME": null,
        "DV_GRANTEE": null,
        "XS_COOKIE": null,
        "DBPROXY_USERNAME": null,
        "DV_ACTION_CODE": null,
        "OLS_PRIVILEGES_USED": null,
        "RMAN_DEVICE_TYPE": null,
        "XS_NS_ATTRIBUTE_OLD_VAL": null,
        "TARGET_USER": null,
        "XS_ENTITY_TYPE": null,
        "ENTRY_ID": 13,
        "XS_PROCEDURE_NAME": null,
        "XS_INACTIVITY_TIMEOUT": null,
        "RMAN_OBJECT_TYPE": null,
        "SYSTEM_PRIVILEGE": null,
        "NEW_SCHEMA": null,
        "SCN": 5136972
    }
}
```
The following example shows a `SELECT` event for your SQL Server database.  

```
{
    "type": "DatabaseActivityMonitoringRecord",
    "clusterId": "",
    "instanceId": "db-4JCWQLUZVFYP7DIWP6JVQ77O3Q",
    "databaseActivityEventList": [
        {
            "class": "TABLE",
            "clientApplication": "Microsoft SQL Server Management Studio - Query",
            "command": "SELECT",
            "commandText": "select * from [testDB].[dbo].[TestTable]",
            "databaseName": "testDB",
            "dbProtocol": "SQLSERVER",
            "dbUserName": "test",
            "endTime": null,
            "errorMessage": null,
            "exitCode": 1,
            "logTime": "2022-10-06 21:24:59.9422268+00",
            "netProtocol": null,
            "objectName": "TestTable",
            "objectType": "TABLE",
            "paramList": null,
            "pid": null,
            "remoteHost": "local machine",
            "remotePort": null,
            "rowCount": 0,
            "serverHost": "172.31.30.159",
            "serverType": "SQLSERVER",
            "serverVersion": "15.00.4073.23.v1.R1",
            "serviceName": "sqlserver-ee",
            "sessionId": 62,
            "startTime": null,
            "statementId": "0x03baed90412f564fad640ebe51f89b99",
            "substatementId": 1,
            "transactionId": "4532935",
            "type": "record",
            "engineNativeAuditFields": {
                "target_database_principal_id": 0,
                "target_server_principal_id": 0,
                "target_database_principal_name": "",
                "server_principal_id": 2,
                "user_defined_information": "",
                "response_rows": 0,
                "database_principal_name": "dbo",
                "target_server_principal_name": "",
                "schema_name": "dbo",
                "is_column_permission": true,
                "object_id": 581577110,
                "server_instance_name": "EC2AMAZ-NFUJJNO",
                "target_server_principal_sid": null,
                "additional_information": "",
                "duration_milliseconds": 0,
                "permission_bitmask": "0x00000000000000000000000000000001",
                "data_sensitivity_information": "",
                "session_server_principal_name": "test",
                "connection_id": "AD3A5084-FB83-45C1-8334-E923459A8109",
                "audit_schema_version": 1,
                "database_principal_id": 1,
                "server_principal_sid": "0x010500000000000515000000bdc2795e2d0717901ba6998cf4010000",
                "user_defined_event_id": 0,
                "host_name": "EC2AMAZ-NFUJJNO"
            }
        }
    ]
}
```

## DatabaseActivityMonitoringRecords JSON object
<a name="DBActivityStreams.AuditLog.DatabaseActivityMonitoringRecords"></a>

The database activity event records are in a JSON object that contains the following information.



| JSON Field | Data Type | Description | 
| --- | --- | --- | 
|  `type`  | string |  The type of JSON record. The value is `DatabaseActivityMonitoringRecords`.  | 
| `version` | string |  The version of the database activity monitoring records. RDS for Oracle uses version 1.3, and RDS for SQL Server uses version 1.4. These versions introduce the `engineNativeAuditFields` JSON object.  | 
|  [databaseActivityEvents](#DBActivityStreams.AuditLog.databaseActivityEvents)  | string |  An encrypted string that contains the activity events.  | 
| `key` | string | An encrypted data key that you use to decrypt the [databaseActivityEventList JSON array](DBActivityStreams.AuditLog.databaseActivityEventList.md).  | 

## databaseActivityEvents JSON object
<a name="DBActivityStreams.AuditLog.databaseActivityEvents"></a>

The `databaseActivityEvents` JSON object contains the following information.

### Top-level fields in JSON record
<a name="DBActivityStreams.AuditLog.topLevel"></a>

 Each event in the audit log is wrapped inside a record in JSON format. This record contains the following fields. 

**type**  
 This field always has the value `DatabaseActivityMonitoringRecords`. 

**version**  
 This field represents the version of the database activity stream data protocol or contract. It defines which fields are available.

**databaseActivityEvents**  
 An encrypted string representing one or more activity events. It's represented as a base64-encoded byte array. When you decrypt the string, the result is a record in JSON format with fields as shown in the examples in this section.

**key**  
 The data key used to encrypt the `databaseActivityEvents` string. The data key is itself encrypted under the AWS KMS key that you provided when you started the database activity stream.

 The following example shows the format of this record.

```
{
  "type":"DatabaseActivityMonitoringRecords",
  "version":"1.3",
  "databaseActivityEvents":"encrypted audit records",
  "key":"encrypted key"
}
```

The following example shows the same record in version 1.4 format, which RDS for SQL Server uses.

```
{
  "type":"DatabaseActivityMonitoringRecords",
  "version":"1.4",
  "databaseActivityEvents":"encrypted audit records",
  "key":"encrypted key"
}
```

Take the following steps to decrypt the contents of the `databaseActivityEvents` field:

1.  Decrypt the value in the `key` JSON field using the KMS key that you provided when starting the database activity stream. Doing so returns the data encryption key in clear text.

1.  Base64-decode the value in the `databaseActivityEvents` JSON field to obtain the ciphertext, in binary format, of the audit payload. 

1.  Decrypt the binary ciphertext with the data encryption key that you decrypted in the first step.

1.  Decompress the decrypted payload. 
   +  The encrypted payload is in the `databaseActivityEvents` field. 
   +  The `databaseActivityEventList` field contains an array of audit records. The `type` fields in the array can be `record` or `heartbeat`. 
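
The steps above can be sketched in Python. This is a minimal sketch, not a complete consumer: the KMS and envelope decryption calls are passed in as callables because they depend on your SDK setup (for example, AWS KMS `Decrypt` and the AWS Encryption SDK), and the helper names here are hypothetical.

```python
import base64
import gzip
import json

def decode_database_activity_events(record, decrypt_data_key, decrypt_payload):
    """Apply the four decryption steps to one record.

    decrypt_data_key and decrypt_payload are hypothetical callables that
    wrap your KMS Decrypt call and envelope decryption, respectively.
    """
    # Step 1: decrypt the data key with the KMS key you provided.
    data_key = decrypt_data_key(base64.b64decode(record["key"]))
    # Step 2: base64-decode the payload to get the binary ciphertext.
    ciphertext = base64.b64decode(record["databaseActivityEvents"])
    # Step 3: decrypt the ciphertext with the clear-text data key.
    compressed = decrypt_payload(ciphertext, data_key)
    # Step 4: decompress, then parse the JSON audit payload.
    return json.loads(gzip.decompress(compressed))
```
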

The audit log activity event record is a JSON object that contains the following information.



| JSON Field | Data Type | Description | 
| --- | --- | --- | 
|  `type`  | string |  The type of JSON record. The value is `DatabaseActivityMonitoringRecord`.  | 
| instanceId | string | The DB instance resource identifier. It corresponds to the DB instance attribute DbiResourceId. | 
|  [databaseActivityEventList JSON array](DBActivityStreams.AuditLog.databaseActivityEventList.md)   | string |  An array of activity audit records or heartbeat messages.  | 
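
Because the `databaseActivityEventList` array mixes audit records with heartbeat messages, a consumer typically filters on each event's `type` field. A minimal sketch:

```python
def audit_records(database_activity_event_list):
    """Yield only audit records, skipping heartbeat messages."""
    for event in database_activity_event_list:
        if event.get("type") == "record":
            yield event
```
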

# databaseActivityEventList JSON array for database activity streams
<a name="DBActivityStreams.AuditLog.databaseActivityEventList"></a>

The audit log payload is an encrypted `databaseActivityEventList` JSON array. The following tables list, in alphabetical order, the fields for each activity event in the decrypted `databaseActivityEventList` array of an audit log.

When unified auditing is enabled in Oracle Database, audit records are written to the unified audit trail. The `UNIFIED_AUDIT_TRAIL` view retrieves the audit records from this audit trail and displays them in tabular form. When you start a database activity stream, a column in `UNIFIED_AUDIT_TRAIL` maps to a field in the `databaseActivityEventList` array.

**Important**  
The event structure is subject to change. Amazon RDS might add new fields to activity events in the future. In applications that parse the JSON data, make sure that your code can ignore or take appropriate actions for unknown field names. 
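
For example, a parser can separate the fields it recognizes from any it doesn't, rather than failing on new field names. The `KNOWN_FIELDS` set below is an illustrative subset, not the full field list:

```python
import json

# Fields this consumer understands; anything else is reported rather than
# treated as an error, so fields added by Amazon RDS later won't break parsing.
KNOWN_FIELDS = {"class", "command", "commandText", "dbUserName", "logTime", "type"}

def parse_activity_event(raw_json):
    """Split an activity event into (known fields, unknown field names)."""
    event = json.loads(raw_json)
    known = {k: v for k, v in event.items() if k in KNOWN_FIELDS}
    unknown = sorted(set(event) - KNOWN_FIELDS)
    return known, unknown
```
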

## databaseActivityEventList fields for Amazon RDS for Oracle
<a name="DBActivityStreams.AuditLog.databaseActivityEventList.ro"></a>

The following are `databaseActivityEventList` fields for Amazon RDS for Oracle.


| Field | Data Type | Source | Description | 
| --- | --- | --- | --- | 
|  `class`  |  string  |  `AUDIT_TYPE` column in `UNIFIED_AUDIT_TRAIL`  |  The class of activity event. This corresponds to the `AUDIT_TYPE` column in the `UNIFIED_AUDIT_TRAIL` view. Valid values for Amazon RDS for Oracle are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/DBActivityStreams.AuditLog.databaseActivityEventList.html) For more information, see [UNIFIED\_AUDIT\_TRAIL](https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/UNIFIED_AUDIT_TRAIL.html#GUID-B7CE1C02-2FD4-47D6-80AA-CF74A60CDD1D) in the Oracle documentation.  | 
|  `clientApplication`  |  string  |  `CLIENT_PROGRAM_NAME` in `UNIFIED_AUDIT_TRAIL`  |  The application the client used to connect as reported by the client. The client doesn't have to provide this information, so the value can be null. A sample value is `JDBC Thin Client`.  | 
|  `command`  |  string  |  `ACTION_NAME` column in `UNIFIED_AUDIT_TRAIL`  |  Name of the action executed by the user. To understand the complete action, read both the command name and the `AUDIT_TYPE` value. A sample value is `ALTER DATABASE`.  | 
|  `commandText`  |  string  |  `SQL_TEXT` column in `UNIFIED_AUDIT_TRAIL`  |  The SQL statement associated with the event. A sample value is `ALTER DATABASE BEGIN BACKUP`.  | 
|  `databaseName`  |  string  |  `NAME` column in `V$DATABASE`  |  The name of the database.  | 
|  `dbid`  |  number  |  `DBID` column in `UNIFIED_AUDIT_TRAIL`  |  Numerical identifier for the database. A sample value is `1559204751`.  | 
|  `dbProtocol`  |  string  |  N/A  |  The database protocol. The value is `oracle`.  | 
|  `dbUserName`  |  string  |  `DBUSERNAME` column in `UNIFIED_AUDIT_TRAIL`  |  Name of the database user whose actions were audited. A sample value is `RDSADMIN`.  | 
|  `endTime`  |  string  |  N/A  |  This field isn't used for RDS for Oracle and is always null.  | 
|  `engineNativeAuditFields`  |  object  |  `UNIFIED_AUDIT_TRAIL`  |  By default, this object is empty. When you start the activity stream with the `--engine-native-audit-fields-included` option, this object includes the following columns and their values: <pre>ADDITIONAL_INFO<br />APPLICATION_CONTEXTS<br />AUDIT_OPTION<br />AUTHENTICATION_TYPE<br />CLIENT_IDENTIFIER<br />CURRENT_USER<br />DBLINK_INFO<br />DBPROXY_USERNAME<br />DIRECT_PATH_NUM_COLUMNS_LOADED<br />DP_BOOLEAN_PARAMETERS1<br />DP_TEXT_PARAMETERS1<br />DV_ACTION_CODE<br />DV_ACTION_NAME<br />DV_ACTION_OBJECT_NAME<br />DV_COMMENT<br />DV_EXTENDED_ACTION_CODE<br />DV_FACTOR_CONTEXT<br />DV_GRANTEE<br />DV_OBJECT_STATUS<br />DV_RETURN_CODE<br />DV_RULE_SET_NAME<br />ENTRY_ID<br />EXCLUDED_OBJECT<br />EXCLUDED_SCHEMA<br />EXCLUDED_USER<br />EXECUTION_ID<br />EXTERNAL_USERID<br />FGA_POLICY_NAME<br />GLOBAL_USERID<br />INSTANCE_ID<br />KSACL_SERVICE_NAME<br />KSACL_SOURCE_LOCATION<br />KSACL_USER_NAME<br />NEW_NAME<br />NEW_SCHEMA<br />OBJECT_EDITION<br />OBJECT_PRIVILEGES<br />OLS_GRANTEE<br />OLS_LABEL_COMPONENT_NAME<br />OLS_LABEL_COMPONENT_TYPE<br />OLS_MAX_READ_LABEL<br />OLS_MAX_WRITE_LABEL<br />OLS_MIN_WRITE_LABEL<br />OLS_NEW_VALUE<br />OLS_OLD_VALUE<br />OLS_PARENT_GROUP_NAME<br />OLS_POLICY_NAME<br />OLS_PRIVILEGES_GRANTED<br />OLS_PRIVILEGES_USED<br />OLS_PROGRAM_UNIT_NAME<br />OLS_STRING_LABEL<br />OS_USERNAME<br />PROTOCOL_ACTION_NAME<br />PROTOCOL_MESSAGE<br />PROTOCOL_RETURN_CODE<br />PROTOCOL_SESSION_ID<br />PROTOCOL_USERHOST<br />PROXY_SESSIONID<br />RLS_INFO<br />RMAN_DEVICE_TYPE<br />RMAN_OBJECT_TYPE<br />RMAN_OPERATION<br />RMAN_SESSION_RECID<br />RMAN_SESSION_STAMP<br />ROLE<br />SCN<br />SYSTEM_PRIVILEGE<br />SYSTEM_PRIVILEGE_USED<br />TARGET_USER<br />TERMINAL<br />UNIFIED_AUDIT_POLICIES<br />USERHOST<br />XS_CALLBACK_EVENT_TYPE<br />XS_COOKIE<br />XS_DATASEC_POLICY_NAME<br />XS_ENABLED_ROLE<br />XS_ENTITY_TYPE<br />XS_INACTIVITY_TIMEOUT<br />XS_NS_ATTRIBUTE<br />XS_NS_ATTRIBUTE_NEW_VAL<br />XS_NS_ATTRIBUTE_OLD_VAL<br />XS_NS_NAME<br />XS_PACKAGE_NAME<br />XS_PROCEDURE_NAME<br />XS_PROXY_USER_NAME<br />XS_SCHEMA_NAME<br />XS_SESSIONID<br />XS_TARGET_PRINCIPAL_NAME<br />XS_USER_NAME</pre> For more information, see [UNIFIED\_AUDIT\_TRAIL](https://docs.oracle.com/database/121/REFRN/GUID-B7CE1C02-2FD4-47D6-80AA-CF74A60CDD1D.htm#REFRN29162) in the Oracle Database documentation.  | 
|  `errorMessage`  |  string  |  N/A  |  This field isn't used for RDS for Oracle and is always null.  | 
|  `exitCode`  |  number  |  `RETURN_CODE` column in `UNIFIED_AUDIT_TRAIL`  |  Oracle Database error code generated by the action. If the action succeeded, the value is `0`.  | 
|  `logTime`  |  string  |  `EVENT_TIMESTAMP_UTC` column in `UNIFIED_AUDIT_TRAIL`  |  Timestamp of the creation of the audit trail entry. A sample value is `2020-11-27 06:56:14.981404`.  | 
|  `netProtocol`  |  string  |  `AUTHENTICATION_TYPE` column in `UNIFIED_AUDIT_TRAIL`  |  The network communication protocol. A sample value is `TCP`.  | 
|  `objectName`  |  string  |  `OBJECT_NAME` column in `UNIFIED_AUDIT_TRAIL`  |  The name of the object affected by the action. A sample value is `employees`.  | 
|  `objectType`  |  string  |  `OBJECT_SCHEMA` column in `UNIFIED_AUDIT_TRAIL`  |  The schema name of object affected by the action. A sample value is `hr`.  | 
|  `paramList`  |  list  |  `SQL_BINDS` column in `UNIFIED_AUDIT_TRAIL`  |  The list of bind variables, if any, associated with `SQL_TEXT`. A sample value is `parameter_1,parameter_2`.  | 
|  `pid`  |  number  |  `OS_PROCESS` column in `UNIFIED_AUDIT_TRAIL`  |  Operating system process identifier of the Oracle database process. A sample value is `22396`.  | 
|  `remoteHost`  |  string  |  `AUTHENTICATION_TYPE` column in `UNIFIED_AUDIT_TRAIL`  |  Either the client IP address or name of the host from which the session was spawned. A sample value is `123.456.789.123`.  | 
|  `remotePort`  |  string  |  `AUTHENTICATION_TYPE` column in `UNIFIED_AUDIT_TRAIL`  |  The client port number. A typical value in Oracle Database environments is `1521`.  | 
|  `rowCount`  |  number  |  N/A  |  This field isn't used for RDS for Oracle and is always null.  | 
|  `serverHost`  |  string  |  Database host  |  The IP address of the database server host. A sample value is `123.456.789.123`.  | 
|  `serverType`  |  string  |  N/A  |  The database server type. The value is always `ORACLE`.  | 
|  `serverVersion`  |  string  |  Database host  |  The Amazon RDS for Oracle version, Release Update (RU), and Release Update Revision (RUR). A sample value is `19.0.0.0.ru-2020-01.rur-2020-01.r1.EE.3`.  | 
|  `serviceName`  |  string  |  Database host  |  The name of the service. A sample value is `oracle-ee`.   | 
|  `sessionId`  |  number  |  `SESSIONID` column in `UNIFIED_AUDIT_TRAIL`  |  The session identifier of the audit. An example is `1894327130`.  | 
|  `startTime`  |  string  |  N/A  |  This field isn't used for RDS for Oracle and is always null.  | 
|  `statementId`  |  number  |  `STATEMENT_ID` column in `UNIFIED_AUDIT_TRAIL`  |  Numeric ID for each statement run. A statement can cause many actions. A sample value is `142197`.  | 
|  `substatementId`  |  N/A  |  N/A  |  This field isn't used for RDS for Oracle and is always null.  | 
|  `transactionId`  |  string  |  `TRANSACTION_ID` column in `UNIFIED_AUDIT_TRAIL`  |  The identifier of the transaction in which the object is modified. A sample value is `02000800D5030000`.  | 

## databaseActivityEventList fields for Amazon RDS for SQL Server
<a name="DBActivityStreams.AuditLog.databaseActivityEventList.rss"></a>

The following are `databaseActivityEventList` fields for Amazon RDS for SQL Server.


| Field | Data Type | Source | Description | 
| --- | --- | --- | --- | 
|  `class`  |  string  |  `sys.fn_get_audit_file.class_type` mapped to `sys.dm_audit_class_type_map.class_type_desc`  |  The class of activity event. For more information, see [SQL Server Audit (Database Engine)](https://learn.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-database-engine?view=sql-server-ver16) in the Microsoft documentation.  | 
|  `clientApplication`  |  string  |  `sys.fn_get_audit_file.application_name`  |  The application the client used to connect, as reported by the client (SQL Server version 14 and higher). This field is null in SQL Server version 13.  | 
|  `command`  |  string  |  `sys.fn_get_audit_file.action_id` mapped to `sys.dm_audit_actions.name`  |  The general category of the SQL statement. The value for this field depends on the value of the class.  | 
|  `commandText`  |  string  |  `sys.fn_get_audit_file.statement`  |  This field indicates the SQL statement.  | 
|  `databaseName`  |  string  |  `sys.fn_get_audit_file.database_name`  |  Name of the database.  | 
|  `dbProtocol`  |  string  |  N/A  |  The database protocol. This value is `SQLSERVER`.  | 
|  `dbUserName`  |  string  |  `sys.fn_get_audit_file.server_principal_name`  |  The database user for the client authentication.  | 
|  `endTime`  |  string  |  N/A  |  This field isn't used by Amazon RDS for SQL Server and the value is null.  | 
|  `engineNativeAuditFields`  |  object  |  Each field in `sys.fn_get_audit_file` that isn't listed in this column.  |  By default, this object is empty. When you start the activity stream with the `--engine-native-audit-fields-included` option, this object includes the native engine audit fields that aren't already returned by the fields in this table.  | 
|  `errorMessage`  |  string  |  N/A  |  This field isn't used by Amazon RDS for SQL Server and the value is null.  | 
|  `exitCode`  |  integer  |  `sys.fn_get_audit_file.succeeded`  |  Indicates whether the action that started the event succeeded. This field can't be null. For all the events except login events, this field reports whether the permission check succeeded or failed, but not whether the operation succeeded or failed. Values include: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/DBActivityStreams.AuditLog.databaseActivityEventList.html)  | 
|  `logTime`  |  string  |  `sys.fn_get_audit_file.event_time`  |  The event timestamp that is recorded by the SQL Server.  | 
|  `netProtocol`  |  string  |  N/A  |  This field isn't used by Amazon RDS for SQL Server and the value is null.  | 
|  `objectName`  |  string  |  `sys.fn_get_audit_file.object_name`  |  The name of the database object if the SQL statement is operating on an object.  | 
|  `objectType`  |  string  |  `sys.fn_get_audit_file.class_type` mapped to `sys.dm_audit_class_type_map.class_type_desc`  |  The database object type if the SQL statement is operating on an object type.  | 
|  `paramList`  |  string  |  N/A  |  This field isn't used by Amazon RDS for SQL Server and the value is null.  | 
|  `pid`  |  integer  |  N/A  |  This field isn't used by Amazon RDS for SQL Server and the value is null.  | 
|  `remoteHost`  |  string  |  `sys.fn_get_audit_file.client_ip`  |  The IP address or hostname of the client that issued the SQL statement (SQL Server version 14 and higher). This field is null in SQL Server version 13.  | 
|  `remotePort`  |  integer  |  N/A  |  This field isn't used by Amazon RDS for SQL Server and the value is null.  | 
|  `rowCount`  |  integer  |  `sys.fn_get_audit_file.affected_rows`  |  The number of table rows affected by the SQL statement (SQL Server version 14 and higher). This field is null in SQL Server version 13.  | 
|  `serverHost`  |  string  |  Database Host  |  The IP address of the host database server.  | 
|  `serverType`  |  string  |  N/A  |  The database server type. The value is `SQLSERVER`.  | 
|  `serverVersion`  |  string  |  Database Host  |  The database server version, for example, 15.00.4073.23.v1.R1 for SQL Server 2019.  | 
|  `serviceName`  |  string  |  Database Host  |  The name of the service. An example value is `sqlserver-ee`.  | 
|  `sessionId`  |  integer  |  `sys.fn_get_audit_file.session_id`  |  Unique identifier of the session.  | 
|  `startTime`  |  string  |  N/A  |  This field isn't used by Amazon RDS for SQL Server and the value is null.  | 
|  `statementId`  |  string  |  `sys.fn_get_audit_file.sequence_group_id`  |  A unique identifier for the client's SQL statement. The identifier is different for each event that is generated. A sample value is `0x38eaf4156267184094bb82071aaab644`.  | 
|  `substatementId`  |  integer  |  `sys.fn_get_audit_file.sequence_number`  |  An identifier to determine the sequence number for a statement. This identifier helps when large records are split into multiple records.  | 
|  `transactionId`  |  integer  |  `sys.fn_get_audit_file.transaction_id`  |  An identifier of a transaction. If there aren't any active transactions, the value is zero.  | 
|  `type`  |  string  |  Database activity stream generated  |  The type of event. The values are `record` or `heartbeat`.  | 
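
As the `substatementId` description notes, large records can be split into multiple records. A consumer might reassemble the `commandText` for a statement by grouping on `statementId` and ordering chunks by `substatementId`. This is a hypothetical sketch of that reassembly, not an official API:

```python
from collections import defaultdict

def reassemble_statements(events):
    """Join commandText chunks of split records, grouped by statementId
    and ordered by substatementId."""
    chunks = defaultdict(list)
    for event in events:
        chunks[event["statementId"]].append(
            (event["substatementId"], event["commandText"])
        )
    return {
        statement_id: "".join(text for _, text in sorted(parts))
        for statement_id, parts in chunks.items()
    }
```
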

# Processing a database activity stream using the AWS SDK
<a name="DBActivityStreams.CodeExample"></a>

You can programmatically process an activity stream by using the AWS SDK. The following are fully functioning Java and Python examples that show how to process Database Activity Streams records for a DB instance.

------
#### [ Java ]

```
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import java.security.Security;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.zip.GZIPInputStream;

import javax.crypto.Cipher;
import javax.crypto.NoSuchPaddingException;
import javax.crypto.spec.SecretKeySpec;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.encryptionsdk.AwsCrypto;
import com.amazonaws.encryptionsdk.CryptoInputStream;
import com.amazonaws.encryptionsdk.jce.JceMasterKey;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownReason;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.Builder;
import com.amazonaws.services.kinesis.model.Record;
import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.DecryptRequest;
import com.amazonaws.services.kms.model.DecryptResult;
import com.amazonaws.util.Base64;
import com.amazonaws.util.IOUtils;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.annotations.SerializedName;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class DemoConsumer {

    private static final String STREAM_NAME = "aws-rds-das-[instance-external-resource-id]"; // aws-rds-das-db-ABCD123456
    private static final String APPLICATION_NAME = "AnyApplication"; //unique application name for dynamo table generation that holds kinesis shard tracking
    private static final String AWS_ACCESS_KEY = "[AWS_ACCESS_KEY_TO_ACCESS_KINESIS]";
    private static final String AWS_SECRET_KEY = "[AWS_SECRET_KEY_TO_ACCESS_KINESIS]";
    private static final String RESOURCE_ID = "[external-resource-id]"; // db-ABCD123456
    private static final String REGION_NAME = "[region-name]"; //us-east-1, us-east-2...
    private static final BasicAWSCredentials CREDENTIALS = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY);
    private static final AWSStaticCredentialsProvider CREDENTIALS_PROVIDER = new AWSStaticCredentialsProvider(CREDENTIALS);

    private static final AwsCrypto CRYPTO = new AwsCrypto();
    private static final AWSKMS KMS = AWSKMSClientBuilder.standard()
            .withRegion(REGION_NAME)
            .withCredentials(CREDENTIALS_PROVIDER).build();

    class Activity {
        String type;
        String version;
        String databaseActivityEvents;
        String key;
    }

    class ActivityEvent {
        @SerializedName("class") String _class;
        String clientApplication;
        String command;
        String commandText;
        String databaseName;
        String dbProtocol;
        String dbUserName;
        String endTime;
        String errorMessage;
        String exitCode;
        String logTime;
        String netProtocol;
        String objectName;
        String objectType;
        List<String> paramList;
        String pid;
        String remoteHost;
        String remotePort;
        String rowCount;
        String serverHost;
        String serverType;
        String serverVersion;
        String serviceName;
        String sessionId;
        String startTime;
        String statementId;
        String substatementId;
        String transactionId;
        String type;
    }

    class ActivityRecords {
        String type;
        String clusterId; // note that clusterId will contain an empty string on RDS Oracle and RDS SQL Server
        String instanceId;
        List<ActivityEvent> databaseActivityEventList;
    }

    static class RecordProcessorFactory implements IRecordProcessorFactory {
        @Override
        public IRecordProcessor createProcessor() {
            return new RecordProcessor();
        }
    }

    static class RecordProcessor implements IRecordProcessor {

        private static final long BACKOFF_TIME_IN_MILLIS = 3000L;
        private static final int PROCESSING_RETRIES_MAX = 10;
        private static final long CHECKPOINT_INTERVAL_MILLIS = 60000L;
        private static final Gson GSON = new GsonBuilder().serializeNulls().create();

        private static final Cipher CIPHER;
        static {
            Security.insertProviderAt(new BouncyCastleProvider(), 1);
            try {
                CIPHER = Cipher.getInstance("AES/GCM/NoPadding", "BC");
            } catch (NoSuchAlgorithmException | NoSuchPaddingException | NoSuchProviderException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        private long nextCheckpointTimeInMillis;

        @Override
        public void initialize(String shardId) {
        }

        @Override
        public void processRecords(final List<Record> records, final IRecordProcessorCheckpointer checkpointer) {
            for (final Record record : records) {
                processSingleBlob(record.getData());
            }

            if (System.currentTimeMillis() > nextCheckpointTimeInMillis) {
                checkpoint(checkpointer);
                nextCheckpointTimeInMillis = System.currentTimeMillis() + CHECKPOINT_INTERVAL_MILLIS;
            }
        }

        @Override
        public void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason) {
            if (reason == ShutdownReason.TERMINATE) {
                checkpoint(checkpointer);
            }
        }

        private void processSingleBlob(final ByteBuffer bytes) {
            try {
                // JSON $Activity
                final Activity activity = GSON.fromJson(new String(bytes.array(), StandardCharsets.UTF_8), Activity.class);

                // Base64.Decode
                final byte[] decoded = Base64.decode(activity.databaseActivityEvents);
                final byte[] decodedDataKey = Base64.decode(activity.key);

                Map<String, String> context = new HashMap<>();
                context.put("aws:rds:db-id", RESOURCE_ID);

                // Decrypt
                final DecryptRequest decryptRequest = new DecryptRequest()
                        .withCiphertextBlob(ByteBuffer.wrap(decodedDataKey)).withEncryptionContext(context);
                final DecryptResult decryptResult = KMS.decrypt(decryptRequest);
                final byte[] decrypted = decrypt(decoded, getByteArray(decryptResult.getPlaintext()));

                // GZip Decompress
                final byte[] decompressed = decompress(decrypted);
                // JSON $ActivityRecords
                final ActivityRecords activityRecords = GSON.fromJson(new String(decompressed, StandardCharsets.UTF_8), ActivityRecords.class);

                // Iterate through $ActivityEvents
                for (final ActivityEvent event : activityRecords.databaseActivityEventList) {
                    System.out.println(GSON.toJson(event));
                }
            } catch (Exception e) {
                // Handle error.
                e.printStackTrace();
            }
        }

        private static byte[] decompress(final byte[] src) throws IOException {
            ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(src);
            GZIPInputStream gzipInputStream = new GZIPInputStream(byteArrayInputStream);
            return IOUtils.toByteArray(gzipInputStream);
        }

        private void checkpoint(IRecordProcessorCheckpointer checkpointer) {
            for (int i = 0; i < PROCESSING_RETRIES_MAX; i++) {
                try {
                    checkpointer.checkpoint();
                    break;
                } catch (ShutdownException se) {
                    // Ignore checkpoint if the processor instance has been shutdown (fail over).
                    System.out.println("Caught shutdown exception, skipping checkpoint." + se);
                    break;
                } catch (ThrottlingException e) {
                    // Backoff and re-attempt checkpoint upon transient failures
                    if (i >= (PROCESSING_RETRIES_MAX - 1)) {
                        System.out.println("Checkpoint failed after " + (i + 1) + " attempts. " + e);
                        break;
                    } else {
                        System.out.println("Transient issue when checkpointing - attempt " + (i + 1) + " of " + PROCESSING_RETRIES_MAX + e);
                    }
                } catch (InvalidStateException e) {
                    // This indicates an issue with the DynamoDB table (check for table, provisioned IOPS).
                    System.out.println("Cannot save checkpoint to the DynamoDB table used by the Amazon Kinesis Client Library." + e);
                    break;
                }
                try {
                    Thread.sleep(BACKOFF_TIME_IN_MILLIS);
                } catch (InterruptedException e) {
                    System.out.println("Interrupted sleep" + e);
                }
            }
        }
    }

    private static byte[] decrypt(final byte[] decoded, final byte[] decodedDataKey) throws IOException {
        // Create a JCE master key provider using the random key and an AES-GCM encryption algorithm
        final JceMasterKey masterKey = JceMasterKey.getInstance(new SecretKeySpec(decodedDataKey, "AES"),
                "BC", "DataKey", "AES/GCM/NoPadding");
        try (final CryptoInputStream<JceMasterKey> decryptingStream = CRYPTO.createDecryptingStream(masterKey, new ByteArrayInputStream(decoded));
             final ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            IOUtils.copy(decryptingStream, out);
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws Exception {
        final String workerId = InetAddress.getLocalHost().getCanonicalHostName() + ":" + UUID.randomUUID();
        final KinesisClientLibConfiguration kinesisClientLibConfiguration =
                new KinesisClientLibConfiguration(APPLICATION_NAME, STREAM_NAME, CREDENTIALS_PROVIDER, workerId);
        kinesisClientLibConfiguration.withInitialPositionInStream(InitialPositionInStream.LATEST);
        kinesisClientLibConfiguration.withRegionName(REGION_NAME);
        final Worker worker = new Builder()
                .recordProcessorFactory(new RecordProcessorFactory())
                .config(kinesisClientLibConfiguration)
                .build();

        System.out.printf("Running %s to process stream %s as worker %s...\n", APPLICATION_NAME, STREAM_NAME, workerId);

        try {
            worker.run();
        } catch (Throwable t) {
            System.err.println("Caught throwable while processing data.");
            t.printStackTrace();
            System.exit(1);
        }
        System.exit(0);
    }

    private static byte[] getByteArray(final ByteBuffer b) {
        byte[] byteArray = new byte[b.remaining()];
        b.get(byteArray);
        return byteArray;
    }
}
```

------
#### [ Python ]

```
import base64
import json
import zlib
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy
from aws_encryption_sdk.internal.crypto import WrappingKey
from aws_encryption_sdk.key_providers.raw import RawMasterKeyProvider
from aws_encryption_sdk.identifiers import WrappingAlgorithm, EncryptionKeyType
import boto3

REGION_NAME = '<region>'                    # us-east-1
RESOURCE_ID = '<external-resource-id>'      # db-ABCD123456
STREAM_NAME = 'aws-rds-das-' + RESOURCE_ID  # aws-rds-das-db-ABCD123456

enc_client = aws_encryption_sdk.EncryptionSDKClient(commitment_policy=CommitmentPolicy.FORBID_ENCRYPT_ALLOW_DECRYPT)

class MyRawMasterKeyProvider(RawMasterKeyProvider):
    provider_id = "BC"

    def __new__(cls, *args, **kwargs):
        obj = super(RawMasterKeyProvider, cls).__new__(cls)
        return obj

    def __init__(self, plain_key):
        RawMasterKeyProvider.__init__(self)
        self.wrapping_key = WrappingKey(wrapping_algorithm=WrappingAlgorithm.AES_256_GCM_IV12_TAG16_NO_PADDING,
                                        wrapping_key=plain_key, wrapping_key_type=EncryptionKeyType.SYMMETRIC)

    def _get_raw_key(self, key_id):
        return self.wrapping_key


def decrypt_payload(payload, data_key):
    my_key_provider = MyRawMasterKeyProvider(data_key)
    my_key_provider.add_master_key("DataKey")
    decrypted_plaintext, header = enc_client.decrypt(
        source=payload,
        materials_manager=aws_encryption_sdk.materials_managers.default.DefaultCryptoMaterialsManager(master_key_provider=my_key_provider))
    return decrypted_plaintext


def decrypt_decompress(payload, key):
    decrypted = decrypt_payload(payload, key)
    return zlib.decompress(decrypted, zlib.MAX_WBITS + 16)


def main():
    session = boto3.session.Session()
    kms = session.client('kms', region_name=REGION_NAME)
    kinesis = session.client('kinesis', region_name=REGION_NAME)

    response = kinesis.describe_stream(StreamName=STREAM_NAME)
    shard_iters = []
    for shard in response['StreamDescription']['Shards']:
        shard_iter_response = kinesis.get_shard_iterator(StreamName=STREAM_NAME, ShardId=shard['ShardId'],
                                                         ShardIteratorType='LATEST')
        shard_iters.append(shard_iter_response['ShardIterator'])

    while len(shard_iters) > 0:
        next_shard_iters = []
        for shard_iter in shard_iters:
            response = kinesis.get_records(ShardIterator=shard_iter, Limit=10000)
            for record in response['Records']:
                record_data = record['Data']
                record_data = json.loads(record_data)
                payload_decoded = base64.b64decode(record_data['databaseActivityEvents'])
                data_key_decoded = base64.b64decode(record_data['key'])
                data_key_decrypt_result = kms.decrypt(CiphertextBlob=data_key_decoded,
                                                      EncryptionContext={'aws:rds:db-id': RESOURCE_ID})
                print(decrypt_decompress(payload_decoded, data_key_decrypt_result['Plaintext']))
            if 'NextShardIterator' in response:
                next_shard_iters.append(response['NextShardIterator'])
        shard_iters = next_shard_iters


if __name__ == '__main__':
    main()
```
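
One detail worth calling out in the example above: `zlib.MAX_WBITS + 16` tells zlib to expect a gzip header, because the decrypted payload is gzip-compressed JSON. The following self-contained sketch exercises just that decompression step; the payload here is fabricated locally, not a real stream record.

```python
import gzip
import zlib

# The decrypted activity stream payload is gzip-compressed JSON;
# MAX_WBITS + 16 selects gzip-format decoding in zlib.decompress.
payload = gzip.compress(b'{"type": "DatabaseActivityMonitoringRecords"}')
plain = zlib.decompress(payload, zlib.MAX_WBITS + 16)
print(plain.decode())  # {"type": "DatabaseActivityMonitoringRecords"}
```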

------

# IAM policy examples for database activity streams
<a name="DBActivityStreams.ManagingAccess"></a>

Any user with appropriate AWS Identity and Access Management (IAM) role privileges for database activity streams can create, start, stop, and modify the activity stream settings for a DB instance. These actions are included in the audit log of the stream. For best compliance practices, we recommend that you don't provide these privileges to DBAs.

You set access to database activity streams using IAM policies. For more information about Amazon RDS authentication, see [Identity and access management for Amazon RDS](UsingWithRDS.IAM.md). For more information about creating IAM policies, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md). 

**Example Policy to allow configuring database activity streams**  
To give users fine-grained access to manage activity streams, use the `rds:StartActivityStream` and `rds:StopActivityStream` actions in an IAM policy. The following IAM policy example allows a user or role to configure activity streams.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ConfigureActivityStreams",
            "Effect": "Allow",
            "Action": [
                "rds:StartActivityStream",
                "rds:StopActivityStream"
            ],
            "Resource": "*"
        }
    ]
}
```
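
If you prefer to manage this policy programmatically, the document above can be built and validated locally before handing it to IAM. The following is a sketch only: the commented-out `create_policy` call assumes boto3 and configured AWS credentials, and the policy name is a placeholder.

```python
import json

# Reconstruct the activity stream policy shown above as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ConfigureActivityStreams",
            "Effect": "Allow",
            "Action": [
                "rds:StartActivityStream",
                "rds:StopActivityStream",
            ],
            "Resource": "*",
        }
    ],
}
policy_json = json.dumps(policy, indent=4)
print(policy_json)

# To create the managed policy (requires boto3 and AWS credentials):
# import boto3
# iam = boto3.client("iam")
# iam.create_policy(PolicyName="ConfigureActivityStreams",  # placeholder name
#                   PolicyDocument=policy_json)
```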

**Example Policy to allow starting database activity streams**  
The following IAM policy example allows a user or role to start activity streams.    

```
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"AllowStartActivityStreams",
            "Effect":"Allow",
            "Action":"rds:StartActivityStream",
            "Resource":"*"
        }
    ]
}
```

**Example Policy to allow stopping database activity streams**  
The following IAM policy example allows a user or role to stop activity streams.    

```
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"AllowStopActivityStreams",
            "Effect":"Allow",
            "Action":"rds:StopActivityStream",
            "Resource":"*"
        }
     ]
}
```

**Example Policy to deny starting database activity streams**  
The following IAM policy example prevents a user or role from starting activity streams.    

```
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"DenyStartActivityStreams",
            "Effect":"Deny",
            "Action":"rds:StartActivityStream",
            "Resource":"*"
        }
     ]
}
```

**Example Policy to deny stopping database activity streams**  
The following IAM policy example prevents a user or role from stopping activity streams.    

```
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Sid":"DenyStopActivityStreams",
            "Effect":"Deny",
            "Action":"rds:StopActivityStream",
            "Resource":"*"
        }
    ]
}
```

# Monitoring threats with Amazon GuardDuty RDS Protection
<a name="guard-duty-rds-protection"></a>

Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment.

GuardDuty RDS Protection analyzes and profiles login events for potential access threats to your Amazon RDS databases. When you turn on RDS Protection, GuardDuty consumes RDS login events from your RDS databases. RDS Protection monitors these events and profiles them for potential insider threats or external actors.

For more information about enabling GuardDuty RDS Protection, see [GuardDuty RDS Protection](https://docs.aws.amazon.com/guardduty/latest/ug/rds-protection.html) in the *Amazon GuardDuty User Guide*.

When RDS Protection detects a potential threat, such as an unusual pattern in successful or failed login attempts, GuardDuty generates a new finding with details about the potentially compromised database. You can view the finding details in the finding summary section in the Amazon GuardDuty console. The finding details vary based on the finding type. The primary details, resource type and resource role, determine the kind of information available for any finding. For more information about the commonly available details for findings and the finding types, see [Finding details](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings-summary.html) and [GuardDuty RDS Protection finding types](https://docs.aws.amazon.com/guardduty/latest/ug/findings-rds-protection.html) respectively in the *Amazon GuardDuty User Guide*. 

You can turn the RDS Protection feature on or off for any AWS account in any AWS Region where this feature is available. When RDS Protection isn't enabled, GuardDuty doesn't detect potentially compromised RDS databases or provide details of the compromise.
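
As a hedged sketch of what turning the feature on looks like through the API: the detector ID below is a placeholder, and `RDS_LOGIN_EVENTS` is assumed to be the GuardDuty feature name corresponding to RDS Protection. Verify both against the GuardDuty API reference before use.

```python
# Sketch: request body for enabling RDS Protection on an existing
# GuardDuty detector. DetectorId is a placeholder; RDS_LOGIN_EVENTS is
# assumed to be the feature name for RDS Protection.
request = {
    "DetectorId": "12abc34d567e8fa901bc2d34e56789f0",  # placeholder
    "Features": [
        {"Name": "RDS_LOGIN_EVENTS", "Status": "ENABLED"},
    ],
}
print(request)

# To apply it (requires boto3 and AWS credentials):
# import boto3
# boto3.client("guardduty").update_detector(**request)
```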

An existing GuardDuty account can enable RDS Protection with a 30-day trial period. For a new GuardDuty account, RDS Protection is already enabled and included in the 30-day free trial period. For more information, see [Estimating GuardDuty cost](https://docs.aws.amazon.com/guardduty/latest/ug/monitoring_costs.html) in the *Amazon GuardDuty User Guide*.

For information about the AWS Regions where GuardDuty doesn't yet support RDS Protection, see [Region-specific feature availability](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_regions.html#gd-regional-feature-availability) in the *Amazon GuardDuty User Guide*.

For information about the RDS database versions that GuardDuty RDS Protection supports, see [Supported Amazon Aurora, Amazon RDS, and Aurora Limitless databases](https://docs.aws.amazon.com/guardduty/latest/ug/rds-protection.html#rds-pro-supported-db) in the *Amazon GuardDuty User Guide*.