

# Monitoring Amazon Aurora log files
<a name="USER_LogAccess"></a>

Every RDS database engine generates logs that you can access for auditing and troubleshooting. The type of logs depends on your database engine.

You can access database logs for DB instances using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the Amazon RDS API. You can't view, watch, or download transaction logs.

**Note**  
In some cases, logs contain hidden data. Therefore, the AWS Management Console might show content in a log file, but the log file might be empty when you download it.

**Topics**
+ [Viewing and listing database log files](USER_LogAccess.Procedural.Viewing.md)
+ [Downloading a database log file](USER_LogAccess.Procedural.Downloading.md)
+ [Watching a database log file](USER_LogAccess.Procedural.Watching.md)
+ [Publishing database logs to Amazon CloudWatch Logs](USER_LogAccess.Procedural.UploadtoCloudWatch.md)
+ [Reading log file contents using REST](DownloadCompleteDBLogFile.md)
+ [Aurora MySQL database log files](USER_LogAccess.Concepts.MySQL.md)
+ [Aurora PostgreSQL database log files](USER_LogAccess.Concepts.PostgreSQL.md)

# Viewing and listing database log files
<a name="USER_LogAccess.Procedural.Viewing"></a>

You can view database log files for your Amazon Aurora DB engine by using the AWS Management Console. You can list which log files are available for download or monitoring by using the AWS CLI or Amazon RDS API. 

**Note**  
You can't view the log files for Aurora Serverless v1 DB clusters in the RDS console. However, you can view them in the Amazon CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

## Console
<a name="USER_LogAccess.CON"></a>

**To view a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.

1. Scroll down to the **Logs** section.

1. (Optional) Enter a search term to filter your results.

   The following example lists logs filtered by the text **error**.  
![\[List DB logs\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ListEventsAMS.png)

1. Choose the log that you want to view, and then choose **View**.

## AWS CLI
<a name="USER_LogAccess.CLI"></a>

To list the available database log files for a DB instance, use the AWS CLI [describe-db-log-files](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-log-files.html) command.

The following example returns a list of log files for a DB instance named `my-db-instance`.

**Example**  

```
aws rds describe-db-log-files --db-instance-identifier my-db-instance
```

## RDS API
<a name="USER_LogAccess.API"></a>

To list the available database log files for a DB instance, use the Amazon RDS API [DescribeDBLogFiles](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBLogFiles.html) action.
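
Both the CLI command and the API action return, for each log file, its name, last-written time, and size. The following sketch (illustrative only, not part of any AWS SDK) filters a `DescribeDBLogFiles`-style response down to the files written within a recent window; the sample response data is hypothetical.

```python
import time

def recent_log_files(response, max_age_hours=24):
    """Return names of log files whose LastWritten timestamp (POSIX
    milliseconds) falls within the last max_age_hours."""
    cutoff_ms = (time.time() - max_age_hours * 3600) * 1000
    return [
        f["LogFileName"]
        for f in response.get("DescribeDBLogFiles", [])
        if f["LastWritten"] >= cutoff_ms
    ]

# Hypothetical sample response, shaped like the DescribeDBLogFiles output:
sample = {
    "DescribeDBLogFiles": [
        {"LogFileName": "error/mysql-error.log",
         "LastWritten": int(time.time() * 1000), "Size": 2048},
        {"LogFileName": "error/mysql-error-running.log.2024-01-01.00",
         "LastWritten": 0, "Size": 0},
    ]
}
print(recent_log_files(sample))
```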

# Downloading a database log file
<a name="USER_LogAccess.Procedural.Downloading"></a>

You can use the AWS Management Console, AWS CLI, or API to download a database log file. 

## Console
<a name="USER_LogAccess.Procedural.Downloading.CON"></a>

**To download a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.

1. Scroll down to the **Logs** section. 

1. In the **Logs** section, choose the button next to the log that you want to download, and then choose **Download**.

1. Open the context (right-click) menu for the link provided, and then choose **Save Link As**. Enter the location where you want the log file to be saved, and then choose **Save**.  
![\[viewing log file\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/log_download2.png)

## AWS CLI
<a name="USER_LogAccess.Procedural.Downloading.CLI"></a>

To download a database log file, use the AWS CLI command [download-db-log-file-portion](https://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html). By default, this command downloads only the latest portion of a log file. However, you can download an entire file by specifying the parameter `--starting-token 0`.

The following example shows how to download the entire contents of a log file called *log/ERROR.4* and store it in a local file called *errorlog.txt*.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds download-db-log-file-portion \
    --db-instance-identifier myexampledb \
    --starting-token 0 --output text \
    --log-file-name log/ERROR.4 > errorlog.txt
```
For Windows:  

```
aws rds download-db-log-file-portion ^
    --db-instance-identifier myexampledb ^
    --starting-token 0 --output text ^
    --log-file-name log/ERROR.4 > errorlog.txt
```

## RDS API
<a name="USER_LogAccess.Procedural.Downloading.API"></a>

To download a database log file, use the Amazon RDS API [DownloadDBLogFilePortion](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DownloadDBLogFilePortion.html) action.
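
Each `DownloadDBLogFilePortion` response contains a portion of the file (`LogFileData`), a pagination token (`Marker`), and a flag indicating whether more data remains (`AdditionalDataPending`). The following sketch assembles a complete file by looping over pages; `fetch_page` is a stand-in for the real API call (for example, via an AWS SDK), and the in-memory pager below is hypothetical.

```python
def download_complete_log(fetch_page):
    """Collect every portion of a log file from a paginated fetcher.

    fetch_page(marker) must return a dict with LogFileData, Marker, and
    AdditionalDataPending, mirroring the DownloadDBLogFilePortion response.
    """
    parts, marker = [], "0"  # marker "0" starts at the beginning of the file
    while True:
        page = fetch_page(marker)
        parts.append(page["LogFileData"])
        if not page["AdditionalDataPending"]:
            return "".join(parts)
        marker = page["Marker"]

# Hypothetical in-memory pager standing in for the service:
pages = {
    "0": {"LogFileData": "line 1\n", "Marker": "2", "AdditionalDataPending": True},
    "2": {"LogFileData": "line 2\n", "Marker": "4", "AdditionalDataPending": False},
}
print(download_complete_log(lambda m: pages[m]))
```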

# Watching a database log file
<a name="USER_LogAccess.Procedural.Watching"></a>

Watching a database log file is equivalent to tailing the file on a UNIX or Linux system. You can watch a log file by using the AWS Management Console. RDS refreshes the tail of the log every 5 seconds.

**To watch a database log file**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that has the log file that you want to view.

1. Choose the **Logs & events** tab.  
![\[Choose the Logs & events tab\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/Monitoring_logsEvents.png)

1. In the **Logs** section, choose a log file, and then choose **Watch**.  
![\[Choose a log\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/Monitoring_LogsEvents_watch.png)

   RDS shows the tail of the log, as in the following MySQL example.  
![\[Tail of a log file\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/Monitoring_LogsEvents_watch_content.png)

# Publishing database logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.Procedural.UploadtoCloudWatch"></a>

In an on-premises database, the database logs reside on the file system. Amazon RDS doesn't provide host access to the database logs on the file system of your DB cluster. For this reason, Amazon RDS lets you export database logs to [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html). With CloudWatch Logs, you can perform real-time analysis of the log data. You can also store the data in highly durable storage and manage the data with the CloudWatch Logs Agent. 

**Topics**
+ [Overview of RDS integration with CloudWatch Logs](#rds-integration-cw-logs)
+ [Deciding which logs to publish to CloudWatch Logs](#engine-specific-logs)
+ [Specifying the logs to publish to CloudWatch Logs](#integrating_cloudwatchlogs.configure)
+ [Searching and filtering your logs in CloudWatch Logs](#accessing-logs-in-cloudwatch)

## Overview of RDS integration with CloudWatch Logs
<a name="rds-integration-cw-logs"></a>

In CloudWatch Logs, a *log stream* is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream. A *log group* is a group of log streams that share the same retention, monitoring, and access control settings.

Amazon Aurora continuously streams your DB cluster log records to a log group. For example, you have a log group `/aws/rds/cluster/cluster_name/log_type` for each type of log that you publish. This log group is in the same AWS Region as the database instance that generates the log.

AWS retains log data published to CloudWatch Logs for an indefinite time period unless you specify a retention period. For more information, see [Change log data retention in CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention). 
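
The log group naming pattern described above can be expressed as a small helper, shown here as an illustrative sketch (the cluster name is hypothetical):

```python
def aurora_log_group(cluster_name, log_type):
    """Build the CloudWatch Logs log group name that Aurora uses for a
    published log, following the /aws/rds/cluster/cluster_name/log_type
    pattern."""
    return f"/aws/rds/cluster/{cluster_name}/{log_type}"

print(aurora_log_group("my-aurora-cluster", "error"))
# /aws/rds/cluster/my-aurora-cluster/error
```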

## Deciding which logs to publish to CloudWatch Logs
<a name="engine-specific-logs"></a>

Each RDS database engine supports its own set of logs. To learn about the options for your database engine, review the following topics:
+ [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md)
+ [Publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs](AuroraPostgreSQL.CloudWatch.md)

## Specifying the logs to publish to CloudWatch Logs
<a name="integrating_cloudwatchlogs.configure"></a>

You specify which logs to publish in the console. Make sure that you have a service-linked role in AWS Identity and Access Management (IAM). For more information about service-linked roles, see [Using service-linked roles for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md).

**To specify the logs to publish**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Do either of the following:
   + Choose **Create database**.
   + Choose a database from the list, and then choose **Modify**.

1. In **Logs exports**, choose which logs to publish.

   The following example specifies the audit log, error log, general log, instance log, IAM database authentication error log, and slow query log for an Aurora MySQL DB cluster.

## Searching and filtering your logs in CloudWatch Logs
<a name="accessing-logs-in-cloudwatch"></a>

You can search for log entries that meet specified criteria using the CloudWatch Logs console. You can access the logs either through the RDS console, which leads you to the CloudWatch Logs console, or from the CloudWatch Logs console directly.

**To search your RDS logs using the RDS console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose a DB cluster or a DB instance.

1. Choose **Configuration**.

1. Under **Published logs**, choose the database log that you want to view.

**To search your RDS logs using the CloudWatch Logs console**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Log groups**.

1. In the filter box, enter **/aws/rds**.

1. For **Log Groups**, choose the name of the log group containing the log stream to search.

1. For **Log Streams**, choose the name of the log stream to search.

1. Under **Log events**, enter the filter syntax to use.

For more information, see [Searching and filtering log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html) in the *Amazon CloudWatch Logs User Guide*. For a blog tutorial explaining how to monitor RDS logs, see [Build proactive database monitoring for Amazon RDS with Amazon CloudWatch Logs, AWS Lambda, and Amazon SNS](https://aws.amazon.com/blogs/database/build-proactive-database-monitoring-for-amazon-rds-with-amazon-cloudwatch-logs-aws-lambda-and-amazon-sns/).
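
In the simplest form of the filter syntax, an event matches when it contains every unquoted term you enter. The following sketch is a simplified local approximation of that behavior for experimenting with filter terms against downloaded log lines; real CloudWatch filter patterns support much more (quoted phrases, exclusion with `-`, JSON patterns), which this ignores. The sample log lines are hypothetical.

```python
def matches(pattern_terms, message):
    """Return True if the message contains every space-separated term.
    This mimics only the plain-term subset of CloudWatch filter patterns."""
    return all(term in message for term in pattern_terms.split())

events = [
    "2024-01-01 12:00:00 [ERROR] Access denied for user",
    "2024-01-01 12:00:05 [Note] Server started",
]
print([e for e in events if matches("ERROR denied", e)])
```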

# Reading log file contents using REST
<a name="DownloadCompleteDBLogFile"></a>

Amazon RDS provides a REST endpoint that allows access to DB instance log files. This is useful if you need to write an application to stream Amazon RDS log file contents.

The syntax is:

```
GET /v13/downloadCompleteLogFile/DBInstanceIdentifier/LogFileName HTTP/1.1
Content-type: application/json
host: rds.region.amazonaws.com
```

The following parameters are required:
+ `DBInstanceIdentifier`—the name of the DB instance that contains the log file you want to download.
+ `LogFileName`—the name of the log file to be downloaded.

The response contains the contents of the requested log file, as a stream.

The following example downloads the log file named *log/ERROR.6* for the DB instance named *sample-sql* in the *us-west-2* region.

```
GET /v13/downloadCompleteLogFile/sample-sql/log/ERROR.6 HTTP/1.1
host: rds.us-west-2.amazonaws.com
X-Amz-Security-Token: AQoDYXdzEIH//////////wEa0AIXLhngC5zp9CyB1R6abwKrXHVR5efnAVN3XvR7IwqKYalFSn6UyJuEFTft9nObglx4QJ+GXV9cpACkETq=
X-Amz-Date: 20140903T233749Z
X-Amz-Algorithm: AWS4-HMAC-SHA256
X-Amz-Credential: AKIADQKE4SARGYLE/20140903/us-west-2/rds/aws4_request
X-Amz-SignedHeaders: host
X-Amz-Content-SHA256: e3b0c44298fc1c229afbf4c8996fb92427ae41e4649b934de495991b7852b855
X-Amz-Expires: 86400
X-Amz-Signature: 353a4f14b3f250142d9afc34f9f9948154d46ce7d4ec091d0cdabbcf8b40c558
```

If you specify a nonexistent DB instance, the response consists of the following error:
+ `DBInstanceNotFound`—`DBInstanceIdentifier` does not refer to an existing DB instance. (HTTP status code: 404)
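
As a sketch, the request path can be assembled from the two required parameters as follows; note that, as the example above shows, the request must also carry AWS Signature Version 4 headers, which this helper does not produce.

```python
def download_log_path(db_instance_identifier, log_file_name):
    """Build the path for the downloadCompleteLogFile REST endpoint.
    Log file names such as log/ERROR.6 contain a slash that stays in the path."""
    return f"/v13/downloadCompleteLogFile/{db_instance_identifier}/{log_file_name}"

print(download_log_path("sample-sql", "log/ERROR.6"))
# /v13/downloadCompleteLogFile/sample-sql/log/ERROR.6
```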

# Aurora MySQL database log files
<a name="USER_LogAccess.Concepts.MySQL"></a>

You can monitor the Aurora MySQL logs directly through the Amazon RDS console, Amazon RDS API, AWS CLI, or AWS SDKs. You can also access MySQL logs by directing the logs to a database table in the main database and querying that table. You can use the mysqlbinlog utility to download a binary log. 

For more information about viewing, downloading, and watching file-based database logs, see [Monitoring Amazon Aurora log files](USER_LogAccess.md).

**Topics**
+ [Overview of Aurora MySQL database logs](USER_LogAccess.MySQL.LogFileSize.md)
+ [Sending Aurora MySQL log output to tables](Appendix.MySQL.CommonDBATasks.Logs.md)
+ [Configuring Aurora MySQL binary logging for Single-AZ databases](USER_LogAccess.MySQL.BinaryFormat.md)
+ [Accessing MySQL binary logs](USER_LogAccess.MySQL.Binarylog.md)

# Overview of Aurora MySQL database logs
<a name="USER_LogAccess.MySQL.LogFileSize"></a>

You can monitor the following types of Aurora MySQL log files:
+ Error log
+ Slow query log
+ General log
+ Audit log
+ Instance log
+ IAM database authentication error log

The Aurora MySQL error log is generated by default. You can generate the slow query and general logs by setting parameters in your DB parameter group.

**Topics**
+ [Aurora MySQL error logs](#USER_LogAccess.MySQL.Errorlog)
+ [Aurora MySQL slow query and general logs](#USER_LogAccess.MySQL.Generallog)
+ [Aurora MySQL audit log](#ams-audit-log)
+ [Aurora MySQL instance log](#ams-instance-log)
+ [Log rotation and retention for Aurora MySQL](#USER_LogAccess.AMS.LogFileSize.retention)
+ [Publishing Aurora MySQL logs to Amazon CloudWatch Logs](#USER_LogAccess.MySQLDB.PublishAuroraMySQLtoCloudWatchLogs)

## Aurora MySQL error logs
<a name="USER_LogAccess.MySQL.Errorlog"></a>

Aurora MySQL writes errors in the `mysql-error.log` file. Each log file has the hour it was generated (in UTC) appended to its name. The log files also have a timestamp that helps you determine when the log entries were written.

Aurora MySQL writes to the error log only on startup, shutdown, and when it encounters errors. A DB instance can go hours or days without new entries being written to the error log. If you see no recent entries, it's because the server didn't encounter an error that would result in a log entry.

By design, the error logs are filtered so that only unexpected events, such as errors, are shown. However, the error log files also contain additional database information, such as query progress, that isn't displayed. Therefore, even without any actual errors, the size of the error logs might increase because of ongoing database activity. And while you might see a certain size in bytes or kilobytes for an error log in the AWS Management Console, the file might contain 0 bytes when you download it.

Aurora MySQL writes `mysql-error.log` to disk every 5 minutes. It appends the contents of the log to `mysql-error-running.log`.

Aurora MySQL rotates the `mysql-error-running.log` file every hour.

**Note**  
The log retention period is different between Amazon RDS and Aurora.

## Aurora MySQL slow query and general logs
<a name="USER_LogAccess.MySQL.Generallog"></a>

You can write the Aurora MySQL slow query log and the general log to a file or a database table. To do so, set parameters in your DB parameter group. For information about creating and modifying a DB parameter group, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). You must set these parameters before you can view the slow query log or general log in the Amazon RDS console or by using the Amazon RDS API, Amazon RDS CLI, or AWS SDKs.

You can control Aurora MySQL logging by using the parameters in this list:
+ `slow_query_log`: To create the slow query log, set to 1. The default is 0.
+ `general_log`: To create the general log, set to 1. The default is 0.
+ `long_query_time`: To prevent fast-running queries from being logged in the slow query log, specify a value for the shortest query runtime to be logged, in seconds. The default is 10 seconds; the minimum is 0. If `log_output` = `FILE`, you can specify a floating point value that goes to microsecond resolution. If `log_output` = `TABLE`, you must specify an integer value with second resolution. Only queries whose runtime exceeds the `long_query_time` value are logged. For example, setting `long_query_time` to 0.1 prevents any query that runs for less than 100 milliseconds from being logged.
+ `log_queries_not_using_indexes`: To log all queries that do not use an index to the slow query log, set to 1. Queries that don't use an index are logged even if their runtime is less than the value of the `long_query_time` parameter. The default is 0.
+ `log_output`: You can specify one of the following options for the `log_output` parameter. 
  + **TABLE** – Write general queries to the `mysql.general_log` table, and slow queries to the `mysql.slow_log` table.
  + **FILE** – Write both general and slow query logs to the file system.
  + **NONE** – Disable logging.

  For Aurora MySQL versions 2 and 3, the default for `log_output` is `FILE`.
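
The parameter settings above could be expressed programmatically, for example in the shape accepted by the `ModifyDBParameterGroup` API. The following is an illustrative sketch (the parameter group name is hypothetical, and the SDK call is shown commented rather than executed):

```python
# Parameter settings for slow query logging, in ModifyDBParameterGroup form:
slow_query_settings = [
    {"ParameterName": "slow_query_log", "ParameterValue": "1",
     "ApplyMethod": "immediate"},
    {"ParameterName": "long_query_time", "ParameterValue": "5",
     "ApplyMethod": "immediate"},
    {"ParameterName": "log_output", "ParameterValue": "FILE",
     "ApplyMethod": "immediate"},
]

# With the AWS SDK for Python (boto3), this could be applied roughly as:
# boto3.client("rds").modify_db_parameter_group(
#     DBParameterGroupName="my-aurora-mysql-params",  # hypothetical name
#     Parameters=slow_query_settings,
# )
print([p["ParameterName"] for p in slow_query_settings])
```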

For slow query data to appear in Amazon CloudWatch Logs, the following conditions must be met:
+ CloudWatch Logs must be configured to include slow query logs.
+ `slow_query_log` must be enabled.
+ `log_output` must be set to `FILE`.
+ The query must take longer than the time configured for `long_query_time`.
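
The checklist above can be encoded as a small predicate, shown here as a purely illustrative sketch (the function and its parameter names are inventions for this example, not an AWS API):

```python
def slow_query_reaches_cloudwatch(slow_query_log, log_output,
                                  long_query_time, query_seconds,
                                  cloudwatch_export_enabled):
    """Return True only when every condition for slow query data to
    appear in CloudWatch Logs is satisfied."""
    return (cloudwatch_export_enabled          # slow query log export configured
            and slow_query_log == 1            # slow query logging enabled
            and log_output == "FILE"           # file-based logging required
            and query_seconds > long_query_time)  # query exceeds threshold

print(slow_query_reaches_cloudwatch(1, "FILE", 10, 12.5, True))   # True
print(slow_query_reaches_cloudwatch(1, "TABLE", 10, 12.5, True))  # False
```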

For more information about the slow query and general logs, go to the following topics in the MySQL documentation:
+ [The slow query log](https://dev.mysql.com/doc/refman/8.0/en/slow-query-log.html)
+ [The general query log](https://dev.mysql.com/doc/refman/8.0/en/query-log.html)

## Aurora MySQL audit log
<a name="ams-audit-log"></a>

Audit logging for Aurora MySQL is called Advanced Auditing. To turn on Advanced Auditing, you set certain DB cluster parameters. For more information, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).

## Aurora MySQL instance log
<a name="ams-instance-log"></a>

Aurora creates a separate log file for DB instances that have auto-pause enabled. This `instance.log` file records any reasons why these DB instances couldn't be paused when expected. For more information on instance log file behavior and the Aurora auto-pause capability, see [Monitoring Aurora Serverless v2 pause and resume activity](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#autopause-logging-instance-log).

## Log rotation and retention for Aurora MySQL
<a name="USER_LogAccess.AMS.LogFileSize.retention"></a>

When logging is enabled, Amazon Aurora rotates or deletes log files at regular intervals. This measure is a precaution to reduce the possibility of a large log file either blocking database use or affecting performance. Aurora MySQL handles rotation and deletion as follows:
+ The Aurora MySQL error log file sizes are constrained to no more than 15 percent of the local storage for a DB instance. To maintain this threshold, logs are automatically rotated every hour. Aurora MySQL removes logs after 30 days or when they consume 15 percent of local storage. If the combined log file size exceeds the threshold after removing old log files, then the oldest log files are deleted until the log file size no longer exceeds the threshold.
+ Aurora MySQL removes the audit, general, and slow query logs after 24 hours or when 15 percent of storage has been consumed, whichever comes first.
+ When `FILE` logging is enabled, general log and slow query log files are examined every hour and log files more than 24 hours old are deleted. In some cases, the remaining combined log file size after the deletion might exceed the threshold of 15 percent of a DB instance's local space. In these cases, the oldest log files are deleted until the log file size no longer exceeds the threshold.
+ When `TABLE` logging is enabled, log tables aren't rotated or deleted. Log tables are truncated when the size of all logs combined is too large. You can subscribe to the `low storage` event category to be notified when log tables should be manually rotated or deleted to free up space. For more information, see [Working with Amazon RDS event notification](USER_Events.md).

  You can rotate the `mysql.general_log` table manually by calling the `mysql.rds_rotate_general_log` procedure. You can rotate the `mysql.slow_log` table by calling the `mysql.rds_rotate_slow_log` procedure.

  When you rotate log tables manually, the current log table is copied to a backup log table and the entries in the current log table are removed. If the backup log table already exists, then it is deleted before the current log table is copied to the backup. You can query the backup log table if needed. The backup log table for the `mysql.general_log` table is named `mysql.general_log_backup`. The backup log table for the `mysql.slow_log` table is named `mysql.slow_log_backup`.
+ The Aurora MySQL audit logs are rotated when the file size reaches 100 MB, and removed after 24 hours.
+ Amazon RDS rotates IAM database authentication error log files larger than 10 MB. Amazon RDS removes IAM database authentication error log files that are older than five days or larger than 100 MB.

To work with the logs from the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or AWS SDKs, set the `log_output` parameter to `FILE`. Like the Aurora MySQL error log, these log files are rotated hourly. The log files that were generated during the previous 24 hours are retained. Note that the retention period is different between Amazon RDS and Aurora.

## Publishing Aurora MySQL logs to Amazon CloudWatch Logs
<a name="USER_LogAccess.MySQLDB.PublishAuroraMySQLtoCloudWatchLogs"></a>

You can configure your Aurora MySQL DB cluster to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage. For more information, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).

# Sending Aurora MySQL log output to tables
<a name="Appendix.MySQL.CommonDBATasks.Logs"></a>

You can direct the general and slow query logs to tables on the DB instance by creating a DB parameter group and setting the `log_output` server parameter to `TABLE`. General queries are then logged to the `mysql.general_log` table, and slow queries are logged to the `mysql.slow_log` table. You can query the tables to access the log information. Enabling this logging increases the amount of data written to the database, which can degrade performance.

Both the general log and the slow query log are disabled by default. To enable logging to tables, you must also set the `general_log` and `slow_query_log` server parameters to `1`.

Log tables keep growing until the respective logging activities are turned off by resetting the appropriate parameter to `0`. A large amount of data often accumulates over time, which can use up a considerable percentage of your allocated storage space. Amazon Aurora doesn't allow you to truncate the log tables, but you can move their contents. Rotating a table saves its contents to a backup table and then creates a new empty log table. You can manually rotate the log tables with the following command line procedures, where the command prompt is indicated by `PROMPT>`: 

```
PROMPT> CALL mysql.rds_rotate_slow_log;
PROMPT> CALL mysql.rds_rotate_general_log;
```

To completely remove the old data and reclaim the disk space, call the appropriate procedure twice in succession. 

# Configuring Aurora MySQL binary logging for Single-AZ databases
<a name="USER_LogAccess.MySQL.BinaryFormat"></a>

The *binary log* is a set of log files that contain information about data modifications made to an Aurora MySQL server instance. The binary log contains information such as the following:
+ Events that describe database changes such as table creation or row modifications
+ Information about the duration of each statement that updated data
+ Events for statements that could have updated data but didn't

The binary log records statements that are sent during replication. It is also required for some recovery operations. For more information, see [The Binary Log](https://dev.mysql.com/doc/refman/8.0/en/binary-log.html) in the MySQL documentation.

Binary logs are accessible only from the primary DB instance, not from the replicas.

MySQL on Amazon Aurora supports the *row-based*, *statement-based*, and *mixed* binary logging formats. We recommend mixed unless you need a specific binlog format. For details on the different Aurora MySQL binary log formats, see [Binary Logging Formats](https://dev.mysql.com/doc/refman/8.0/en/binary-log-formats.html) in the MySQL documentation.

If you plan to use replication, the binary logging format is important because it determines the record of data changes that is recorded in the source and sent to the replication targets. For information about the advantages and disadvantages of different binary logging formats for replication, see [Advantages and Disadvantages of Statement-Based and Row-Based Replication](https://dev.mysql.com/doc/refman/8.0/en/replication-sbr-rbr.html) in the MySQL documentation.

**Important**  
With MySQL 8.0.34, MySQL deprecated the `binlog_format` parameter. In later MySQL versions, MySQL plans to remove the parameter and only support row-based replication. As a result, we recommend using row-based logging for new MySQL replication setups. For more information, see [binlog_format](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_binlog_format) in the MySQL documentation.  
MySQL versions 8.0 and 8.4 accept the parameter `binlog_format`. When using this parameter, MySQL issues a deprecation warning. In a future major release, MySQL will remove the parameter `binlog_format`.  
Statement-based replication can cause inconsistencies between the source DB cluster and a read replica. For more information, see [Determination of Safe and Unsafe Statements in Binary Logging](https://dev.mysql.com/doc/refman/8.0/en/replication-rbr-safe-unsafe.html) in the MySQL documentation.  
Enabling binary logging increases the number of write disk I/O operations to the DB cluster. You can monitor IOPS usage with the `VolumeWriteIOPs` CloudWatch metric.

**To set the MySQL binary logging format**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the DB cluster parameter group associated with the DB cluster that you want to modify.

   You can't modify a default parameter group. If the DB cluster is using a default parameter group, create a new parameter group and associate it with the DB cluster.

   For more information on parameter groups, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

1. From **Actions**, choose **Edit**.

1. Set the `binlog_format` parameter to the binary logging format of your choice (`ROW`, `STATEMENT`, or `MIXED`). You can also use the value `OFF` to turn off binary logging.
**Note**  
Setting `binlog_format` to `OFF` in the DB cluster parameter group disables the `log_bin` session variable. This disables binary logging on the Aurora MySQL DB cluster, which in turn resets the `binlog_format` session variable to the default value of `ROW` in the database.

1. Choose **Save changes** to save the updates to the DB cluster parameter group.

After you perform these steps, you must reboot the writer instance in the DB cluster for your changes to apply. In Aurora MySQL version 2.09 and lower, when you reboot the writer instance, all of the reader instances in the DB cluster are also rebooted. In Aurora MySQL version 2.10 and higher, you must reboot all of the reader instances manually. For more information, see [Rebooting an Amazon Aurora DB cluster or Amazon Aurora DB instance](USER_RebootCluster.md).
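
The console steps above could also be expressed with the `ModifyDBClusterParameterGroup` API. The following is an illustrative sketch (the parameter group name is hypothetical, and the SDK call is shown commented rather than executed); because `binlog_format` takes effect only after a reboot, the apply method is `pending-reboot`:

```python
binlog_setting = [{
    "ParameterName": "binlog_format",
    "ParameterValue": "ROW",          # ROW, STATEMENT, MIXED, or OFF
    "ApplyMethod": "pending-reboot",  # change applies after the reboot described above
}]

# With the AWS SDK for Python (boto3), this could be applied roughly as:
# boto3.client("rds").modify_db_cluster_parameter_group(
#     DBClusterParameterGroupName="my-aurora-cluster-params",  # hypothetical
#     Parameters=binlog_setting,
# )
print(binlog_setting[0]["ParameterValue"])
```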

**Important**  
Changing a DB cluster parameter group affects all DB clusters that use that parameter group. If you want to specify different binary logging formats for different Aurora MySQL DB clusters in an AWS Region, the DB clusters must use different DB cluster parameter groups, each specifying a different logging format. Assign the appropriate DB cluster parameter group to each DB cluster. For more information about Aurora MySQL parameters, see [Aurora MySQL configuration parameters](AuroraMySQL.Reference.ParameterGroups.md).

# Accessing MySQL binary logs
<a name="USER_LogAccess.MySQL.Binarylog"></a>

You can use the mysqlbinlog utility to download or stream binary logs from RDS for MySQL DB instances. The binary log is downloaded to your local computer, where you can perform actions such as replaying the log using the mysql utility. For more information about using the mysqlbinlog utility, see [Using mysqlbinlog to back up binary log files](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog-backup.html) in the MySQL documentation.

To run the mysqlbinlog utility against an Amazon RDS instance, use the following options:
+ `--read-from-remote-server` – Required.
+ `--host` – The DNS name from the endpoint of the instance.
+ `--port` – The port used by the instance.
+ `--user` – A MySQL user that has been granted the `REPLICATION SLAVE` permission.
+ `--password` – The password for the MySQL user, or omit a password value so that the utility prompts you for a password.
+ `--raw` – Download the file in binary format.
+ `--result-file` – The local file to receive the raw output.
+ `--stop-never` – Stream the binary log files.
+ `--verbose` – When you use the `ROW` binlog format, include this option to see the row events as pseudo-SQL statements. For more information on the `--verbose` option, see [mysqlbinlog row event display](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog-row-events.html) in the MySQL documentation.
+ Specify the names of one or more binary log files. To get a list of the available logs, use the SQL command `SHOW BINARY LOGS`.

For more information about mysqlbinlog options, see [mysqlbinlog — Utility for processing binary log files](https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog.html) in the MySQL documentation.

The following examples show how to use the mysqlbinlog utility.

For Linux, macOS, or Unix:

```
mysqlbinlog \
    --read-from-remote-server \
    --host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com \
    --port=3306  \
    --user ReplUser \
    --password \
    --raw \
    --verbose \
    --result-file=/tmp/ \
    binlog.00098
```

For Windows:

```
mysqlbinlog ^
    --read-from-remote-server ^
    --host=MySQLInstance1.cg034hpkmmjt.region.rds.amazonaws.com ^
    --port=3306  ^
    --user ReplUser ^
    --password ^
    --raw ^
    --verbose ^
    --result-file=/tmp/ ^
    binlog.00098
```

Binary logs must remain available on the DB instance for the mysqlbinlog utility to access them. To ensure their availability, use the [mysql.rds\_set\_configuration](mysql-stored-proc-configuring.md#mysql_rds_set_configuration) stored procedure and specify a period with enough time for you to download the logs. If this configuration isn't set, Amazon RDS purges the binary logs as soon as possible, leading to gaps in the binary logs that the mysqlbinlog utility retrieves. 

The following example sets the retention period to 1 day.

```
call mysql.rds_set_configuration('binlog retention hours', 24);
```

To display the current setting, use the [mysql.rds\_show\_configuration](mysql-stored-proc-configuring.md#mysql_rds_show_configuration) stored procedure.

```
call mysql.rds_show_configuration;
```

# Aurora PostgreSQL database log files
<a name="USER_LogAccess.Concepts.PostgreSQL"></a>

You can monitor the following types of Aurora PostgreSQL log files:
+ PostgreSQL log
+ Instance log
+ IAM database authentication error log
**Note**  
To enable IAM database authentication error logs, you must first enable IAM database authentication for your Aurora PostgreSQL DB cluster. For more information about enabling IAM database authentication, see [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md).

Aurora PostgreSQL logs database activities to the default PostgreSQL log file. For an on-premises PostgreSQL DB instance, these messages are stored locally in `log/postgresql.log`. For an Aurora PostgreSQL DB cluster, the log file is available on the Aurora cluster. These logs are also accessible via the AWS Management Console, where you can view or download them. The default logging level captures login failures, fatal server errors, deadlocks, and query failures.

For more information about how you can view, download, and watch file-based database logs, see [Monitoring Amazon Aurora log files](USER_LogAccess.md). To learn more about PostgreSQL logs, see [Working with Amazon RDS and Aurora PostgreSQL logs: Part 1](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/) and [ Working with Amazon RDS and Aurora PostgreSQL logs: Part 2](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-2/). 

In addition to the standard PostgreSQL logs discussed in this topic, Aurora PostgreSQL also supports the PostgreSQL Audit extension (`pgAudit`). Most regulated industries and government agencies need to maintain an audit log or audit trail of changes made to data to comply with legal requirements. For information about installing and using pgAudit, see [Using pgAudit to log database activity](Appendix.PostgreSQL.CommonDBATasks.pgaudit.md).

Aurora creates a separate log file for DB instances that have auto-pause enabled. This `instance.log` file records any reasons why these DB instances couldn't be paused when expected. For more information on instance log file behavior and Aurora auto-pause capability, see [ Monitoring Aurora Serverless v2 pause and resume activity](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2-administration.html#autopause-logging-instance-log).

**Topics**
+ [

# Parameters for logging in Aurora PostgreSQL
](USER_LogAccess.Concepts.PostgreSQL.overview.parameter-groups.md)
+ [

# Turning on query logging for your Aurora PostgreSQL DB cluster
](USER_LogAccess.Concepts.PostgreSQL.Query_Logging.md)

# Parameters for logging in Aurora PostgreSQL
<a name="USER_LogAccess.Concepts.PostgreSQL.overview.parameter-groups"></a>

You can customize the logging behavior for your Aurora PostgreSQL DB cluster by modifying various parameters. In the following table, you can find the parameters that affect how long the logs are stored, when to rotate the log, and whether to output the log in CSV (comma-separated values) format. You can also find settings such as whether the text output is sent to STDERR. To change settings for the parameters that are modifiable, use a custom DB cluster parameter group for your Aurora PostgreSQL DB cluster. For more information, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md). 


| Parameter | Default | Description | 
| --- | --- | --- | 
| log\_destination | stderr | Sets the output format for the log. The default is `stderr`, but you can also specify comma-separated values (CSV) by adding `csvlog` to the setting. For more information, see [Setting the log destination (`stderr`, `csvlog`)](#USER_LogAccess.Concepts.PostgreSQL.Log_Format).  | 
| log\_filename | postgresql.log.%Y-%m-%d-%H%M  | Specifies the pattern for the log file name. In addition to the default, this parameter supports `postgresql.log.%Y-%m-%d` and `postgresql.log.%Y-%m-%d-%H` for the filename pattern. For Aurora PostgreSQL version 17.4 and later, you can't modify this parameter.  | 
| log\_line\_prefix | %t:%r:%u@%d:[%p]: | Defines the prefix for each log line that gets written to `stderr`, to note the time (%t), remote host (%r), user (%u), database (%d), and process ID (%p). | 
| log\_rotation\_age | 60 | Minutes after which the log file is automatically rotated. You can change this value within the range of 1 to 1,440 minutes. For more information, see [Setting log file rotation](#USER_LogAccess.Concepts.PostgreSQL.log_rotation).  | 
| log\_rotation\_size | – | The size (kB) at which the log is automatically rotated. You can change this value within the range of 50,000 to 1,000,000 kilobytes. To learn more, see [Setting log file rotation](#USER_LogAccess.Concepts.PostgreSQL.log_rotation). | 
| rds.log\_retention\_period | 4320 | PostgreSQL logs that are older than the specified number of minutes are deleted. The default value of 4320 minutes deletes log files after 3 days. For more information, see [Setting the log retention period](#USER_LogAccess.Concepts.PostgreSQL.log_retention_period). | 
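As a sketch of applying one of these settings with the AWS CLI, the following builds a `modify-db-cluster-parameter-group` call, assuming a custom DB cluster parameter group with the placeholder name `my-aurora-pg-params`. The command is echoed rather than executed so you can review it first:

```
# Placeholder parameter group name; substitute the group attached to your cluster.
PARAM_GROUP="my-aurora-pg-params"

# Build the call that raises log retention to 7 days (10,080 minutes).
CMD="aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name ${PARAM_GROUP} \
  --parameters ParameterName=rds.log_retention_period,ParameterValue=10080,ApplyMethod=immediate"

# Echo for review; run the command itself to apply the change.
echo "${CMD}"
```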

To identify application issues, you can look for query failures, login failures, deadlocks, and fatal server errors in the log. For example, suppose that you converted a legacy application from Oracle to Aurora PostgreSQL, but not all queries converted correctly. These incorrectly formatted queries generate error messages that you can find in the logs to help identify problems. For more information about logging queries, see [Turning on query logging for your Aurora PostgreSQL DB cluster ](USER_LogAccess.Concepts.PostgreSQL.Query_Logging.md). 

In the following topics, you can find information about how to set various parameters that control the basic details for your PostgreSQL logs. 

**Topics**
+ [

## Setting the log retention period
](#USER_LogAccess.Concepts.PostgreSQL.log_retention_period)
+ [

## Setting log file rotation
](#USER_LogAccess.Concepts.PostgreSQL.log_rotation)
+ [

## Setting the log destination (`stderr`, `csvlog`)
](#USER_LogAccess.Concepts.PostgreSQL.Log_Format)
+ [

## Understanding the log\_line\_prefix parameter
](#USER_LogAccess.Concepts.PostgreSQL.Log_Format.log-line-prefix)

## Setting the log retention period
<a name="USER_LogAccess.Concepts.PostgreSQL.log_retention_period"></a>

The `rds.log_retention_period` parameter specifies how long your Aurora PostgreSQL DB cluster keeps its log files. The default setting is 3 days (4,320 minutes), but you can set this value to anywhere from 1 day (1,440 minutes) to 7 days (10,080 minutes). Be sure that your Aurora PostgreSQL DB cluster has sufficient storage to hold the log files for the period of time.

We recommend that you have your logs routinely published to Amazon CloudWatch Logs so that you can view and analyze system data long after the logs have been removed from your Aurora PostgreSQL DB cluster. For more information, see [Publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs](AuroraPostgreSQL.CloudWatch.md). After you set up CloudWatch publishing, Aurora doesn't delete a log until after it's published to CloudWatch Logs.  

Amazon Aurora compresses older PostgreSQL logs when storage for the DB instance reaches a threshold. Aurora compresses the files using the gzip compression utility. For more information, see the [gzip](https://www.gzip.org) website.
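If you download one of these compressed files, you can restore the text with standard gzip tools. A minimal sketch, using a stand-in file in place of a real downloaded log:

```
# Create a stand-in for a downloaded, gzip-compressed PostgreSQL log file.
printf 'sample log line\n' > /tmp/postgresql.log.2022-10-05
gzip -f /tmp/postgresql.log.2022-10-05

# Decompress to stdout without deleting the .gz file.
gunzip -c /tmp/postgresql.log.2022-10-05.gz
```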

When storage for the DB instance is low and all available logs are compressed, you get a warning such as the following:

```
Warning: local storage for PostgreSQL log files is critically low for 
this Aurora PostgreSQL instance, and could lead to a database outage.
```

If there's not enough storage, Aurora might delete compressed PostgreSQL logs before the end of a specified retention period. If that happens, you see a message similar to the following:

```
The oldest PostgreSQL log files were deleted due to local storage constraints.
```

## Setting log file rotation
<a name="USER_LogAccess.Concepts.PostgreSQL.log_rotation"></a>

Aurora creates new log files every hour by default. The timing is controlled by the `log_rotation_age` parameter. This parameter has a default value of 60 (minutes), but you can set it to anywhere from 1 minute to 24 hours (1,440 minutes). When it's time for rotation, a new distinct log file is created. The file is named according to the pattern specified by the `log_filename` parameter. 

Log files can also be rotated according to their size, as specified in the `log_rotation_size` parameter. This parameter specifies that the log should be rotated when it reaches the specified size (in kilobytes). The default `log_rotation_size` is 100000 kB (kilobytes) for an Aurora PostgreSQL DB cluster, but you can set this value to anywhere from 50,000 to 1,000,000 kilobytes. 

The log file names are based on the file name pattern specified in the `log_filename` parameter. The available settings for this parameter are as follows:
+ `postgresql.log.%Y-%m-%d` – Default format for the log file name. Includes the year, month, and date in the name of the log file.
+ `postgresql.log.%Y-%m-%d-%H` – Includes the hour in the log file name format.
+ `postgresql.log.%Y-%m-%d-%H%M` – Includes hour:minute in the log file name format.

If you set the `log_rotation_age` parameter to less than 60 minutes, set the `log_filename` parameter to the minute format.

For more information, see [https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-AGE](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-AGE) and [https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-SIZE](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ROTATION-SIZE) in the PostgreSQL documentation.
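Retention and rotation together bound how many log files exist at once. As a rough sketch using the defaults described above (4,320-minute retention, 60-minute rotation):

```
# Defaults from the parameter table: 3-day retention, hourly rotation.
retention_minutes=4320
rotation_minutes=60

# Approximate count of hourly log files kept on the instance at any time.
echo $(( retention_minutes / rotation_minutes ))   # 72
```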

## Setting the log destination (`stderr`, `csvlog`)
<a name="USER_LogAccess.Concepts.PostgreSQL.Log_Format"></a>

By default, Aurora PostgreSQL generates logs in standard error (stderr) format. This format is the default setting for the `log_destination` parameter. Each message is prefixed using the pattern specified in the `log_line_prefix` parameter. For more information, see [Understanding the log\_line\_prefix parameter](#USER_LogAccess.Concepts.PostgreSQL.Log_Format.log-line-prefix). 

Aurora PostgreSQL can also generate the logs in `csvlog` format. The `csvlog` is useful for analyzing the log data as comma-separated values (CSV) data. For example, suppose that you use the `log_fdw` extension to work with your logs as foreign tables. The foreign table created on `stderr` log files contains a single column with log event data. By adding `csvlog` to the `log_destination` parameter, you get the log file in the CSV format with demarcations for the multiple columns of the foreign table. You can now sort and analyze your logs more easily. 

If you specify `csvlog` for this parameter, be aware that both `stderr` and `csvlog` files are generated. Be sure to monitor the storage consumed by the logs, taking into account the `rds.log_retention_period` and other settings that affect log storage and turnover. Using `stderr` and `csvlog` more than doubles the storage consumed by the logs.

If you add `csvlog` to `log_destination` and you want to revert to `stderr` alone, you need to reset the parameter. To do so, open the Amazon RDS console and then open the custom DB cluster parameter group for your instance. Choose the `log_destination` parameter, choose **Edit parameter**, and then choose **Reset**. 

For more information about configuring logging, see [ Working with Amazon RDS and Aurora PostgreSQL logs: Part 1](https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/).

## Understanding the log\_line\_prefix parameter
<a name="USER_LogAccess.Concepts.PostgreSQL.Log_Format.log-line-prefix"></a>

The `stderr` log format prefixes each log message with the details specified by the `log_line_prefix` parameter. The default value is:

```
%t:%r:%u@%d:[%p]:
```

Starting from Aurora PostgreSQL version 16, you can also choose:

```
%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a
```

Each log entry sent to stderr includes the following information based on the selected value:
+ `%t` – Time of log entry without milliseconds
+ `%m` – Time of log entry with milliseconds
+ `%r` – Remote host address
+ `%u@%d` – User name @ database name
+ `[%p]` – Process ID if available
+ `%l` – Log line number per session
+ `%e` – SQL error code
+ `%s` – Process start timestamp
+ `%v` – Virtual transaction ID
+ `%x` – Transaction ID
+ `%c` – Session ID
+ `%q` – Non-session terminator
+ `%a` – Application name
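With the default prefix, the colon delimiters make these fields easy to pull apart with standard tools. A sketch, assuming an illustrative sample line in the default `%t:%r:%u@%d:[%p]:` format (splitting on `:` leaves `user@database` in the fifth field, because the timestamp itself contains two colons):

```
# Illustrative log line using the default log_line_prefix format.
LINE='2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG:  statement: SELECT 1;'

# Fields 1-3 are the timestamp (it contains ':'), field 4 is the remote host,
# field 5 is user@database, and field 6 is the process ID.
echo "$LINE" | awk -F':' '{print $5}'   # prints postgres@labdb
```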

# Turning on query logging for your Aurora PostgreSQL DB cluster
<a name="USER_LogAccess.Concepts.PostgreSQL.Query_Logging"></a>

You can collect more detailed information about your database activities, including queries, queries waiting for locks, checkpoints, and many other details by setting some of the parameters listed in the following table. This topic focuses on logging queries.


| Parameter | Default | Description | 
| --- | --- | --- | 
| log\_connections | – | Logs each successful connection. To learn how to use this parameter with `log_disconnections` to detect connection churn, see [Managing Aurora PostgreSQL connection churn with pooling](AuroraPostgreSQL.BestPractices.connection_pooling.md).  | 
| log\_disconnections | – | Logs the end of each session and its duration. To learn how to use this parameter with `log_connections` to detect connection churn, see [Managing Aurora PostgreSQL connection churn with pooling](AuroraPostgreSQL.BestPractices.connection_pooling.md). | 
| log\_checkpoints | – |  Not applicable for Aurora PostgreSQL. | 
| log\_lock\_waits | – | Logs long lock waits. By default, this parameter isn't set. | 
| log\_min\_duration\_sample | – | (ms) Sets the minimum execution time above which a sample of statements is logged. Sample size is set using the log\_statement\_sample\_rate parameter. | 
| log\_min\_duration\_statement | – | Any SQL statement that runs for at least the specified amount of time gets logged. By default, this parameter isn't set. Turning on this parameter can help you find unoptimized queries. | 
| log\_statement | – | Sets the type of statements logged. By default, this parameter isn't set, but you can change it to `all`, `ddl`, or `mod` to specify the types of SQL statements that you want logged. If you specify anything other than `none` for this parameter, you should also take additional steps to prevent the exposure of passwords in the log files. For more information, see [Mitigating risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk).  | 
| log\_statement\_sample\_rate | – | The percentage of statements exceeding the time specified in `log_min_duration_sample` to be logged, expressed as a floating point value between 0.0 and 1.0.  | 
| log\_statement\_stats | – | Writes cumulative performance statistics to the server log. | 

## Using logging to find slow performing queries
<a name="USER_LogAccess.Concepts.PostgreSQL.Query_Logging.using"></a>

You can log SQL statements and queries to help find slow performing queries. You turn on this capability by modifying the settings in the `log_statement` and `log_min_duration_statement` parameters as outlined in this section. Before turning on query logging for your Aurora PostgreSQL DB cluster, you should be aware of possible password exposure in the logs and how to mitigate the risks. For more information, see [Mitigating risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk). 

Following, you can find reference information about the `log_statement` and `log_min_duration_statement` parameters.

**log\_statement**

This parameter specifies the type of SQL statements that should get sent to the log. The default value is `none`. If you change this parameter to `all`, `ddl`, or `mod`, be sure to apply recommended actions to mitigate the risk of exposing passwords in the logs. For more information, see [Mitigating risk of password exposure when using query logging](#USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk). 

**all**  
Logs all statements. This setting is recommended for debugging purposes.

**ddl**  
Logs all data definition language (DDL) statements, such as CREATE, ALTER, DROP, and so on.

**mod**  
Logs all DDL statements and data manipulation language (DML) statements, such as INSERT, UPDATE, and DELETE, which modify the data.

**none**  
No SQL statements get logged. We recommend this setting to avoid the risk of exposing passwords in the logs.

**log\_min\_duration\_statement**

Any SQL statement that runs for at least the specified amount of time gets logged. By default, this parameter isn't set. Turning on this parameter can help you find unoptimized queries.

**\-1–2147483647**  
The number of milliseconds (ms) of runtime over which a statement gets logged.

**To set up query logging**

These steps assume that your Aurora PostgreSQL DB cluster uses a custom DB cluster parameter group. 

1. Set the `log_statement` parameter to `all`. The following example shows the information that is written to the `postgresql.log` file with this parameter setting.

   ```
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: statement: SELECT feedback, s.sentiment,s.confidence
   FROM support,aws_comprehend.detect_sentiment(feedback, 'en') s
   ORDER BY s.confidence DESC;
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:LOG: QUERY STATISTICS
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:DETAIL: ! system usage stats:
   ! 0.017355 s user, 0.000000 s system, 0.168593 s elapsed
   ! [0.025146 s user, 0.000000 s system total]
   ! 36644 kB max resident size
   ! 0/8 [0/8] filesystem blocks in/out
   ! 0/733 [0/1364] page faults/reclaims, 0 [0] swaps
   ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
   ! 19/0 [27/0] voluntary/involuntary context switches
   2022-10-05 22:05:52 UTC:52.95.4.1(11335):postgres@labdb:[3639]:STATEMENT: SELECT feedback, s.sentiment,s.confidence
   FROM support,aws_comprehend.detect_sentiment(feedback, 'en') s
   ORDER BY s.confidence DESC;
   2022-10-05 22:05:56 UTC:52.95.4.1(11335):postgres@labdb:[3639]:ERROR: syntax error at or near "ORDER" at character 1
   2022-10-05 22:05:56 UTC:52.95.4.1(11335):postgres@labdb:[3639]:STATEMENT: ORDER BY s.confidence DESC;
   ----------------------- END OF LOG ----------------------
   ```

1. Set the `log_min_duration_statement` parameter. The following example shows the information that is written to the `postgresql.log` file when the parameter is set to `1`.

   Queries that exceed the duration specified in the `log_min_duration_statement` parameter are logged. The following shows an example. You can view the log file for your Aurora PostgreSQL DB cluster in the Amazon RDS Console. 

   ```
   2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: statement: DROP table comments;
   2022-10-05 19:05:19 UTC:52.95.4.1(6461):postgres@labdb:[6144]:LOG: duration: 167.754 ms
   2022-10-05 19:08:07 UTC::@:[355]:LOG: checkpoint starting: time
   2022-10-05 19:08:08 UTC::@:[355]:LOG: checkpoint complete: wrote 11 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.013 s, sync=0.006 s, total=1.033 s; sync files=8, longest=0.004 s, average=0.001 s; distance=131028 kB, estimate=131028 kB
   ----------------------- END OF LOG ----------------------
   ```

### Mitigating risk of password exposure when using query logging
<a name="USER_LogAccess.Concepts.PostgreSQL.Query_Logging.mitigate-risk"></a>

We recommend that you keep `log_statement` set to `none` to avoid exposing passwords. If you set `log_statement` to `all`, `ddl`, or `mod`, we recommend that you take one or more of the following steps.
+ For the client, encrypt sensitive information. For more information, see [Encryption Options](https://www.postgresql.org/docs/current/encryption-options.html) in the PostgreSQL documentation. Use the `ENCRYPTED` (and `UNENCRYPTED`) options of the `CREATE` and `ALTER` statements. For more information, see [CREATE USER](https://www.postgresql.org/docs/current/sql-createuser.html) in the PostgreSQL documentation.
+ For your Aurora PostgreSQL DB cluster, set up and use the PostgreSQL Auditing (pgAudit) extension. This extension redacts sensitive information in CREATE and ALTER statements sent to the log. For more information, see [Using pgAudit to log database activity](Appendix.PostgreSQL.CommonDBATasks.pgaudit.md). 
+ Restrict access to the CloudWatch logs.
+ Use stronger authentication mechanisms such as IAM.
