

# Managing Amazon Aurora MySQL
<a name="AuroraMySQL.Managing"></a>

The following sections discuss managing an Amazon Aurora MySQL DB cluster.

**Topics**
+ [Managing performance and scaling for Amazon Aurora MySQL](AuroraMySQL.Managing.Performance.md)
+ [Backtracking an Aurora DB cluster](AuroraMySQL.Managing.Backtrack.md)
+ [Testing Amazon Aurora MySQL using fault injection queries](AuroraMySQL.Managing.FaultInjectionQueries.md)
+ [Altering tables in Amazon Aurora using Fast DDL](AuroraMySQL.Managing.FastDDL.md)
+ [Displaying volume status for an Aurora MySQL DB cluster](AuroraMySQL.Managing.VolumeStatus.md)

# Managing performance and scaling for Amazon Aurora MySQL
<a name="AuroraMySQL.Managing.Performance"></a>

## Scaling Aurora MySQL DB instances
<a name="AuroraMySQL.Managing.Performance.InstanceScaling"></a>

You can scale Aurora MySQL DB instances in two ways, instance scaling and read scaling. For more information about read scaling, see [Read scaling](Aurora.Managing.Performance.md#Aurora.Managing.Performance.ReadScaling).

You can scale your Aurora MySQL DB cluster by modifying the DB instance class for each DB instance in the DB cluster. Aurora MySQL supports several DB instance classes optimized for Aurora. Don't use db.t2 or db.t3 instance classes for Aurora clusters larger than 40 TB. For the specifications of the DB instance classes supported by Aurora MySQL, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).

**Note**  
We recommend using the T DB instance classes only for development and test servers, or other non-production servers. For more details on the T instance classes, see [Using T instance classes for development and testing](AuroraMySQL.BestPractices.Performance.md#AuroraMySQL.BestPractices.T2Medium).

## Maximum connections to an Aurora MySQL DB instance
<a name="AuroraMySQL.Managing.MaxConnections"></a><a name="max_connections"></a>

The maximum number of connections allowed to an Aurora MySQL DB instance is determined by the `max_connections` parameter in the instance-level parameter group for the DB instance.

The following table lists the resulting default value of `max_connections` for each DB instance class available to Aurora MySQL. You can increase the maximum number of connections to your Aurora MySQL DB instance by scaling the instance up to a DB instance class with more memory, or by setting a larger value for the `max_connections` parameter in the DB parameter group for your instance, up to 16,000.

**Tip**  
If your applications frequently open and close connections, or keep a large number of long-lived connections open, we recommend that you use Amazon RDS Proxy. RDS Proxy is a fully managed, highly available database proxy that uses connection pooling to share database connections securely and efficiently. To learn more about RDS Proxy, see [Amazon RDS Proxy for Aurora](rds-proxy.md).

 For details about how Aurora Serverless v2 instances handle this parameter, see [Maximum connections for Aurora Serverless v2](aurora-serverless-v2.setting-capacity.md#aurora-serverless-v2.max-connections). 


| Instance class | max\_connections default value | 
| --- | --- | 
|  db.t2.small  |  45  | 
|  db.t2.medium  |  90  | 
|  db.t3.small  |  45  | 
|  db.t3.medium  |  90  | 
|  db.t3.large  |  135  | 
|  db.t4g.medium  |  90  | 
|  db.t4g.large  |  135  | 
|  db.r3.large  |  1000  | 
|  db.r3.xlarge  |  2000  | 
|  db.r3.2xlarge  |  3000  | 
|  db.r3.4xlarge  |  4000  | 
|  db.r3.8xlarge  |  5000  | 
|  db.r4.large  |  1000  | 
|  db.r4.xlarge  |  2000  | 
|  db.r4.2xlarge  |  3000  | 
|  db.r4.4xlarge  |  4000  | 
|  db.r4.8xlarge  |  5000  | 
|  db.r4.16xlarge  |  6000  | 
|  db.r5.large  |  1000  | 
|  db.r5.xlarge  |  2000  | 
|  db.r5.2xlarge  |  3000  | 
|  db.r5.4xlarge  |  4000  | 
|  db.r5.8xlarge  |  5000  | 
|  db.r5.12xlarge  |  6000  | 
|  db.r5.16xlarge  |  6000  | 
|  db.r5.24xlarge  |  7000  | 
| db.r6g.large | 1000 | 
| db.r6g.xlarge | 2000 | 
| db.r6g.2xlarge | 3000 | 
| db.r6g.4xlarge | 4000 | 
| db.r6g.8xlarge | 5000 | 
| db.r6g.12xlarge | 6000 | 
| db.r6g.16xlarge | 6000 | 
| db.r6i.large | 1000 | 
| db.r6i.xlarge | 2000 | 
| db.r6i.2xlarge | 3000 | 
| db.r6i.4xlarge | 4000 | 
| db.r6i.8xlarge | 5000 | 
| db.r6i.12xlarge | 6000 | 
| db.r6i.16xlarge | 6000 | 
| db.r6i.24xlarge | 7000 | 
| db.r6i.32xlarge | 7000 | 
| db.r7g.large | 1000 | 
| db.r7g.xlarge | 2000 | 
| db.r7g.2xlarge | 3000 | 
| db.r7g.4xlarge | 4000 | 
| db.r7g.8xlarge | 5000 | 
| db.r7g.12xlarge | 6000 | 
| db.r7g.16xlarge | 6000 | 
| db.r7i.large | 1000 | 
| db.r7i.xlarge | 2000 | 
| db.r7i.2xlarge | 3000 | 
| db.r7i.4xlarge | 4000 | 
| db.r7i.8xlarge | 5000 | 
| db.r7i.12xlarge | 6000 | 
| db.r7i.16xlarge | 6000 | 
| db.r7i.24xlarge | 7000 | 
| db.r7i.48xlarge | 8000 | 
| db.r8g.large | 1000 | 
| db.r8g.xlarge | 2000 | 
| db.r8g.2xlarge | 3000 | 
| db.r8g.4xlarge | 4000 | 
| db.r8g.8xlarge | 5000 | 
| db.r8g.12xlarge | 6000 | 
| db.r8g.16xlarge | 6000 | 
| db.r8g.24xlarge | 7000 | 
| db.r8g.48xlarge | 8000 | 
| db.x2g.large | 2000 | 
| db.x2g.xlarge | 3000 | 
| db.x2g.2xlarge | 4000 | 
| db.x2g.4xlarge | 5000 | 
| db.x2g.8xlarge | 6000 | 
| db.x2g.12xlarge | 7000 | 
| db.x2g.16xlarge | 7000 | 

**Tip**  
The `max_connections` calculation uses a base-2 logarithm (not a natural logarithm) and the `DBInstanceClassMemory` value, in bytes, for the Aurora MySQL instance class. The parameter accepts only integer values, so the decimal portion of the calculated value is truncated. The resulting defaults follow these increments:  
+ Increments of 1000 between progressively larger R3, R4, and R5 instance classes.
+ Increments of 45 between the memory variants of the T2 and T3 instance classes.

For example, for db.r6g.large the formula calculates 1069.2, but the default is set to 1000 to maintain the consistent incremental pattern.

If you create a new parameter group to customize your own default for the connection limit, you'll see that the default connection limit is derived using a formula based on the `DBInstanceClassMemory` value. As shown in the preceding table, the formula produces connection limits that increase by 1000 as the memory doubles between progressively larger R3, R4, and R5 instances, and by 45 for different memory sizes of T2 and T3 instances.

See [Specifying DB parameters](USER_ParamValuesRef.md) for more details on how `DBInstanceClassMemory` is calculated.
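As a rough illustration of how the table values arise, the following sketch computes a default from the base-2 log formula described above. The constants are assumptions drawn from the default Aurora MySQL cluster parameter group; the actual `DBInstanceClassMemory` value is lower than the instance class's nominal memory, which is why the published defaults round down to the 1000-per-doubling pattern.

```python
import math

# Sketch of the default max_connections derivation, assuming the formula
# GREATEST(log2(mem / 805306368) * 45, log2(mem / 8187281408) * 1000)
# from the default Aurora MySQL cluster parameter group, where mem is
# DBInstanceClassMemory in bytes (an assumption, not the authoritative rule).
def default_max_connections(memory_bytes: int) -> int:
    t_branch = math.log2(memory_bytes / 805306368) * 45     # T instance classes
    r_branch = math.log2(memory_bytes / 8187281408) * 1000  # R instance classes
    # The parameter accepts only integers; the decimal portion is truncated.
    return int(max(t_branch, r_branch))

# Using the full nominal 16 GiB of a db.r6g.large gives 1069 (the raw 1069.2
# mentioned in the tip); the published default for that class is 1000.
print(default_max_connections(16 * 2**30))
```

Doubling the memory value adds exactly 1000 to the R-class branch before truncation, which reproduces the stepwise pattern in the table.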

Aurora MySQL and RDS for MySQL DB instances have different amounts of memory overhead. Therefore, the `max_connections` value can be different for Aurora MySQL and RDS for MySQL DB instances that use the same instance class. The values in the table only apply to Aurora MySQL DB instances.

**Note**  
The connection limits for T2 and T3 instances are much lower because, with Aurora, those instance classes are intended only for development and test scenarios, not for production workloads.

The default connection limits are tuned for systems that use the default values for other major memory consumers, such as the buffer pool and query cache. If you change those other settings for your cluster, consider adjusting the connection limit to account for the increase or decrease in available memory on the DB instances.

## Temporary storage limits for Aurora MySQL
<a name="AuroraMySQL.Managing.TempStorage"></a>

Aurora MySQL stores tables and indexes in the Aurora storage subsystem. Aurora MySQL uses separate temporary or local storage for nonpersistent temporary files and non-InnoDB temporary tables. Local storage also includes files that are used for such purposes as sorting large datasets during query processing or for index build operations. It doesn't include InnoDB temporary tables.

For more information on temporary tables in Aurora MySQL version 3, see [New temporary table behavior in Aurora MySQL version 3](ams3-temptable-behavior.md). For more information on temporary tables in version 2, see [Temporary tablespace behavior in Aurora MySQL version 2](AuroraMySQL.CompareMySQL57.md#AuroraMySQL.TempTables57).

The data and temporary files on these volumes are lost when starting and stopping the DB instance, and during host replacement.

These local storage volumes are backed by Amazon Elastic Block Store (EBS) and can be extended by using a larger DB instance class. For more information about storage, see [Amazon Aurora storage](Aurora.Overview.StorageReliability.md).

Local storage is also used for importing data from Amazon S3 using `LOAD DATA FROM S3` or `LOAD XML FROM S3`, and for exporting data to Amazon S3 using `SELECT INTO OUTFILE S3`. For more information on importing from and exporting to S3, see the following:
+ [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md)
+ [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md)

Aurora MySQL uses separate permanent storage for error logs, general logs, slow query logs, and audit logs for most of the Aurora MySQL DB instance classes (not including burstable-performance instance class types such as db.t2, db.t3, and db.t4g). The data on this volume is retained when starting and stopping the DB instance, and during host replacement.

This permanent storage volume is also backed by Amazon EBS and has a fixed size according to the DB instance class. It can't be extended by using a larger DB instance class.

The following table shows the maximum amount of temporary and permanent storage available for each Aurora MySQL DB instance class. For more information on DB instance class support for Aurora, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).


| DB instance class | Maximum temporary/local storage available (GiB) | Additional maximum storage available for log files (GiB) | 
| --- | --- | --- | 
| db.x2g.16xlarge | 1280 | 500 | 
| db.x2g.12xlarge | 960 | 500 | 
| db.x2g.8xlarge | 640 | 500 | 
| db.x2g.4xlarge | 320 | 500 | 
| db.x2g.2xlarge | 160 | 60 | 
| db.x2g.xlarge | 80 | 60 | 
| db.x2g.large | 40 | 60 | 
| db.r8g.48xlarge | 3840 | 500 | 
| db.r8g.24xlarge | 1920 | 500 | 
| db.r8g.16xlarge | 1280 | 500 | 
| db.r8g.12xlarge | 960 | 500 | 
| db.r8g.8xlarge | 640 | 500 | 
| db.r8g.4xlarge | 320 | 500 | 
| db.r8g.2xlarge | 160 | 60 | 
| db.r8g.xlarge | 80 | 60 | 
| db.r8g.large | 32 | 60 | 
| db.r7i.48xlarge | 3840 | 500 | 
| db.r7i.24xlarge | 1920 | 500 | 
| db.r7i.16xlarge | 1280 | 500 | 
| db.r7i.12xlarge | 960 | 500 | 
| db.r7i.8xlarge | 640 | 500 | 
| db.r7i.4xlarge | 320 | 500 | 
| db.r7i.2xlarge | 160 | 60 | 
| db.r7i.xlarge | 80 | 60 | 
| db.r7i.large | 32 | 60 | 
| db.r7g.16xlarge | 1280 | 500 | 
| db.r7g.12xlarge | 960 | 500 | 
| db.r7g.8xlarge | 640 | 500 | 
| db.r7g.4xlarge | 320 | 500 | 
| db.r7g.2xlarge | 160 | 60 | 
| db.r7g.xlarge | 80 | 60 | 
| db.r7g.large | 32 | 60 | 
| db.r6i.32xlarge | 2560 | 500 | 
| db.r6i.24xlarge | 1920 | 500 | 
| db.r6i.16xlarge | 1280 | 500 | 
| db.r6i.12xlarge | 960 | 500 | 
| db.r6i.8xlarge | 640 | 500 | 
| db.r6i.4xlarge | 320 | 500 | 
| db.r6i.2xlarge | 160 | 60 | 
| db.r6i.xlarge | 80 | 60 | 
| db.r6i.large | 32 | 60 | 
| db.r6g.16xlarge | 1280 | 500 | 
| db.r6g.12xlarge | 960 | 500 | 
| db.r6g.8xlarge | 640 | 500 | 
| db.r6g.4xlarge | 320 | 500 | 
| db.r6g.2xlarge | 160 | 60 | 
| db.r6g.xlarge | 80 | 60 | 
| db.r6g.large | 32 | 60 | 
| db.r5.24xlarge | 1920 | 500 | 
| db.r5.16xlarge | 1280 | 500 | 
| db.r5.12xlarge | 960 | 500 | 
| db.r5.8xlarge | 640 | 500 | 
| db.r5.4xlarge | 320 | 500 | 
| db.r5.2xlarge | 160 | 60 | 
| db.r5.xlarge | 80 | 60 | 
| db.r5.large | 32 | 60 | 
| db.r4.16xlarge | 1280 | 500 | 
| db.r4.8xlarge | 640 | 500 | 
| db.r4.4xlarge | 320 | 500 | 
| db.r4.2xlarge | 160 | 60 | 
| db.r4.xlarge | 80 | 60 | 
| db.r4.large | 32 | 60 | 
| db.t4g.large | 32 | – | 
| db.t4g.medium | 32 | – | 
| db.t3.large | 32 | – | 
| db.t3.medium | 32 | – | 
| db.t3.small | 32 | – | 
| db.t2.medium | 32 | – | 
| db.t2.small | 32 | – | 

**Important**  
 These values represent the theoretical maximum amount of free storage on each DB instance. The actual local storage available to you might be lower. Aurora uses some local storage for its management processes, and the DB instance uses some local storage even before you load any data. You can monitor the temporary storage available for a specific DB instance with the `FreeLocalStorage` CloudWatch metric, described in [Amazon CloudWatch metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md). You can check the amount of free storage at the present time. You can also chart the amount of free storage over time. Monitoring the free storage over time helps you to determine whether the value is increasing or decreasing, or to find the minimum, maximum, or average values.  
(The `FreeLocalStorage` metric doesn't apply to Aurora Serverless v2.)
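When comparing a `FreeLocalStorage` sample against the table, note that the metric reports bytes while the table uses GiB. A minimal conversion sketch, with a hypothetical sample value:

```python
# FreeLocalStorage is reported in bytes; the table above is in GiB.
def bytes_to_gib(value_bytes: float) -> float:
    return value_bytes / 2**30

# A hypothetical metric sample from a db.r5.large (32 GiB local storage limit):
sample = 27917287424  # bytes
print(round(bytes_to_gib(sample), 1))  # 26.0 GiB free
```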

# Backtracking an Aurora DB cluster
<a name="AuroraMySQL.Managing.Backtrack"></a>

With Amazon Aurora MySQL-Compatible Edition, you can backtrack a DB cluster to a specific time, without restoring data from a backup.

**Contents**
+ [Overview of backtracking](#AuroraMySQL.Managing.Backtrack.Overview)
  + [Backtrack window](#AuroraMySQL.Managing.Backtrack.Overview.BacktrackWindow)
  + [Backtracking time](#AuroraMySQL.Managing.Backtrack.Overview.BacktrackTime)
  + [Backtracking limitations](#AuroraMySQL.Managing.Backtrack.Limitations)
+ [Region and version availability](#AuroraMySQL.Managing.Backtrack.Availability)
+ [Upgrade considerations for backtrack-enabled clusters](#AuroraMySQL.Managing.Backtrack.Upgrade)
+ [Configuring backtracking for an Aurora MySQL DB cluster](AuroraMySQL.Managing.Backtrack.Configuring.md)
+ [Performing a backtrack for an Aurora MySQL DB cluster](AuroraMySQL.Managing.Backtrack.Performing0.md)
+ [Monitoring backtracking for an Aurora MySQL DB cluster](AuroraMySQL.Managing.Backtrack.Monitoring.md)
+ [Subscribing to a backtrack event with the console](#AuroraMySQL.Managing.Backtrack.Event.Console)
+ [Retrieving existing backtracks](#AuroraMySQL.Managing.Backtrack.Retrieving)
+ [Disabling backtracking for an Aurora MySQL DB cluster](AuroraMySQL.Managing.Backtrack.Disabling.md)

## Overview of backtracking
<a name="AuroraMySQL.Managing.Backtrack.Overview"></a>

Backtracking "rewinds" the DB cluster to the time you specify. Backtracking is not a replacement for backing up your DB cluster so that you can restore it to a point in time. However, backtracking provides the following advantages over traditional backup and restore:
+ You can easily undo mistakes. If you mistakenly perform a destructive action, such as a DELETE without a WHERE clause, you can backtrack the DB cluster to a time before the destructive action with minimal interruption of service.
+ You can backtrack a DB cluster quickly. Restoring a DB cluster to a point in time launches a new DB cluster and restores it from backup data or a DB cluster snapshot, which can take hours. Backtracking a DB cluster doesn't require a new DB cluster and rewinds the DB cluster in minutes.
+ You can explore earlier data changes. You can repeatedly backtrack a DB cluster back and forth in time to help determine when a particular data change occurred. For example, you can backtrack a DB cluster three hours and then backtrack forward in time one hour. In this case, the backtrack time is two hours before the original time.
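The back-and-forth arithmetic in the last point can be sketched with plain timestamp math (illustrative values only):

```python
from datetime import datetime, timedelta, timezone

# Backtrack three hours, then backtrack "forward" one hour: the net effect
# is a backtrack time two hours before the original time.
original = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
first_backtrack = original - timedelta(hours=3)
second_backtrack = first_backtrack + timedelta(hours=1)
print(original - second_backtrack)  # 2:00:00
```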

**Note**  
For information about restoring a DB cluster to a point in time, see [Overview of backing up and restoring an Aurora DB cluster](Aurora.Managing.Backups.md).

### Backtrack window
<a name="AuroraMySQL.Managing.Backtrack.Overview.BacktrackWindow"></a>

With backtracking, there is a target backtrack window and an actual backtrack window:
+ The *target backtrack window* is the amount of time you want to be able to backtrack your DB cluster. When you enable backtracking, you specify a *target backtrack window*. For example, you might specify a target backtrack window of 24 hours if you want to be able to backtrack the DB cluster one day.
+ The *actual backtrack window* is the actual amount of time you can backtrack your DB cluster, which can be smaller than the target backtrack window. The actual backtrack window is based on your workload and the storage available for storing information about database changes, called *change records.*

As you make updates to your Aurora DB cluster with backtracking enabled, you generate change records. Aurora retains change records for the target backtrack window, and you pay an hourly rate for storing them. Both the target backtrack window and the workload on your DB cluster determine the number of change records you store. The workload is the number of changes you make to your DB cluster in a given amount of time. If your workload is heavy, you store more change records in your backtrack window than you do if your workload is light.

You can think of your target backtrack window as the goal for the maximum amount of time you want to be able to backtrack your DB cluster. In most cases, you can backtrack the maximum amount of time that you specified. However, in some cases, the DB cluster can't store enough change records to backtrack the maximum amount of time, and your actual backtrack window is smaller than your target. Typically, the actual backtrack window is smaller than the target when you have extremely heavy workload on your DB cluster. When your actual backtrack window is smaller than your target, we send you a notification.

When backtracking is enabled for a DB cluster, and you delete a table stored in the DB cluster, Aurora keeps that table in the backtrack change records. It does this so that you can revert back to a time before you deleted the table. If you don't have enough space in your backtrack window to store the table, the table might be removed from the backtrack change records eventually.

### Backtracking time
<a name="AuroraMySQL.Managing.Backtrack.Overview.BacktrackTime"></a>

Aurora always backtracks to a time that is consistent for the DB cluster. Doing so eliminates the possibility of uncommitted transactions when the backtrack is complete. When you specify a time for a backtrack, Aurora automatically chooses the nearest possible consistent time. This approach means that the completed backtrack might not exactly match the time you specify, but you can determine the exact time for a backtrack by using the [describe-db-cluster-backtracks](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-cluster-backtracks.html) AWS CLI command. For more information, see [Retrieving existing backtracks](#AuroraMySQL.Managing.Backtrack.Retrieving).

### Backtracking limitations
<a name="AuroraMySQL.Managing.Backtrack.Limitations"></a>

The following limitations apply to backtracking:
+ Backtracking is only available for DB clusters that were created with the Backtrack feature enabled. You can't modify a DB cluster to enable the Backtrack feature. You can enable the Backtrack feature when you create a new DB cluster or restore a snapshot of a DB cluster.
+ The limit for a backtrack window is 72 hours.
+ Backtracking affects the entire DB cluster. For example, you can't selectively backtrack a single table or a single data update.
+ You can't create cross-Region read replicas from a backtrack-enabled cluster, but you can still enable binary log (binlog) replication on the cluster. If you try to backtrack a DB cluster for which binary logging is enabled, an error typically occurs unless you choose to force the backtrack. Forcing a backtrack breaks downstream read replicas and can interfere with other operations that use binary logging, such as blue/green deployments.
+ You can't backtrack a database clone to a time before that database clone was created. However, you can use the original database to backtrack to a time before the clone was created. For more information about database cloning, see [Cloning a volume for an Amazon Aurora DB cluster](Aurora.Managing.Clone.md).
+ Backtracking causes a brief DB instance disruption. You must stop or pause your applications before starting a backtrack operation to ensure that there are no new read or write requests. During the backtrack operation, Aurora pauses the database, closes any open connections, and drops any uncommitted reads and writes. It then waits for the backtrack operation to complete.
+ You can't restore a cross-Region snapshot of a backtrack-enabled cluster in an AWS Region that doesn't support backtracking.
+ If you perform an in-place upgrade for a backtrack-enabled cluster from Aurora MySQL version 2 to version 3, you can't backtrack to a point in time before the upgrade happened.

## Region and version availability
<a name="AuroraMySQL.Managing.Backtrack.Availability"></a>

Backtrack is not available for Aurora PostgreSQL.

Following are the supported engines and Region availability for Backtrack with Aurora MySQL.


| Region | Aurora MySQL version 3 | Aurora MySQL version 2 | 
| --- | --- | --- | 
| US East (N. Virginia) | All versions | All versions | 
| US East (Ohio) | All versions | All versions | 
| US West (N. California) | All versions | All versions | 
| US West (Oregon) | All versions | All versions | 
| Africa (Cape Town) | – | – | 
| Asia Pacific (Hong Kong) | – | – | 
| Asia Pacific (Jakarta) | – | – | 
| Asia Pacific (Malaysia) | – | – | 
| Asia Pacific (Melbourne) | – | – | 
| Asia Pacific (Mumbai) | All versions | All versions | 
| Asia Pacific (New Zealand) | – | – | 
| Asia Pacific (Osaka) | All versions | Version 2.07.3 and higher | 
| Asia Pacific (Seoul) | All versions | All versions | 
| Asia Pacific (Singapore) | All versions | All versions | 
| Asia Pacific (Sydney) | All versions | All versions | 
| Asia Pacific (Taipei) | – | – | 
| Asia Pacific (Thailand) | – | – | 
| Asia Pacific (Tokyo) | All versions | All versions | 
| Canada (Central) | All versions | All versions | 
| Canada West (Calgary) | – | – | 
| China (Beijing) | – | – | 
| China (Ningxia) | – | – | 
| Europe (Frankfurt) | All versions | All versions | 
| Europe (Ireland) | All versions | All versions | 
| Europe (London) | All versions | All versions | 
| Europe (Milan) | – | – | 
| Europe (Paris) | All versions | All versions | 
| Europe (Spain) | – | – | 
| Europe (Stockholm) | – | – | 
| Europe (Zurich) | – | – | 
| Israel (Tel Aviv) | – | – | 
| Mexico (Central) | – | – | 
| Middle East (Bahrain) | – | – | 
| Middle East (UAE) | – | – | 
| South America (São Paulo) | – | – | 
| AWS GovCloud (US-East) | – | – | 
| AWS GovCloud (US-West) | – | – | 

## Upgrade considerations for backtrack-enabled clusters
<a name="AuroraMySQL.Managing.Backtrack.Upgrade"></a>

You can upgrade a backtrack-enabled DB cluster from Aurora MySQL version 2 to version 3, because all minor versions of Aurora MySQL version 3 are supported for Backtrack.

# Configuring backtracking for an Aurora MySQL DB cluster
<a name="AuroraMySQL.Managing.Backtrack.Configuring"></a>

To use the Backtrack feature, you must enable backtracking and specify a target backtrack window. Otherwise, backtracking is disabled.

For the target backtrack window, specify the amount of time that you want to be able to rewind your database using Backtrack. Aurora tries to retain enough change records to support that window of time.
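In the AWS CLI and RDS API, the target backtrack window is expressed in seconds. A small helper sketch for the conversion (the function name is illustrative; the 72-hour ceiling is the limit described in this chapter):

```python
# Convert a target backtrack window in hours to the seconds value used by
# --backtrack-window / BacktrackWindow. The limit for a backtrack window
# is 72 hours; 0 turns backtracking off.
def backtrack_window_seconds(hours: int) -> int:
    if not 0 <= hours <= 72:
        raise ValueError("target backtrack window must be between 0 and 72 hours")
    return hours * 3600

print(backtrack_window_seconds(24))  # 86400, the one-day value used in the CLI example
```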

## Console
<a name="AuroraMySQL.Managing.Backtrack.Configuring.Console"></a>

You can use the console to configure backtracking when you create a new DB cluster. You can also modify a DB cluster to change the backtrack window for a backtrack-enabled cluster. If you turn off backtracking entirely for a cluster by setting the backtrack window to 0, you can't enable backtrack again for that cluster.

**Topics**
+ [Configuring backtracking with the console when creating a DB cluster](#AuroraMySQL.Managing.Backtrack.Configuring.Console.Creating)
+ [Configuring backtrack with the console when modifying a DB cluster](#AuroraMySQL.Managing.Backtrack.Configuring.Console.Modifying)

### Configuring backtracking with the console when creating a DB cluster
<a name="AuroraMySQL.Managing.Backtrack.Configuring.Console.Creating"></a>

When you create a new Aurora MySQL DB cluster, backtracking is configured when you choose **Enable Backtrack** and specify a **Target Backtrack window** value that is greater than zero in the **Backtrack** section.

To create a DB cluster, follow the instructions in [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). The following image shows the **Backtrack** section.

![\[Enable Backtrack during DB cluster creation with console\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-create.png)


When you create a new DB cluster, Aurora has no data for the DB cluster's workload. So it can't estimate a cost specifically for the new DB cluster. Instead, the console presents a typical user cost for the specified target backtrack window based on a typical workload. The typical cost is meant to provide a general reference for the cost of the Backtrack feature.

**Important**  
Your actual cost might not match the typical cost, because your actual cost is based on your DB cluster's workload.

### Configuring backtrack with the console when modifying a DB cluster
<a name="AuroraMySQL.Managing.Backtrack.Configuring.Console.Modifying"></a>

You can modify backtracking for a DB cluster using the console.

**Note**  
Currently, you can modify backtracking only for a DB cluster that has the Backtrack feature enabled. The **Backtrack** section doesn't appear for a DB cluster that was created with the Backtrack feature disabled or if the Backtrack feature has been disabled for the DB cluster.

**To modify backtracking for a DB cluster using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the cluster that you want to modify, and choose **Modify**.

1. For **Target Backtrack window**, modify the amount of time that you want to be able to backtrack. The limit is 72 hours.  
![\[Modify Backtrack with console\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-modify.png)

   The console shows the estimated cost for the amount of time you specified based on the DB cluster's past workload:
   + If backtracking was disabled on the DB cluster, the cost estimate is based on the `VolumeWriteIOPS` metric for the DB cluster in Amazon CloudWatch.
   + If backtracking was enabled previously on the DB cluster, the cost estimate is based on the `BacktrackChangeRecordsCreationRate` metric for the DB cluster in Amazon CloudWatch.

1. Choose **Continue**.

1. For **Scheduling of Modifications**, choose one of the following:
   + **Apply during the next scheduled maintenance window** – Wait to apply the **Target Backtrack window** modification until the next maintenance window.
   + **Apply immediately** – Apply the **Target Backtrack window** modification as soon as possible.

1. Choose **Modify cluster**.

## AWS CLI
<a name="AuroraMySQL.Managing.Backtrack.Configuring.CLI"></a>

When you create a new Aurora MySQL DB cluster using the [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) AWS CLI command, backtracking is configured when you specify a `--backtrack-window` value that is greater than zero. The `--backtrack-window` value specifies the target backtrack window. For more information, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

You can also specify the `--backtrack-window` value using the following AWS CLI commands:
+  [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) 
+  [restore-db-cluster-from-s3](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-s3.html) 
+  [restore-db-cluster-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html) 
+  [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html) 

The following procedure describes how to modify the target backtrack window for a DB cluster using the AWS CLI.

**To modify the target backtrack window for a DB cluster using the AWS CLI**
+ Call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.
  + `--backtrack-window` – The maximum number of seconds that you want to be able to backtrack the DB cluster.

  The following example sets the target backtrack window for `sample-cluster` to one day (86,400 seconds).

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-cluster \
      --db-cluster-identifier sample-cluster \
      --backtrack-window 86400
  ```

  For Windows:

  ```
  aws rds modify-db-cluster ^
      --db-cluster-identifier sample-cluster ^
      --backtrack-window 86400
  ```

**Note**  
Currently, you can enable backtracking only for a DB cluster that was created with the Backtrack feature enabled.

## RDS API
<a name="AuroraMySQL.Managing.Backtrack.Configuring.API"></a>

When you create a new Aurora MySQL DB cluster using the [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) Amazon RDS API operation, backtracking is configured when you specify a `BacktrackWindow` value that is greater than zero. The `BacktrackWindow` value specifies the target backtrack window for the DB cluster specified in the `DBClusterIdentifier` value. For more information, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

You can also specify the `BacktrackWindow` value using the following API operations:
+  [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) 
+  [RestoreDBClusterFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromS3.html) 
+  [RestoreDBClusterFromSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html) 
+  [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html) 

**Note**  
Currently, you can enable backtracking only for a DB cluster that was created with the Backtrack feature enabled.

# Performing a backtrack for an Aurora MySQL DB cluster
<a name="AuroraMySQL.Managing.Backtrack.Performing0"></a>

You can backtrack a DB cluster to a specified backtrack time stamp. If the backtrack time stamp isn't earlier than the earliest possible backtrack time and isn't in the future, the DB cluster is backtracked to that time stamp. Otherwise, an error typically occurs. An error also typically occurs if you try to backtrack a DB cluster for which binary logging is enabled, unless you've chosen to force the backtrack. Forcing a backtrack can interfere with other operations that use binary logging.
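The validity conditions above amount to a simple range check. A hypothetical pre-check sketch (Aurora performs this validation server-side; the function name is illustrative):

```python
from datetime import datetime, timezone

# A backtrack target is valid only if it isn't earlier than the earliest
# possible backtrack time and isn't in the future.
def is_valid_backtrack_target(target, earliest, now):
    return earliest <= target <= now

earliest = datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
now = datetime(2024, 1, 2, 0, 0, tzinfo=timezone.utc)
target = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(is_valid_backtrack_target(target, earliest, now))  # True
```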

**Important**  
Backtracking doesn't generate binlog entries for the changes that it makes. If you have binary logging enabled for the DB cluster, backtracking might not be compatible with your binlog implementation.

**Note**  
For database clones, you can't backtrack the DB cluster earlier than the date and time when the clone was created. For more information about database cloning, see [Cloning a volume for an Amazon Aurora DB cluster](Aurora.Managing.Clone.md).

## Console
<a name="AuroraMySQL.Managing.Backtrack.Performing.Console"></a>

The following procedure describes how to perform a backtrack operation for a DB cluster using the console.

**To perform a backtrack operation using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Instances**.

1. Choose the primary instance for the DB cluster that you want to backtrack.

1. For **Actions**, choose **Backtrack DB cluster**.

1. On the **Backtrack DB cluster** page, enter the backtrack time stamp to backtrack the DB cluster to.  
![\[Backtrack DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-db-cluster.png)

1. Choose **Backtrack DB cluster**.

## AWS CLI
<a name="AuroraMySQL.Managing.Backtrack.Performing.CLI"></a>

The following procedure describes how to backtrack a DB cluster using the AWS CLI.

**To backtrack a DB cluster using the AWS CLI**
+ Call the [backtrack-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/backtrack-db-cluster.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.
  + `--backtrack-to` – The backtrack time stamp to backtrack the DB cluster to, specified in ISO 8601 format.

  The following example backtracks the DB cluster `sample-cluster` to March 19, 2018, at 10 a.m. UTC.

  For Linux, macOS, or Unix:

  ```
  aws rds backtrack-db-cluster \
      --db-cluster-identifier sample-cluster \
      --backtrack-to 2018-03-19T10:00:00+00:00
  ```

  For Windows:

  ```
  aws rds backtrack-db-cluster ^
      --db-cluster-identifier sample-cluster ^
      --backtrack-to 2018-03-19T10:00:00+00:00
  ```

## RDS API
<a name="AuroraMySQL.Managing.Backtrack.Performing.API"></a>

To backtrack a DB cluster using the Amazon RDS API, use the [BacktrackDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_BacktrackDBCluster.html) operation. This operation backtracks the DB cluster specified in the `DBClusterIdentifier` value to the specified time.

# Monitoring backtracking for an Aurora MySQL DB cluster
<a name="AuroraMySQL.Managing.Backtrack.Monitoring"></a>

You can view backtracking information and monitor backtracking metrics for a DB cluster.

## Console
<a name="AuroraMySQL.Managing.Backtrack.Describing.Console"></a>

**To view backtracking information and monitor backtracking metrics using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the DB cluster name to open information about it.

   The backtrack information is in the **Backtrack** section.  
![\[Backtrack details for a DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-details.png)

   When backtracking is enabled, the following information is available:
   + **Target window** – The current amount of time specified for the target backtrack window. The target is the maximum amount of time that you can backtrack if there is sufficient storage.
   + **Actual window** – The actual amount of time you can backtrack, which can be smaller than the target backtrack window. The actual backtrack window is based on your workload and the storage available for retaining backtrack change records.
   + **Earliest backtrack time** – The earliest possible backtrack time for the DB cluster. You can't backtrack the DB cluster to a time before the displayed time.

1. Do the following to view backtracking metrics for the DB cluster:

   1. In the navigation pane, choose **Instances**.

   1. Choose the name of the primary instance for the DB cluster to display its details.

   1. In the **CloudWatch** section, type **Backtrack** into the **CloudWatch** box to show only the Backtrack metrics.  
![\[Backtrack metrics\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-metrics.png)

      The following metrics are displayed:
      + **Backtrack Change Records Creation Rate (Count)** – This metric shows the number of backtrack change records created over a five-minute period for your DB cluster. You can use this metric to estimate the backtrack cost for your target backtrack window.
      + **[Billed] Backtrack Change Records Stored (Count)** – This metric shows the actual number of backtrack change records used by your DB cluster.
      + **Backtrack Window Actual (Minutes)** – This metric shows whether there is a difference between the target backtrack window and the actual backtrack window. For example, if your target backtrack window is 2 hours (120 minutes), and this metric shows that the actual backtrack window is 100 minutes, then the actual backtrack window is smaller than the target.
      + **Backtrack Window Alert (Count)** – This metric shows how often the actual backtrack window is smaller than the target backtrack window for a given period of time.
**Note**  
The following metrics might lag behind the current time:  
**Backtrack Change Records Creation Rate (Count)**
**[Billed] Backtrack Change Records Stored (Count)**
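The creation-rate metric above supports a rough cost estimate. The following sketch uses hypothetical numbers (actual billing depends on your workload and the current Aurora pricing for change records): it multiplies the per-five-minute creation rate by the number of five-minute periods in the target backtrack window.

```shell
RATE_PER_5MIN=1000      # hypothetical Backtrack Change Records Creation Rate
WINDOW_HOURS=24         # hypothetical target backtrack window, in hours
PERIODS=$(( WINDOW_HOURS * 12 ))             # twelve 5-minute periods per hour
EST_RECORDS=$(( RATE_PER_5MIN * PERIODS ))   # estimated change records retained
echo "$EST_RECORDS"
```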

## AWS CLI
<a name="AuroraMySQL.Managing.Backtrack.Describing.CLI"></a>

The following procedure describes how to view backtrack information for a DB cluster using the AWS CLI.

**To view backtrack information for a DB cluster using the AWS CLI**
+ Call the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.

  The following example lists backtrack information for `sample-cluster`.

  For Linux, macOS, or Unix:

  ```
  aws rds describe-db-clusters \
      --db-cluster-identifier sample-cluster
  ```

  For Windows:

  ```
  aws rds describe-db-clusters ^
      --db-cluster-identifier sample-cluster
  ```

## RDS API
<a name="AuroraMySQL.Managing.Backtrack.Describing.API"></a>

To view backtrack information for a DB cluster using the Amazon RDS API, use the [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) operation. This operation returns backtrack information for the DB cluster specified in the `DBClusterIdentifier` value.

## Subscribing to a backtrack event with the console
<a name="AuroraMySQL.Managing.Backtrack.Event.Console"></a>

The following procedure describes how to subscribe to a backtrack event using the console. The event sends you an email or text notification when your actual backtrack window is smaller than your target backtrack window.

**To view backtrack information using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Event subscriptions**.

1. Choose **Create event subscription**.

1. In the **Name** box, type a name for the event subscription, and ensure that **Yes** is selected for **Enabled**.

1. In the **Target** section, choose **New email topic**.

1. For **Topic name**, type a name for the topic, and for **With these recipients**, enter the email addresses or phone numbers to receive the notifications.

1. In the **Source** section, choose **Instances** for **Source type**.

1. For **Instances to include**, choose **Select specific instances**, and choose your DB instance.

1. For **Event categories to include**, choose **Select specific event categories**, and choose **backtrack**.

   Your page should look similar to the following page.  
![\[Backtrack event subscription\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-backtrack-event.png)

1. Choose **Create**.

## Retrieving existing backtracks
<a name="AuroraMySQL.Managing.Backtrack.Retrieving"></a>

You can retrieve information about existing backtracks for a DB cluster. This information includes the unique identifier of the backtrack, the date and time backtracked to and from, the date and time the backtrack was requested, and the current status of the backtrack.

**Note**  
Currently, you can't retrieve existing backtracks using the console.

### AWS CLI
<a name="AuroraMySQL.Managing.Backtrack.Retrieving.CLI"></a>

The following procedure describes how to retrieve existing backtracks for a DB cluster using the AWS CLI.

**To retrieve existing backtracks using the AWS CLI**
+ Call the [describe-db-cluster-backtracks](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-cluster-backtracks.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.

  The following example retrieves existing backtracks for `sample-cluster`.

  For Linux, macOS, or Unix:

  ```
  aws rds describe-db-cluster-backtracks \
      --db-cluster-identifier sample-cluster
  ```

  For Windows:

  ```
  aws rds describe-db-cluster-backtracks ^
      --db-cluster-identifier sample-cluster
  ```

### RDS API
<a name="AuroraMySQL.Managing.Backtrack.Retrieving.API"></a>

To retrieve information about the backtracks for a DB cluster using the Amazon RDS API, use the [DescribeDBClusterBacktracks](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusterBacktracks.html) operation. This operation returns information about backtracks for the DB cluster specified in the `DBClusterIdentifier` value.

# Disabling backtracking for an Aurora MySQL DB cluster
<a name="AuroraMySQL.Managing.Backtrack.Disabling"></a>

You can disable the Backtrack feature for a DB cluster.

## Console
<a name="AuroraMySQL.Managing.Backtrack.Disabling.Console"></a>

You can disable backtracking for a DB cluster using the console. After you turn off backtracking entirely for a cluster, you can't enable it again for that cluster.

**To disable the Backtrack feature for a DB cluster using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the cluster that you want to modify, and choose **Modify**.

1. In the **Backtrack** section, choose **Disable Backtrack**.

1. Choose **Continue**.

1. For **Scheduling of Modifications**, choose one of the following:
   + **Apply during the next scheduled maintenance window** – Wait to apply the modification until the next maintenance window.
   + **Apply immediately** – Apply the modification as soon as possible.

1. Choose **Modify Cluster**.

## AWS CLI
<a name="AuroraMySQL.Managing.Backtrack.Disabling.CLI"></a>

You can disable the Backtrack feature for a DB cluster using the AWS CLI by setting the target backtrack window to `0` (zero). After you turn off backtracking entirely for a cluster, you can't enable it again for that cluster.

**To modify the target backtrack window for a DB cluster using the AWS CLI**
+ Call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.
  + `--backtrack-window` – Specify `0` to turn off backtracking.

  The following example disables the Backtrack feature for the `sample-cluster` by setting `--backtrack-window` to `0`.

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-cluster \
      --db-cluster-identifier sample-cluster \
      --backtrack-window 0
  ```

  For Windows:

  ```
  aws rds modify-db-cluster ^
      --db-cluster-identifier sample-cluster ^
      --backtrack-window 0
  ```

## RDS API
<a name="AuroraMySQL.Managing.Backtrack.Disabling.API"></a>

To disable the Backtrack feature for a DB cluster using the Amazon RDS API, use the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation. Set the `BacktrackWindow` value to `0` (zero), and specify the DB cluster in the `DBClusterIdentifier` value. After you turn off backtracking entirely for a cluster, you can't enable it again for that cluster.

# Testing Amazon Aurora MySQL using fault injection queries
<a name="AuroraMySQL.Managing.FaultInjectionQueries"></a>

You can test the fault tolerance of your Aurora MySQL DB cluster by using fault injection queries. Fault injection queries are issued as SQL commands to an Amazon Aurora instance. They let you schedule a simulated occurrence of one of the following events:
+ A crash of a writer or reader DB instance
+ A failure of an Aurora Replica
+ A disk failure
+ Disk congestion

When a fault injection query specifies a crash, it forces a crash of the Aurora MySQL DB instance. The other fault injection queries result in simulations of failure events, but don't cause the event to occur. When you submit a fault injection query, you also specify the length of time for which to simulate the failure event.

You can submit a fault injection query to one of your Aurora Replica instances by connecting to the endpoint for the Aurora Replica. For more information, see [Amazon Aurora endpoint connections](Aurora.Overview.Endpoints.md).

Running fault injection queries requires the privileges of the master user. For more information, see [Master user account privileges](UsingWithRDS.MasterAccounts.md).

## Testing an instance crash
<a name="AuroraMySQL.Managing.FaultInjectionQueries.Crash"></a>

You can force a crash of an Amazon Aurora instance using the `ALTER SYSTEM CRASH` fault injection query.

For this fault injection query, a failover doesn't occur. If you want to test a failover, you can choose the **Failover** instance action for your DB cluster in the RDS console, or use the [failover-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/failover-db-cluster.html) AWS CLI command or the [FailoverDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_FailoverDBCluster.html) RDS API operation.

### Syntax
<a name="AuroraMySQL.Managing.FaultInjectionQueries.Crash-Syntax"></a>

```
ALTER SYSTEM CRASH [ INSTANCE | DISPATCHER | NODE ];
```

### Options
<a name="AuroraMySQL.Managing.FaultInjectionQueries.Crash-Options"></a>

This fault injection query takes one of the following crash types:
+ **`INSTANCE`** — A crash of the MySQL-compatible database for the Amazon Aurora instance is simulated.
+ **`DISPATCHER`** — A crash of the dispatcher on the writer instance for the Aurora DB cluster is simulated. The *dispatcher* writes updates to the cluster volume for an Amazon Aurora DB cluster.
+ **`NODE`** — A crash of both the MySQL-compatible database and the dispatcher for the Amazon Aurora instance is simulated. For this fault injection simulation, the cache is also deleted.

The default crash type is `INSTANCE`.
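For example, the following statement simulates a crash of both the MySQL-compatible database and the dispatcher. Because the DB instance is forced to crash, your client connection is dropped and you must reconnect afterward.

```
ALTER SYSTEM CRASH NODE;
```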

## Testing an Aurora replica failure
<a name="AuroraMySQL.Managing.FaultInjectionQueries.ReplicaFailure"></a>

You can simulate the failure of an Aurora Replica using the `ALTER SYSTEM SIMULATE READ REPLICA FAILURE` fault injection query.

An Aurora Replica failure blocks all requests from the writer instance to an Aurora Replica, or to all Aurora Replicas in the DB cluster, for a specified time interval. When the time interval completes, the affected Aurora Replicas are automatically synchronized with the writer instance.

### Syntax
<a name="AuroraMySQL.Managing.FaultInjectionQueries.ReplicaFailure-Syntax"></a>

```
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT READ REPLICA FAILURE
    [ TO ALL | TO "replica name" ]
    FOR INTERVAL quantity { YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND };
```

### Options
<a name="AuroraMySQL.Managing.FaultInjectionQueries.ReplicaFailure-Options"></a>

This fault injection query takes the following parameters:
+ **`percentage_of_failure`** — The percentage of requests to block during the failure event. This value can be a double between 0 and 100. If you specify 0, then no requests are blocked. If you specify 100, then all requests are blocked.
+ **Failure type** — The type of failure to simulate. Specify `TO ALL` to simulate failures for all Aurora Replicas in the DB cluster. Specify `TO` and the name of the Aurora Replica to simulate a failure of a single Aurora Replica. The default failure type is `TO ALL`.
+ **`quantity`** — The amount of time for which to simulate the Aurora Replica failure. The interval is an amount followed by a time unit. The simulation runs for that amount of the specified unit. For example, `20 MINUTE` results in the simulation running for 20 minutes.
**Note**  
Take care when specifying the time interval for your Aurora Replica failure event. If you specify too long a time interval, and your writer instance writes a large amount of data during the failure event, then your Aurora DB cluster might assume that your Aurora Replica has crashed and replace it.
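For example, the following statement simulates a failure of 25 percent of read requests to all Aurora Replicas in the DB cluster for five minutes. The percentage and interval values here are illustrative.

```
ALTER SYSTEM SIMULATE 25 PERCENT READ REPLICA FAILURE TO ALL FOR INTERVAL 5 MINUTE;
```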

## Testing a disk failure
<a name="AuroraMySQL.Managing.FaultInjectionQueries.DiskFailure"></a>

You can simulate a disk failure for an Aurora DB cluster using the `ALTER SYSTEM SIMULATE DISK FAILURE` fault injection query.

During a disk failure simulation, the Aurora DB cluster randomly marks disk segments as faulting. Requests to those segments are blocked for the duration of the simulation.

### Syntax
<a name="AuroraMySQL.Managing.FaultInjectionQueries.DiskFailure-Syntax"></a>

```
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT DISK FAILURE
    [ IN DISK index | NODE index ]
    FOR INTERVAL quantity { YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND };
```

### Options
<a name="AuroraMySQL.Managing.FaultInjectionQueries.DiskFailure-Options"></a>

This fault injection query takes the following parameters:
+ **`percentage_of_failure`** — The percentage of the disk to mark as faulting during the failure event. This value can be a double between 0 and 100. If you specify 0, then none of the disk is marked as faulting. If you specify 100, then the entire disk is marked as faulting.
+ **`DISK index`** — A specific logical block of data to simulate the failure event for. If you exceed the range of available logical blocks of data, you receive an error that tells you the maximum index value that you can specify. For more information, see [Displaying volume status for an Aurora MySQL DB cluster](AuroraMySQL.Managing.VolumeStatus.md).
+ **`NODE index`** — A specific storage node to simulate the failure event for. If you exceed the range of available storage nodes, you receive an error that tells you the maximum index value that you can specify. For more information, see [Displaying volume status for an Aurora MySQL DB cluster](AuroraMySQL.Managing.VolumeStatus.md).
+ **`quantity`** — The amount of time for which to simulate the disk failure. The interval is an amount followed by a time unit. The simulation runs for that amount of the specified unit. For example, `20 MINUTE` results in the simulation running for 20 minutes.
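For example, the following statement marks 50 percent of the disk segments in logical block 0 as faulting for two minutes. The percentage, index, and interval values here are illustrative.

```
ALTER SYSTEM SIMULATE 50 PERCENT DISK FAILURE IN DISK 0 FOR INTERVAL 2 MINUTE;
```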

## Testing disk congestion
<a name="AuroraMySQL.Managing.FaultInjectionQueries.DiskCongestion"></a>

You can simulate disk congestion for an Aurora DB cluster using the `ALTER SYSTEM SIMULATE DISK CONGESTION` fault injection query.

During a disk congestion simulation, the Aurora DB cluster randomly marks disk segments as congested. Requests to those segments are delayed by between the specified minimum and maximum delay times for the duration of the simulation.

### Syntax
<a name="AuroraMySQL.Managing.FaultInjectionQueries.DiskCongestion-Syntax"></a>

```
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT DISK CONGESTION
    BETWEEN minimum AND maximum MILLISECONDS
    [ IN DISK index | NODE index ]
    FOR INTERVAL quantity { YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND };
```

### Options
<a name="AuroraMySQL.Managing.FaultInjectionQueries.DiskCongestion-Options"></a>

This fault injection query takes the following parameters:
+ **`percentage_of_failure`** — The percentage of the disk to mark as congested during the failure event. This value can be a double between 0 and 100. If you specify 0, then none of the disk is marked as congested. If you specify 100, then the entire disk is marked as congested.
+ **`DISK index` or `NODE index`** — A specific disk or node to simulate the failure event for. If you exceed the range of indexes for the disk or node, you receive an error that tells you the maximum index value that you can specify.
+ **`minimum` and `maximum`** — The minimum and maximum amount of congestion delay, in milliseconds. Disk segments marked as congested are delayed for a random length of time within the range of the minimum and maximum number of milliseconds for the duration of the simulation.
+ **`quantity`** — The amount of time for which to simulate the disk congestion. The interval is an amount followed by a time unit. The simulation runs for that amount of the specified time unit. For example, `20 MINUTE` results in the simulation running for 20 minutes.
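For example, the following statement marks 75 percent of disk segments as congested for one minute, delaying each request to those segments by between 50 and 500 milliseconds. All of the values here are illustrative.

```
ALTER SYSTEM SIMULATE 75 PERCENT DISK CONGESTION
    BETWEEN 50 AND 500 MILLISECONDS
    FOR INTERVAL 1 MINUTE;
```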

# Altering tables in Amazon Aurora using Fast DDL
<a name="AuroraMySQL.Managing.FastDDL"></a>

Amazon Aurora includes optimizations to run an `ALTER TABLE` operation in place, nearly instantaneously. The operation completes without requiring the table to be copied and without having a material impact on other DML statements. Because the operation doesn't consume temporary storage for a table copy, it makes DDL statements practical even for large tables on small instance classes.

Aurora MySQL version 3 is compatible with the MySQL 8.0 feature called instant DDL. Aurora MySQL version 2 uses a different implementation called Fast DDL.

**Topics**
+ [Instant DDL (Aurora MySQL version 3)](#AuroraMySQL.mysql80-instant-ddl)
+ [Fast DDL (Aurora MySQL version 2)](#AuroraMySQL.Managing.FastDDL-v2)

## Instant DDL (Aurora MySQL version 3)
<a name="AuroraMySQL.mysql80-instant-ddl"></a><a name="instant_ddl"></a>

 The optimization performed by Aurora MySQL version 3 to improve the efficiency of some DDL operations is called instant DDL. 

 Aurora MySQL version 3 is compatible with the instant DDL from community MySQL 8.0. You perform an instant DDL operation by using the clause `ALGORITHM=INSTANT` with the `ALTER TABLE` statement. For syntax and usage details about instant DDL, see [ALTER TABLE](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html) and [Online DDL Operations](https://dev.mysql.com/doc/refman/8.0/en/innodb-online-ddl-operations.html) in the MySQL documentation. 

 The following examples demonstrate the instant DDL feature. The `ALTER TABLE` statements add columns and change default column values. The examples include both regular and virtual columns, and both regular and partitioned tables. At each step, you can see the results by issuing `SHOW CREATE TABLE` and `DESCRIBE` statements. 

```
mysql> CREATE TABLE t1 (a INT, b INT, KEY(b)) PARTITION BY KEY(b) PARTITIONS 6;
Query OK, 0 rows affected (0.02 sec)

mysql> ALTER TABLE t1 RENAME TO t2, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER TABLE t2 ALTER COLUMN b SET DEFAULT 100, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.00 sec)

mysql> ALTER TABLE t2 ALTER COLUMN b DROP DEFAULT, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER TABLE t2 ADD COLUMN c ENUM('a', 'b', 'c'), ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER TABLE t2 MODIFY COLUMN c ENUM('a', 'b', 'c', 'd', 'e'), ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER TABLE t2 ADD COLUMN (d INT GENERATED ALWAYS AS (a + 1) VIRTUAL), ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.02 sec)

mysql> ALTER TABLE t2 ALTER COLUMN a SET DEFAULT 20,
    ->   ALTER COLUMN b SET DEFAULT 200, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t3 (a INT, b INT) PARTITION BY LIST(a)(
    ->   PARTITION mypart1 VALUES IN (1,3,5),
    ->   PARTITION MyPart2 VALUES IN (2,4,6)
    -> );
Query OK, 0 rows affected (0.03 sec)

mysql> ALTER TABLE t3 ALTER COLUMN a SET DEFAULT 20, ALTER COLUMN b SET DEFAULT 200, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t4 (a INT, b INT) PARTITION BY RANGE(a)
    ->   (PARTITION p0 VALUES LESS THAN(100), PARTITION p1 VALUES LESS THAN(1000),
    ->   PARTITION p2 VALUES LESS THAN MAXVALUE);
Query OK, 0 rows affected (0.05 sec)

mysql> ALTER TABLE t4 ALTER COLUMN a SET DEFAULT 20,
    ->   ALTER COLUMN b SET DEFAULT 200, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)

/* Sub-partitioning example */
mysql> CREATE TABLE ts (id INT, purchased DATE, a INT, b INT)
    ->   PARTITION BY RANGE( YEAR(purchased) )
    ->     SUBPARTITION BY HASH( TO_DAYS(purchased) )
    ->     SUBPARTITIONS 2 (
    ->       PARTITION p0 VALUES LESS THAN (1990),
    ->       PARTITION p1 VALUES LESS THAN (2000),
    ->       PARTITION p2 VALUES LESS THAN MAXVALUE
    ->    );
Query OK, 0 rows affected (0.10 sec)

mysql> ALTER TABLE ts ALTER COLUMN a SET DEFAULT 20,
    ->   ALTER COLUMN b SET DEFAULT 200, ALGORITHM = INSTANT;
Query OK, 0 rows affected (0.01 sec)
```

## Fast DDL (Aurora MySQL version 2)
<a name="AuroraMySQL.Managing.FastDDL-v2"></a>

 <a name="fast_ddl"></a>

Fast DDL in Aurora MySQL is an optimization designed to improve the performance of certain schema changes, such as adding or dropping columns, by reducing downtime and resource usage. It allows these operations to be completed more efficiently compared to traditional DDL methods.

**Important**  
Currently, you must enable Aurora lab mode to use Fast DDL. For information about enabling lab mode, see [Amazon Aurora MySQL lab mode](AuroraMySQL.Updates.LabMode.md).  
The Fast DDL optimization was initially introduced in lab mode on Aurora MySQL version 2 to enhance the efficiency of certain DDL operations. In Aurora MySQL version 3, lab mode has been discontinued, and Fast DDL has been replaced by the MySQL 8.0 Instant DDL feature.

In MySQL, many data definition language (DDL) operations have a significant performance impact.

For example, suppose that you use an `ALTER TABLE` operation to add a column to a table. Depending on the algorithm specified for the operation, this operation can involve the following:
+ Creating a full copy of the table
+ Creating a temporary table to process concurrent data manipulation language (DML) operations
+ Rebuilding all indexes for the table
+ Applying table locks while applying concurrent DML changes
+ Slowing concurrent DML throughput

This performance impact can be particularly challenging in environments with large tables or high transaction volumes. Fast DDL helps mitigate these challenges by optimizing schema changes, which enables quicker and less resource-intensive operations.

### Fast DDL limitations
<a name="AuroraMySQL.FastDDL.Limitations"></a>

Currently, Fast DDL has the following limitations:
+ Fast DDL only supports adding nullable columns, without default values, to the end of an existing table.
+ Fast DDL doesn't work for partitioned tables.
+ Fast DDL doesn't work for InnoDB tables that use the REDUNDANT row format.
+  Fast DDL doesn't work for tables with full-text search indexes. 
+ If the maximum possible record size for the DDL operation is too large, Fast DDL isn't used. A record size is too large if it's greater than half the page size. The maximum record size is computed by adding the maximum sizes of all columns. For variable-length columns, externally stored (extern) bytes aren't included in the computation, in accordance with InnoDB conventions.
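As an illustration of the record-size rule, assuming the InnoDB default page size of 16 KB (your cluster's setting can differ), a DDL operation is disqualified when the summed maximum column sizes exceed half a page:

```shell
PAGE_SIZE=16384     # assumed InnoDB default page size, in bytes
MAX_RECORD=9000     # hypothetical sum of the maximum sizes of all columns
if (( MAX_RECORD > PAGE_SIZE / 2 )); then
  FAST_DDL="not used"     # record can exceed half a page; Fast DDL is skipped
else
  FAST_DDL="eligible"
fi
echo "$FAST_DDL"
```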

### Fast DDL syntax
<a name="AuroraMySQL.FastDDL.Syntax"></a>

```
ALTER TABLE tbl_name ADD COLUMN col_name column_definition
```

This statement takes the following options:
+ **`tbl_name`** – The name of the table to modify.
+ **`col_name`** – The name of the column to add.
+ **`column_definition`** – The definition of the column to add.
**Note**  
You must specify a nullable column definition without a default value. Otherwise, Fast DDL isn't used.

### Fast DDL examples
<a name="AuroraMySQL.FastDDL.Examples"></a>

 The following examples demonstrate the speedup from Fast DDL operations. The first SQL example runs `ALTER TABLE` statements on a large table without using Fast DDL. This operation takes substantial time. A CLI example shows how to enable Fast DDL for the cluster. Then another SQL example runs the same `ALTER TABLE` statements on an identical table. With Fast DDL enabled, the operation is very fast. 

 This example uses the `ORDERS` table from the TPC-H benchmark, containing 150 million rows. This cluster intentionally uses a relatively small instance class, to demonstrate how long `ALTER TABLE` statements can take when you can't use Fast DDL. The example creates a clone of the original table containing identical data. Checking the `aurora_lab_mode` setting confirms that the cluster can't use Fast DDL, because lab mode isn't enabled. Then `ALTER TABLE ADD COLUMN` statements take substantial time to add new columns at the end of the table. 

```
mysql> create table orders_regular_ddl like orders;
Query OK, 0 rows affected (0.06 sec)

mysql> insert into orders_regular_ddl select * from orders;
Query OK, 150000000 rows affected (1 hour 1 min 25.46 sec)

mysql> select @@aurora_lab_mode;
+-------------------+
| @@aurora_lab_mode |
+-------------------+
|                 0 |
+-------------------+

mysql> ALTER TABLE orders_regular_ddl ADD COLUMN o_refunded boolean;
Query OK, 0 rows affected (40 min 31.41 sec)

mysql> ALTER TABLE orders_regular_ddl ADD COLUMN o_coverletter varchar(512);
Query OK, 0 rows affected (40 min 44.45 sec)
```

 This example does the same preparation of a large table as the previous example. However, you can't simply enable lab mode within an interactive SQL session. That setting must be enabled in a custom parameter group. Doing so requires switching out of the `mysql` session and running some AWS CLI commands or using the AWS Management Console. 

```
mysql> create table orders_fast_ddl like orders;
Query OK, 0 rows affected (0.02 sec)

mysql> insert into orders_fast_ddl select * from orders;
Query OK, 150000000 rows affected (58 min 3.25 sec)

mysql> set aurora_lab_mode=1;
ERROR 1238 (HY000): Variable 'aurora_lab_mode' is a read only variable
```

 Enabling lab mode for the cluster requires some work with a parameter group. This AWS CLI example uses a cluster parameter group, to ensure that all DB instances in the cluster use the same value for the lab mode setting. 

```
$ aws rds create-db-cluster-parameter-group \
  --db-parameter-group-family aurora-mysql5.7 \
    --db-cluster-parameter-group-name lab-mode-enabled-57 --description 'TBD'
$ aws rds describe-db-cluster-parameters \
  --db-cluster-parameter-group-name lab-mode-enabled-57 \
    --query '*[*].[ParameterName,ParameterValue]' \
      --output text | grep aurora_lab_mode
aurora_lab_mode 0
$ aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name lab-mode-enabled-57 \
    --parameters ParameterName=aurora_lab_mode,ParameterValue=1,ApplyMethod=pending-reboot
{
    "DBClusterParameterGroupName": "lab-mode-enabled-57"
}

# Assign the custom parameter group to the cluster that's going to use Fast DDL.
$ aws rds modify-db-cluster --db-cluster-identifier tpch100g \
  --db-cluster-parameter-group-name lab-mode-enabled-57
{
  "DBClusterIdentifier": "tpch100g",
  "DBClusterParameterGroup": "lab-mode-enabled-57",
  "Engine": "aurora-mysql",
  "EngineVersion": "5.7.mysql_aurora.2.10.2",
  "Status": "available"
}

# Reboot the primary instance for the cluster tpch100g:
$ aws rds reboot-db-instance --db-instance-identifier instance-2020-12-22-5208
{
  "DBInstanceIdentifier": "instance-2020-12-22-5208",
  "DBInstanceStatus": "rebooting"
}

$ aws rds describe-db-clusters --db-cluster-identifier tpch100g \
  --query '*[].[DBClusterParameterGroup]' --output text
lab-mode-enabled-57

$ aws rds describe-db-cluster-parameters \
  --db-cluster-parameter-group-name lab-mode-enabled-57 \
  --query '*[*].{ParameterName:ParameterName,ParameterValue:ParameterValue}' \
  --output text | grep aurora_lab_mode
aurora_lab_mode 1
```
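
The same parameter group workflow can also be scripted with the AWS SDK. The following sketch uses the boto3 RDS client; the group and cluster names mirror the hypothetical ones in the CLI example above, and `lab_mode_override` is an illustrative helper, not part of any AWS library.

```python
# Sketch: enable aurora_lab_mode through a DB cluster parameter group.
# Assumes boto3 is installed and AWS credentials are configured; the
# group and cluster names mirror the CLI example and are hypothetical.

def lab_mode_override(enabled):
    """Build the parameter override for aurora_lab_mode.

    ApplyMethod must be pending-reboot: the setting is static, so it
    takes effect only after the DB instances restart.
    """
    return {
        "ParameterName": "aurora_lab_mode",
        "ParameterValue": "1" if enabled else "0",
        "ApplyMethod": "pending-reboot",
    }

def enable_lab_mode(group_name="lab-mode-enabled-57", cluster_id="tpch100g"):
    import boto3  # imported lazily so the helper above stays importable

    rds = boto3.client("rds")
    rds.create_db_cluster_parameter_group(
        DBClusterParameterGroupName=group_name,
        DBParameterGroupFamily="aurora-mysql5.7",
        Description="Cluster parameter group with aurora_lab_mode enabled",
    )
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName=group_name,
        Parameters=[lab_mode_override(True)],
    )
    # Attach the group to the cluster; a reboot of each DB instance is
    # still required before the new value applies.
    rds.modify_db_cluster(
        DBClusterIdentifier=cluster_id,
        DBClusterParameterGroupName=group_name,
    )
```

As with the CLI version, the change stays pending until you reboot the DB instances in the cluster.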

 The following example shows the remaining steps after the parameter group change takes effect. It tests the `aurora_lab_mode` setting to make sure that the cluster can use Fast DDL. Then it runs `ALTER TABLE` statements to add columns to the end of another large table. This time, the statements finish very quickly. 

```
mysql> select @@aurora_lab_mode;
+-------------------+
| @@aurora_lab_mode |
+-------------------+
|                 1 |
+-------------------+

mysql> ALTER TABLE orders_fast_ddl ADD COLUMN o_refunded boolean;
Query OK, 0 rows affected (1.51 sec)

mysql> ALTER TABLE orders_fast_ddl ADD COLUMN o_coverletter varchar(512);
Query OK, 0 rows affected (0.40 sec)
```

# Displaying volume status for an Aurora MySQL DB cluster
<a name="AuroraMySQL.Managing.VolumeStatus"></a>

In Amazon Aurora, a DB cluster volume consists of a collection of logical blocks. Each of these represents 10 gigabytes of allocated storage. These blocks are called *protection groups*.

The data in each protection group is replicated across six physical storage devices, called *storage nodes*. These storage nodes are allocated across three Availability Zones (AZs) in the AWS Region where the DB cluster resides. In turn, each storage node contains one or more logical blocks of data for the DB cluster volume. For more information about protection groups and storage nodes, see [Introducing the Aurora storage engine](https://aws.amazon.com/blogs/database/introducing-the-aurora-storage-engine/) on the AWS Database Blog.

You can simulate the failure of an entire storage node, or a single logical block of data within a storage node. To do so, you use the `ALTER SYSTEM SIMULATE DISK FAILURE` fault injection statement. In the statement, you specify the index value of a specific logical block of data or storage node. However, if you specify an index value greater than the number of logical blocks of data or storage nodes that the DB cluster volume uses, the statement returns an error. For more information about fault injection queries, see [Testing Amazon Aurora MySQL using fault injection queries](AuroraMySQL.Managing.FaultInjectionQueries.md).

You can avoid that error by using the `SHOW VOLUME STATUS` statement. The statement returns two server status variables, `Disks` and `Nodes`. These variables represent the total number of logical blocks of data and storage nodes, respectively, for the DB cluster volume.

## Syntax
<a name="AuroraMySQL.Managing.VolumeStatus.Syntax"></a>

```
SHOW VOLUME STATUS
```

## Example
<a name="AuroraMySQL.Managing.VolumeStatus.Example"></a>

The following example illustrates a typical `SHOW VOLUME STATUS` result.

```
mysql> SHOW VOLUME STATUS;
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Disks         | 96    |
| Nodes         | 74    |
+---------------+-------+
```
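
The `Disks` and `Nodes` counts are what keep a fault injection statement in bounds. The following sketch parses a `SHOW VOLUME STATUS` result set and rejects an out-of-range index before building a disk failure statement. Both helpers are hypothetical, and the statement text is illustrative; check the fault injection topic linked above for the exact syntax your engine version supports.

```python
# Sketch: validate a fault-injection index against SHOW VOLUME STATUS.
# parse_volume_status and simulate_disk_failure_stmt are hypothetical
# helpers; the sample rows mirror the example result above.

def parse_volume_status(rows):
    """Turn (Variable_name, Value) result rows into an int-valued dict."""
    return {name: int(value) for name, value in rows}

def simulate_disk_failure_stmt(status, index, percent=100, seconds=60):
    """Build an illustrative disk-failure statement for logical block `index`.

    SHOW VOLUME STATUS reports Disks (logical blocks) and Nodes (storage
    nodes); an index at or beyond the Disks count would make the real
    statement return an error, so we reject it up front.
    """
    limit = status["Disks"]
    if not 0 <= index < limit:
        raise ValueError(f"disk index {index} out of range (limit {limit})")
    return (f"ALTER SYSTEM SIMULATE {percent} PERCENT DISK FAILURE "
            f"IN DISK {index} FOR INTERVAL {seconds} SECOND")
```

For example, with the result shown above (`Disks` = 96), any index from 0 through 95 passes the check, while 96 or higher is rejected before the statement ever reaches the server.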