

# Migrating data to an Amazon Aurora MySQL DB cluster
<a name="AuroraMySQL.Migrating"></a>

You have several options for migrating data from your existing database to an Amazon Aurora MySQL DB cluster. Your migration options also depend on the database that you are migrating from and the size of the data that you are migrating.

There are two different types of migration: physical and logical. Physical migration means that physical copies of database files are used to migrate the database. Logical migration means that the migration is accomplished by applying logical database changes, such as inserts, updates, and deletes.

Physical migration has the following advantages:
+ Physical migration is faster than logical migration, especially for large databases.
+ Database performance does not suffer when a backup is taken for physical migration.
+ Physical migration can migrate everything in the source database, including complex database components.

Physical migration has the following limitations:
+ The `innodb_page_size` parameter must be set to its default value (`16KB`).
+ The `innodb_data_file_path` parameter must be configured with only one data file that uses the default data file name `"ibdata1:12M:autoextend"`. Databases with two data files, or with a data file with a different name, can't be migrated using this method.

  The following are examples of file names that are not allowed: `"innodb_data_file_path=ibdata1:50M; ibdata2:50M:autoextend"` and `"innodb_data_file_path=ibdata01:50M:autoextend"`.
+ The `innodb_log_files_in_group` parameter must be set to its default value (`2`).
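Taken together, these requirements correspond to the following source-server settings. This is a minimal `my.cnf` fragment showing the required default values (`innodb_page_size` is expressed in bytes):

```
[mysqld]
# Defaults required for physical migration to Aurora MySQL
innodb_page_size=16384
innodb_data_file_path=ibdata1:12M:autoextend
innodb_log_files_in_group=2
```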

Logical migration has the following advantages:
+ You can migrate subsets of the database, such as specific tables or parts of a table.
+ The data can be migrated regardless of the physical storage structure.

Logical migration has the following limitations:
+ Logical migration is usually slower than physical migration.
+ Complex database components can slow down the logical migration process. In some cases, complex database components can even block logical migration.

The following table describes your options and the type of migration for each option.


| Migrating from | Migration type | Solution | 
| --- | --- | --- | 
| An RDS for MySQL DB instance | Physical |  You can migrate from an RDS for MySQL DB instance by first creating an Aurora MySQL read replica of a MySQL DB instance. When the replica lag between the MySQL DB instance and the Aurora MySQL read replica is 0, you can direct your client applications to read from the Aurora read replica and then stop replication to make the Aurora MySQL read replica a standalone Aurora MySQL DB cluster for reading and writing. For details, see [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md).  | 
| An RDS for MySQL DB snapshot | Physical |  You can migrate data directly from an RDS for MySQL DB snapshot to an Amazon Aurora MySQL DB cluster. For details, see [Migrating an RDS for MySQL snapshot to Aurora](AuroraMySQL.Migrating.RDSMySQL.Snapshot.md).  | 
| A MySQL database external to Amazon RDS | Logical |  You can create a dump of your data using the `mysqldump` utility, and then import that data into an existing Amazon Aurora MySQL DB cluster. For details, see [Logical migration from MySQL to Amazon Aurora MySQL by using mysqldump](AuroraMySQL.Migrating.ExtMySQL.mysqldump.md). To export metadata for database users during the migration from an external MySQL database, you can also use a MySQL Shell command instead of `mysqldump`. For more information, see [Instance Dump Utility, Schema Dump Utility, and Table Dump Utility](https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-dump-instance-schema.html#mysql-shell-utilities-dump-about).  The [mysqlpump](https://dev.mysql.com/doc/refman/8.0/en/mysqlpump.html) utility is deprecated as of MySQL 8.0.34.   | 
| A MySQL database external to Amazon RDS | Physical |  You can copy the backup files from your database to an Amazon Simple Storage Service (Amazon S3) bucket, and then restore an Amazon Aurora MySQL DB cluster from those files. This option can be considerably faster than migrating data using `mysqldump`. For details, see [Physical migration from MySQL by using Percona XtraBackup and Amazon S3](AuroraMySQL.Migrating.ExtMySQL.S3.md).  | 
| A MySQL database external to Amazon RDS | Logical |  You can save data from your database as text files and copy those files to an Amazon S3 bucket. You can then load that data into an existing Aurora MySQL DB cluster using the `LOAD DATA FROM S3` MySQL command. For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).  | 
| A database that isn't MySQL-compatible | Logical |  You can use AWS Database Migration Service (AWS DMS) to migrate data from a database that isn't MySQL-compatible. For more information on AWS DMS, see [What is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) | 

**Note**  
If you're migrating a MySQL database external to Amazon RDS, the migration options described in the table are supported only if your database supports the InnoDB or MyISAM tablespaces.  
If the MySQL database you're migrating to Aurora MySQL uses `memcached`, remove `memcached` before migrating it.  
You can't migrate to Aurora MySQL version 3.05 and higher from some older MySQL 8.0 versions, including 8.0.11, 8.0.13, and 8.0.15. We recommend that you upgrade to MySQL version 8.0.28 before migrating.

# Migrating data from an external MySQL database to an Amazon Aurora MySQL DB cluster
<a name="AuroraMySQL.Migrating.ExtMySQL"></a>

If your database supports the InnoDB or MyISAM tablespaces, you have these options for migrating your data to an Amazon Aurora MySQL DB cluster: 
+ You can create a dump of your data using the `mysqldump` utility, and then import that data into an existing Amazon Aurora MySQL DB cluster. For more information, see [Logical migration from MySQL to Amazon Aurora MySQL by using mysqldump](AuroraMySQL.Migrating.ExtMySQL.mysqldump.md).
+ You can copy the full and incremental backup files from your database to an Amazon S3 bucket, and then restore to an Amazon Aurora MySQL DB cluster from those files. This option can be considerably faster than migrating data using `mysqldump`. For more information, see [Physical migration from MySQL by using Percona XtraBackup and Amazon S3](AuroraMySQL.Migrating.ExtMySQL.S3.md).

**Topics**
+ [Physical migration from MySQL by using Percona XtraBackup and Amazon S3](AuroraMySQL.Migrating.ExtMySQL.S3.md)
+ [Logical migration from MySQL to Amazon Aurora MySQL by using mysqldump](AuroraMySQL.Migrating.ExtMySQL.mysqldump.md)

# Physical migration from MySQL by using Percona XtraBackup and Amazon S3
<a name="AuroraMySQL.Migrating.ExtMySQL.S3"></a>

You can copy the full and incremental backup files from your source MySQL version 5.7 or 8.0 database to an Amazon S3 bucket. Then you can restore to an Amazon Aurora MySQL DB cluster with the same major DB engine version from those files.

This option can be considerably faster than migrating data using `mysqldump`, because using `mysqldump` replays all of the commands to recreate the schema and data from your source database in your new Aurora MySQL DB cluster. By copying your source MySQL data files, Aurora MySQL can immediately use those files as the data for an Aurora MySQL DB cluster.

You can also minimize downtime by using binary log replication during the migration process. If you use binary log replication, the external MySQL database remains open to transactions while the data is being migrated to the Aurora MySQL DB cluster. After the Aurora MySQL DB cluster has been created, you use binary log replication to synchronize the Aurora MySQL DB cluster with the transactions that happened after the backup. When the Aurora MySQL DB cluster is caught up with the MySQL database, you finish the migration by completely switching to the Aurora MySQL DB cluster for new transactions. For more information, see [Synchronizing the Amazon Aurora MySQL DB cluster with the MySQL database using replication](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync).

**Contents**
+ [Limitations and considerations](#AuroraMySQL.Migrating.ExtMySQL.S3.Limits)
+ [Before you begin](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs)
  + [Installing Percona XtraBackup](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.XtraBackup)
  + [Required permissions](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.Permitting)
  + [Creating the IAM service role](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.CreateRole)
+ [Backing up files to be restored as an Amazon Aurora MySQL DB cluster](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup)
  + [Creating a full backup with Percona XtraBackup](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Full)
  + [Using incremental backups with Percona XtraBackup](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Incr)
  + [Backup considerations](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Considerations)
+ [Restoring an Amazon Aurora MySQL DB cluster from an Amazon S3 bucket](#AuroraMySQL.Migrating.ExtMySQL.S3.Restore)
+ [Synchronizing the Amazon Aurora MySQL DB cluster with the MySQL database using replication](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync)
  + [Configuring your external MySQL database and your Aurora MySQL DB cluster for encrypted replication](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.ConfigureEncryption)
  + [Synchronizing the Amazon Aurora MySQL DB cluster with the external MySQL database](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.Synchronizing)
+ [Reducing the time for physical migration to Amazon Aurora MySQL](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md)
  + [Unsupported table types](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Tables)
  + [User accounts with unsupported privileges](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Users)
  + [Dynamic privileges in Aurora MySQL version 3](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Dynamic)
  + [Stored objects with 'rdsadmin'@'localhost' as the definer](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Objects)

## Limitations and considerations
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Limits"></a>

The following limitations and considerations apply to restoring to an Amazon Aurora MySQL DB cluster from an Amazon S3 bucket:
+ You can migrate your data only to a new DB cluster, not an existing DB cluster.
+ You must use Percona XtraBackup to back up your data to S3. For more information, see [Installing Percona XtraBackup](#AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.XtraBackup).
+ The Amazon S3 bucket and the Aurora MySQL DB cluster must be in the same AWS Region.
+ You can't restore from the following:
  + A DB cluster snapshot export to Amazon S3. You also can't migrate data from a DB cluster snapshot export to your S3 bucket.
  + An encrypted source database, but you can encrypt the data being migrated. You can also leave the data unencrypted during the migration process.
  + A MySQL 5.5 or 5.6 database
+ Percona Server for MySQL isn't supported as a source database, because it can contain `compression_dictionary*` tables in the `mysql` schema.
+ You can't restore to an Aurora Serverless DB cluster.
+ Backward migration isn't supported for either major versions or minor versions. For example, you can't migrate from MySQL version 8.0 to Aurora MySQL version 2 (compatible with MySQL 5.7), and you can't migrate from MySQL version 8.0.32 to Aurora MySQL version 3.03, which is compatible with MySQL community version 8.0.26.
+ You can't migrate to Aurora MySQL version 3.05 and higher from some older MySQL 8.0 versions, including 8.0.11, 8.0.13, and 8.0.15. We recommend that you upgrade to MySQL version 8.0.28 before migrating.
+ Importing from Amazon S3 isn't supported on the db.t2.micro DB instance class. However, you can restore to a different DB instance class and change the DB instance class later. For more information about DB instance classes, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).
+ Amazon S3 limits the size of a file uploaded to an S3 bucket to 5 TB. If a backup file exceeds 5 TB, then you must split the backup file into smaller files.
+ Amazon RDS limits the number of files uploaded to an S3 bucket to 1 million. If the backup data for your database, including all full and incremental backups, exceeds 1 million files, use a Gzip (.gz), tar (.tar.gz), or Percona xbstream (.xbstream) file to store full and incremental backup files in the S3 bucket. Percona XtraBackup 8.0 only supports Percona xbstream for compression.
+ To provide management services for each DB cluster, the `rdsadmin` user is created when the DB cluster is created. As this is a reserved user in RDS, the following limitations apply:
  + Functions, procedures, views, events, and triggers with the `'rdsadmin'@'localhost'` definer aren't imported. For more information, see [Stored objects with 'rdsadmin'@'localhost' as the definer](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Objects) and [Master user privileges with Amazon Aurora MySQL](AuroraMySQL.Security.md#AuroraMySQL.Security.MasterUser).
  + When the Aurora MySQL DB cluster is created, a master user is created with the maximum privileges supported. While restoring from backup, any unsupported privileges assigned to users being imported are removed automatically during import.

    To identify users that might be affected by this, see [User accounts with unsupported privileges](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Users). For more information on supported privileges in Aurora MySQL, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).
+ For Aurora MySQL version 3, dynamic privileges aren't imported. Aurora-supported dynamic privileges can be imported after migration. For more information, see [Dynamic privileges in Aurora MySQL version 3](AuroraMySQL.Migrating.ExtMySQL.Prechecks.md#AuroraMySQL.Migrating.ExtMySQL.Prechecks.Dynamic).
+ User-created tables in the `mysql` schema aren't migrated.
+ The `innodb_data_file_path` parameter must be configured with only one data file that uses the default data file name `ibdata1:12M:autoextend`. Databases with two data files, or with a data file with a different name, can't be migrated using this method.

  The following are examples of file names that aren't allowed: `innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend` and `innodb_data_file_path=ibdata01:50M:autoextend`.
+ You can't migrate from a source database that has tables defined outside of the default MySQL data directory.
+ The maximum supported size for uncompressed backups using this method is currently 64 TiB. For compressed backups, the limit is lower to account for the space required to uncompress the backup. In such cases, the maximum supported backup size is (`64 TiB – compressed backup size`).
+ Aurora MySQL doesn't support the importing of MySQL and other external components and plugins.
+ Aurora MySQL doesn't restore everything from your database. We recommend that you save the database schema and values for the following items from your source MySQL database, then add them to your restored Aurora MySQL DB cluster after it has been created:
  + User accounts
  + Functions
  + Stored procedures
  + Time zone information. Time zone information is loaded from the local operating system of your Aurora MySQL DB cluster. For more information, see [Local time zone for Amazon Aurora DB clusters](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.LocalTimeZone).
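The compressed-backup limit above involves a small calculation: the available budget is 64 TiB minus the compressed backup's own size. As a quick sanity check with a hypothetical 10 TiB compressed backup:

```shell
# 64 TiB uncompressed ceiling minus the compressed backup's own size
LIMIT_TIB=64
COMPRESSED_TIB=10   # hypothetical compressed backup size
echo "maximum supported backup size: $((LIMIT_TIB - COMPRESSED_TIB)) TiB"
```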

## Before you begin
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs"></a>

Before you can copy your data to an Amazon S3 bucket and restore to a DB cluster from those files, you must do the following:
+ Install Percona XtraBackup on your local server.
+ Permit Aurora MySQL to access your Amazon S3 bucket on your behalf.

### Installing Percona XtraBackup
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.XtraBackup"></a>

Amazon Aurora can restore a DB cluster from files that were created using Percona XtraBackup. You can install Percona XtraBackup from [Software Downloads - Percona](https://www.percona.com/downloads).

For MySQL 5.7 migration, use Percona XtraBackup 2.4.

For MySQL 8.0 migration, use Percona XtraBackup 8.0. Make sure that the Percona XtraBackup version is compatible with the engine version of your source database.

### Required permissions
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.Permitting"></a>

To migrate your MySQL data to an Amazon Aurora MySQL DB cluster, several permissions are required:
+ The user that is requesting that Aurora create a new cluster from an Amazon S3 bucket must have permission to list the buckets for your AWS account. You grant the user this permission using an AWS Identity and Access Management (IAM) policy.
+ Aurora requires permission to act on your behalf to access the Amazon S3 bucket where you store the files used to create your Amazon Aurora MySQL DB cluster. You grant Aurora the required permissions using an IAM service role. 
+ The user making the request must also have permission to list the IAM roles for your AWS account.
+ If the user making the request will create the IAM service role, or will request that Aurora create it (by using the console), then the user must have permission to create an IAM role for your AWS account.
+ If you plan to encrypt the data during the migration process, update the IAM policy of the user who will perform the migration to grant RDS access to the AWS KMS keys used for encrypting the backups. For instructions, see [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md).

For example, the following IAM policy grants a user the minimum required permissions to use the console to list IAM roles, create an IAM role, list the Amazon S3 buckets for your account, and list the KMS keys.

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListRoles",
                "iam:CreateRole",
                "iam:CreatePolicy",
                "iam:AttachRolePolicy",
                "s3:ListBucket",
                "kms:ListKeys"
            ],
            "Resource": "*"
        }
    ]
}
```
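If you manage this policy as a file, you can sanity-check the JSON locally before attaching it. The file name and user name below are placeholders; the final `aws iam put-user-policy` call is shown commented out because it requires AWS credentials:

```shell
# Write the policy shown above to a local file (placeholder file name)
cat > /tmp/aurora-s3-migration-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListRoles",
                "iam:CreateRole",
                "iam:CreatePolicy",
                "iam:AttachRolePolicy",
                "s3:ListBucket",
                "kms:ListKeys"
            ],
            "Resource": "*"
        }
    ]
}
EOF

# Validate the JSON before using it
python3 -m json.tool /tmp/aurora-s3-migration-policy.json > /dev/null && echo "policy JSON is valid"

# Attach it to the migrating user (requires AWS credentials; names are placeholders):
# aws iam put-user-policy --user-name migration-user \
#     --policy-name aurora-s3-migration \
#     --policy-document file:///tmp/aurora-s3-migration-policy.json
```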


Additionally, for a user to associate an IAM role with an Amazon S3 bucket, the IAM user must have the `iam:PassRole` permission for that IAM role. This permission allows an administrator to restrict which IAM roles a user can associate with Amazon S3 buckets. 

For example, the following IAM policy allows a user to associate the role named `S3Access` with an Amazon S3 bucket.

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement":[
        {
            "Sid":"AllowS3AccessRole",
            "Effect":"Allow",
            "Action":"iam:PassRole",
            "Resource":"arn:aws:iam::123456789012:role/S3Access"
        }
    ]
}
```


For more information on IAM user permissions, see [Managing access using policies](UsingWithRDS.IAM.md#security_iam_access-manage).

### Creating the IAM service role
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Prereqs.CreateRole"></a>

You can have the AWS Management Console create a role for you by choosing the **Create a New Role** option (shown later in this topic). If you select this option and specify a name for the new role, then Aurora creates the IAM service role required for Aurora to access your Amazon S3 bucket with the name that you supply.

As an alternative, you can manually create the role using the following procedure.

**To create an IAM role for Aurora to access Amazon S3**

1. Complete the steps in [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md).

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. Complete the steps in [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

## Backing up files to be restored as an Amazon Aurora MySQL DB cluster
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Backup"></a>

You can create a full backup of your MySQL database files using Percona XtraBackup and upload the backup files to an Amazon S3 bucket. Alternatively, if you already use Percona XtraBackup to back up your MySQL database files, you can upload your existing full and incremental backup directories and files to an Amazon S3 bucket.

**Topics**
+ [Creating a full backup with Percona XtraBackup](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Full)
+ [Using incremental backups with Percona XtraBackup](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Incr)
+ [Backup considerations](#AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Considerations)

### Creating a full backup with Percona XtraBackup
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Full"></a>

To create a full backup of your MySQL database files that can be restored from Amazon S3 to create an Aurora MySQL DB cluster, use the Percona XtraBackup utility (`xtrabackup`) to back up your database. 

For example, the following command creates a backup of a MySQL database and stores the files in the `/on-premises/s3-restore/backup` folder.

```
xtrabackup --backup --user=<myuser> --password=<password> --target-dir=</on-premises/s3-restore/backup>
```

If you want to compress your backup into a single file (which can be split, if needed), you can use the `--stream` option to save your backup in one of the following formats:
+ Gzip (.gz)
+ tar (.tar)
+ Percona xbstream (.xbstream)

The following command creates a backup of your MySQL database split into multiple Gzip files.

```
xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
   --target-dir=</on-premises/s3-restore/backup> | gzip - | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.tar.gz
```

The following command creates a backup of your MySQL database split into multiple tar files.

```
xtrabackup --backup --user=<myuser> --password=<password> --stream=tar \
   --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.tar
```

The following command creates a backup of your MySQL database split into multiple xbstream files.

```
xtrabackup --backup --user=<myuser> --password=<password> --stream=xbstream \
   --target-dir=</on-premises/s3-restore/backup> | split -d --bytes=500MB \
   - </on-premises/s3-restore/backup/backup>.xbstream
```

**Note**  
If you see the following error, it might be caused by mixing file formats in your command:  

```
ERROR:/bin/tar: This does not look like a tar archive
```
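One way to catch this kind of format mismatch before uploading is to rebuild the stream from the split parts locally and list the archive contents. The following is a minimal sketch using a throwaway directory; the file names are illustrative, not part of the AWS procedure:

```shell
# Create a small stand-in for a backup directory
mkdir -p /tmp/demo-backup
echo "demo data" > /tmp/demo-backup/ibdata1

# Same pipeline shape as the xtrabackup examples: tar stream -> gzip -> split
tar -C /tmp -cf - demo-backup | gzip | split -d --bytes=500MB - /tmp/demo-backup.tar.gz.

# Reassemble the parts and list the archive; an error here (rather than a
# file listing) indicates the parts don't decompress to a valid tar stream
cat /tmp/demo-backup.tar.gz.* | gunzip | tar -tf -
```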

Once you have backed up your MySQL database using the Percona XtraBackup utility, you can copy your backup directories and files to an Amazon S3 bucket.

For information on creating and uploading a file to an Amazon S3 bucket, see [Getting started with Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html) in the *Amazon S3 Getting Started Guide*.

### Using incremental backups with Percona XtraBackup
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Incr"></a>

Amazon Aurora MySQL supports both full and incremental backups created using Percona XtraBackup. If you already use Percona XtraBackup to perform full and incremental backups of your MySQL database files, you don't need to create a full backup and upload the backup files to Amazon S3. Instead, you can save a significant amount of time by copying your existing backup directories and files for your full and incremental backups to an Amazon S3 bucket. For more information, see [Create an incremental backup](https://docs.percona.com/percona-xtrabackup/8.0/create-incremental-backup.html) on the Percona website.

When copying your existing full and incremental backup files to an Amazon S3 bucket, you must recursively copy the contents of the base directory. Those contents include the full backup and also all incremental backup directories and files. This copy must preserve the directory structure in the Amazon S3 bucket. Aurora iterates through all files and directories. Aurora uses the `xtrabackup_checkpoints` file included with each incremental backup to identify the base directory and to order incremental backups by log sequence number (LSN) range.
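To illustrate the structure requirement, the sketch below stages a base backup directory with two incremental directories and copies it recursively. The corresponding S3 upload would be a recursive copy such as `aws s3 sync` (the bucket name in the comment is a placeholder):

```shell
# Stage an illustrative layout: one full backup plus two incrementals
mkdir -p /tmp/xtra-backups/full /tmp/xtra-backups/inc1 /tmp/xtra-backups/inc2
echo demo > /tmp/xtra-backups/full/ibdata1

# Copy recursively so the directory structure is preserved; the S3 upload is
# analogous (placeholder bucket): aws s3 sync /tmp/xtra-backups s3://my-backup-bucket/backups/
rm -rf /tmp/upload-staging
cp -r /tmp/xtra-backups /tmp/upload-staging

# All directories, including the incrementals, survive the copy
find /tmp/upload-staging -type d | sort
```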

For information on creating and uploading a file to an Amazon S3 bucket, see [Getting started with Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html) in the *Amazon S3 Getting Started Guide*.

### Backup considerations
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Backup.Considerations"></a>

Aurora doesn't support partial backups created using Percona XtraBackup. You can't use the following options to create a partial backup when you back up the source files for your database: `--tables`, `--tables-exclude`, `--tables-file`, `--databases`, `--databases-exclude`, or `--databases-file`.

For more information about backing up your database with Percona XtraBackup, see [Percona XtraBackup - Documentation](https://www.percona.com/doc/percona-xtrabackup/LATEST/index.html) and [Work with binary logs](https://docs.percona.com/percona-xtrabackup/8.0/working-with-binary-logs.html) on the Percona website.

Aurora supports incremental backups created using Percona XtraBackup. For more information, see [Create an incremental backup](https://docs.percona.com/percona-xtrabackup/8.0/create-incremental-backup.html) on the Percona website.

Aurora consumes your backup files based on the file name. Be sure to name your backup files with the appropriate file extension based on the file format—for example, `.xbstream` for files stored using the Percona xbstream format.

Aurora consumes your backup files in alphabetical order and also in natural number order. Always use the `split` option when you issue the `xtrabackup` command to ensure that your backup files are written and named in the proper order.
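The naming that makes this ordering work comes from `split`'s numeric suffixes. A quick local illustration with throwaway file names:

```shell
# Generate roughly 590 KB of data and split it into 200 KB parts (three here)
seq 1 100000 > /tmp/backup-demo.dat
split -d --bytes=200000 /tmp/backup-demo.dat /tmp/backup-demo.part.

# Numeric suffixes (00, 01, 02, ...) sort in both alphabetical and natural order
ls /tmp/backup-demo.part.* | sort
```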

Amazon S3 limits the size of a file uploaded to an Amazon S3 bucket to 5 TB. If the backup data for your database exceeds 5 TB, use the `split` command to split the backup files into multiple files that are each less than 5 TB.

Aurora limits the number of source files uploaded to an Amazon S3 bucket to 1 million files. In some cases, backup data for your database, including all full and incremental backups, can come to a large number of files. In these cases, use a tarball (.tar.gz) file to store full and incremental backup files in the Amazon S3 bucket.

When you upload a file to an Amazon S3 bucket, you can use server-side encryption to encrypt the data. You can then restore an Amazon Aurora MySQL DB cluster from those encrypted files. Amazon Aurora MySQL can restore a DB cluster with files encrypted using the following types of server-side encryption:
+ Server-side encryption with Amazon S3–managed keys (SSE-S3) – Each object is encrypted with a unique key employing strong multifactor encryption.
+ Server-side encryption with AWS KMS–managed keys (SSE-KMS) – Similar to SSE-S3, but you can create and manage the encryption keys yourself, among other differences.

For information about using server-side encryption when uploading files to an Amazon S3 bucket, see [Protecting data using server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) in the *Amazon S3 Developer Guide*.

## Restoring an Amazon Aurora MySQL DB cluster from an Amazon S3 bucket
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.Restore"></a>

You can restore your backup files from your Amazon S3 bucket to create a new Amazon Aurora MySQL DB cluster by using the Amazon RDS console. 

**To restore an Amazon Aurora MySQL DB cluster from files on an Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the top right corner of the Amazon RDS console, choose the AWS Region in which to create your DB cluster. Choose the same AWS Region as the Amazon S3 bucket that contains your database backup. 

1. In the navigation pane, choose **Databases**.

1. Choose **Restore from S3**.

   The **Create database by restoring from S3** page appears.  
![\[The page where you specify the details for restoring a DB cluster from S3\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/AuroraMigrateS3_01.png)

1. Under **S3 destination**:

   1. Choose the **S3 bucket** that contains the backup files.

   1. (Optional) For **S3 folder path prefix**, enter a file path prefix for the files stored in your Amazon S3 bucket.

      If you don't specify a prefix, then RDS creates your DB instance using all of the files and folders in the root folder of the S3 bucket. If you do specify a prefix, then RDS creates your DB instance using the files and folders in the S3 bucket where the path for the file begins with the specified prefix.

      For example, suppose that you store your backup files on S3 in a subfolder named backups, and you have multiple sets of backup files, each in its own directory (gzip-backup1, gzip-backup2, and so on). In this case, you specify a prefix of backups/gzip-backup1 to restore from the files in the gzip-backup1 folder. 

1. Under **Engine options**:

   1. For **Engine type**, choose **Amazon Aurora**.

   1. For **Version**, choose the Aurora MySQL engine version for your restored DB instance.

1. For **IAM role**, you can choose an existing IAM role.

1. (Optional) You can also have a new IAM role created for you by choosing **Create a new role**. If so:

   1. Enter the **IAM role name**.

   1.  Choose whether to **Allow access to KMS key**:
      + If you didn't encrypt the backup files, choose **No**.
      + If you encrypted the backup files with AES-256 (SSE-S3) when you uploaded them to Amazon S3, choose **No**. In this case, the data is decrypted automatically.
      + If you encrypted the backup files with AWS KMS (SSE-KMS) server-side encryption when you uploaded them to Amazon S3, choose **Yes**. Next, choose the correct KMS key for **AWS KMS key**.

        The AWS Management Console creates an IAM policy that enables Aurora to decrypt the data.

      For more information, see [Protecting data using server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) in the *Amazon S3 Developer Guide*.

1. Choose settings for your DB cluster, such as the DB cluster storage configuration, DB instance class, DB cluster identifier, and login credentials. For information about each setting, see [Settings for Aurora DB clusters](Aurora.CreateInstance.md#Aurora.CreateInstance.Settings).

1. Customize additional settings for your Aurora MySQL DB cluster as needed.

1. Choose **Create database** to launch your Aurora DB instance.

On the Amazon RDS console, the new DB instance appears in the list of DB instances. The DB instance has a status of **creating** until it is created and ready for use. When the state changes to **available**, you can connect to the primary instance for your DB cluster. Depending on the DB instance class and the storage allocated, it can take several minutes for the new instance to be available.

To view the newly created cluster, choose the **Databases** view in the Amazon RDS console and choose the DB cluster. For more information, see [Viewing an Amazon Aurora DB cluster](accessing-monitoring.md#Aurora.Viewing).

![\[Amazon Aurora DB Instances List\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/AuroraLaunch04.png)


Note the port and the writer endpoint of the DB cluster. Use the writer endpoint and port of the DB cluster in your JDBC and ODBC connection strings for any application that performs write or read operations.

## Synchronizing the Amazon Aurora MySQL DB cluster with the MySQL database using replication
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.RepSync"></a>

To achieve little or no downtime during the migration, you can replicate transactions that were committed on your MySQL database to your Aurora MySQL DB cluster. Replication enables the DB cluster to catch up with the transactions on the MySQL database that happened during the migration. When the DB cluster is completely caught up, you can stop the replication and finish the migration to Aurora MySQL.

**Topics**
+ [Configuring your external MySQL database and your Aurora MySQL DB cluster for encrypted replication](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.ConfigureEncryption)
+ [Synchronizing the Amazon Aurora MySQL DB cluster with the external MySQL database](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.Synchronizing)

### Configuring your external MySQL database and your Aurora MySQL DB cluster for encrypted replication
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.ConfigureEncryption"></a>

To replicate data securely, you can use encrypted replication.

**Note**  
If you don't need to use encrypted replication, you can skip these steps and move on to the instructions in [Synchronizing the Amazon Aurora MySQL DB cluster with the external MySQL database](#AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.Synchronizing).

The following are prerequisites for using encrypted replication:
+ Secure Sockets Layer (SSL) must be enabled on the external MySQL primary database.
+ A client key and client certificate must be prepared for the Aurora MySQL DB cluster.

During encrypted replication, the Aurora MySQL DB cluster acts as a client to the MySQL database server. The certificates and keys for the Aurora MySQL client are in .pem format files.

**To configure your external MySQL database and your Aurora MySQL DB cluster for encrypted replication**

1. Ensure that you are prepared for encrypted replication:
   + If you don't have SSL enabled on the external MySQL primary database and don't have a client key and client certificate prepared, enable SSL on the MySQL database server and generate the required client key and client certificate.
   + If SSL is enabled on the external primary, supply a client key and certificate for the Aurora MySQL DB cluster. If you don't have these, generate a new key and certificate for the Aurora MySQL DB cluster. To sign the client certificate, you must have the certificate authority key that you used to configure SSL on the external MySQL primary database.

   For more information, see [Creating SSL certificates and keys using openssl](https://dev.mysql.com/doc/refman/5.6/en/creating-ssl-files-using-openssl.html) in the MySQL documentation.

   You need the certificate authority certificate, the client key, and the client certificate.

1. Connect to the Aurora MySQL DB cluster as the primary user using SSL.

   For information about connecting to an Aurora MySQL DB cluster with SSL, see [TLS connections to Aurora MySQL DB clusters](AuroraMySQL.Security.md#AuroraMySQL.Security.SSL).

1. Run the [mysql.rds\_import\_binlog\_ssl\_material](mysql-stored-proc-replicating.md#mysql_rds_import_binlog_ssl_material) stored procedure to import the SSL information into the Aurora MySQL DB cluster.

   For the `ssl_material_value` parameter, insert the information from the .pem format files for the Aurora MySQL DB cluster in the correct JSON payload.

   The following example imports SSL information into an Aurora MySQL DB cluster. In actual .pem format files, the certificate and key bodies are typically longer than the bodies shown in the example.

   ```
   call mysql.rds_import_binlog_ssl_material(
   '{"ssl_ca":"-----BEGIN CERTIFICATE-----
   AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
   hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
   lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
   qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
   BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
   -----END CERTIFICATE-----\n","ssl_cert":"-----BEGIN CERTIFICATE-----
   AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
   hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
   lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
   qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
   BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
   -----END CERTIFICATE-----\n","ssl_key":"-----BEGIN RSA PRIVATE KEY-----
   AAAAB3NzaC1yc2EAAAADAQABAAABAQClKsfkNkuSevGj3eYhCe53pcjqP3maAhDFcvBS7O6V
   hz2ItxCih+PnDSUaw+WNQn/mZphTk/a/gU8jEzoOWbkM4yxyb/wB96xbiFveSFJuOp/d6RJhJOI0iBXr
   lsLnBItntckiJ7FbtxJMXLvvwJryDUilBMTjYtwB+QhYXUMOzce5Pjz5/i8SeJtjnV3iAoG/cQk+0FzZ
   qaeJAAHco+CY/5WrUBkrHmFJr6HcXkvJdWPkYQS3xqC0+FmUZofz221CBt5IMucxXPkX4rWi+z7wB3Rb
   BQoQzd8v7yeb7OzlPnWOyN0qFU0XA246RA8QFYiCNYwI3f05p6KLxEXAMPLE
   -----END RSA PRIVATE KEY-----\n"}');
   ```

   For more information, see [mysql.rds\_import\_binlog\_ssl\_material](mysql-stored-proc-replicating.md#mysql_rds_import_binlog_ssl_material) and [TLS connections to Aurora MySQL DB clusters](AuroraMySQL.Security.md#AuroraMySQL.Security.SSL).
**Note**  
After running the procedure, the secrets are stored in files. To erase the files later, you can run the [mysql.rds\_remove\_binlog\_ssl\_material](mysql-stored-proc-replicating.md#mysql_rds_remove_binlog_ssl_material) stored procedure.
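The JSON payload for the `ssl_material_value` parameter can also be assembled programmatically from the .pem files. The following is a minimal sketch, assuming hypothetical file paths; the helper name is illustrative, not part of any AWS tooling:

```python
import json

# Assemble the ssl_material_value JSON payload for
# mysql.rds_import_binlog_ssl_material from local .pem files.
# The file paths passed in are hypothetical placeholders.
def build_ssl_material(ca_path: str, cert_path: str, key_path: str) -> str:
    def read_pem(path: str) -> str:
        with open(path) as f:
            return f.read()

    payload = {
        "ssl_ca": read_pem(ca_path),
        "ssl_cert": read_pem(cert_path),
        "ssl_key": read_pem(key_path),
    }
    # json.dumps escapes the newlines inside the certificate and key
    # bodies as \n, keeping the payload on a form the stored procedure
    # can parse.
    return json.dumps(payload)
```

You can then paste the resulting string into the `CALL mysql.rds_import_binlog_ssl_material(...)` statement.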

### Synchronizing the Amazon Aurora MySQL DB cluster with the external MySQL database
<a name="AuroraMySQL.Migrating.ExtMySQL.S3.RepSync.Synchronizing"></a>

You can synchronize your Amazon Aurora MySQL DB cluster with the MySQL database using replication.

**To synchronize your Aurora MySQL DB cluster with the MySQL database using replication**

1. Ensure that the /etc/my.cnf file for the external MySQL database has the relevant entries.

   If encrypted replication is not required, ensure that the external MySQL database is started with binary logs (binlogs) enabled and SSL disabled. The following are the relevant entries in the /etc/my.cnf file for unencrypted data.

   ```
   log-bin=mysql-bin
   server-id=2133421
   innodb_flush_log_at_trx_commit=1
   sync_binlog=1
   ```

   If encrypted replication is required, ensure that the external MySQL database is started with SSL and binlogs enabled. The entries in the /etc/my.cnf file include the .pem file locations for the MySQL database server.

   ```
   log-bin=mysql-bin
   server-id=2133421
   innodb_flush_log_at_trx_commit=1
   sync_binlog=1
   
   # Setup SSL.
   ssl-ca=/home/sslcerts/ca.pem
   ssl-cert=/home/sslcerts/server-cert.pem
   ssl-key=/home/sslcerts/server-key.pem
   ```

   You can verify that SSL is enabled with the following command.

   ```
   mysql> show variables like 'have_ssl';
   ```

   Your output should be similar to the following.

   ```
   +---------------+-------+
   | Variable_name | Value |
   +---------------+-------+
   | have_ssl      | YES   |
   +---------------+-------+
   1 row in set (0.00 sec)
   ```

1. Determine the starting binary log position for replication. You specify the position to start replication in a later step.

   **Using the AWS Management Console**

   1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   1. In the navigation pane, choose **Events**.

   1. In the **Events** list, note the position in the **Recovered from Binary log filename** event.  
![\[View MySQL primary\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-mysql-rep-binary-log-position.png)

   **Using the AWS CLI**

   You can also get the binlog file name and position by using the [describe-events](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-events.html) AWS CLI command. The following shows an example `describe-events` command.

   ```
   PROMPT> aws rds describe-events
   ```

   In the output, identify the event that shows the binlog position.

1. While connected to the external MySQL database, create a user to be used for replication. This account is used solely for replication and must be restricted to your domain to improve security. The following is an example.

   ```
   mysql> CREATE USER '<user_name>'@'<domain_name>' IDENTIFIED BY '<password>';
   ```

   The user requires the `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges. Grant these privileges to the user.

   ```
   GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO '<user_name>'@'<domain_name>';
   ```

   If you need to use encrypted replication, require SSL connections for the replication user. For example, you can use the following statement to require SSL connections on the user account `<user_name>`.

   ```
   GRANT USAGE ON *.* TO '<user_name>'@'<domain_name>' REQUIRE SSL;
   ```
**Note**  
If `REQUIRE SSL` is not included, the replication connection might silently fall back to an unencrypted connection.

1. In the Amazon RDS console, add the IP address of the server that hosts the external MySQL database to the VPC security group for the Aurora MySQL DB cluster. For more information on modifying a VPC security group, see [Security groups for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*. 

   You might also need to configure your local network to permit connections from the IP address of your Aurora MySQL DB cluster, so that it can communicate with your external MySQL database. To find the IP address of the Aurora MySQL DB cluster, use the `host` command.

   ```
   host <db_cluster_endpoint>
   ```

   The host name is the DNS name from the Aurora MySQL DB cluster endpoint.

1. Enable binary log replication by running the [mysql.rds\_set\_external\_master (Aurora MySQL version 2)](mysql-stored-proc-replicating.md#mysql_rds_set_external_master) or [mysql.rds\_set\_external\_source (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_set_external_source) stored procedure. This stored procedure has the following syntax.

   ```
   CALL mysql.rds_set_external_master (
     host_name
     , host_port
     , replication_user_name
     , replication_user_password
     , mysql_binary_log_file_name
     , mysql_binary_log_file_location
     , ssl_encryption
   );
   
   CALL mysql.rds_set_external_source (
     host_name
     , host_port
     , replication_user_name
     , replication_user_password
     , mysql_binary_log_file_name
     , mysql_binary_log_file_location
     , ssl_encryption
   );
   ```

   For information about the parameters, see [mysql.rds\_set\_external\_master (Aurora MySQL version 2)](mysql-stored-proc-replicating.md#mysql_rds_set_external_master) and [mysql.rds\_set\_external\_source (Aurora MySQL version 3)](mysql-stored-proc-replicating.md#mysql_rds_set_external_source).

   For `mysql_binary_log_file_name` and `mysql_binary_log_file_location`, use the position in the **Recovered from Binary log filename** event you noted earlier.

   If the data in the Aurora MySQL DB cluster is not encrypted, the `ssl_encryption` parameter must be set to `0`. If the data is encrypted, the `ssl_encryption` parameter must be set to `1`.

   The following example runs the procedure for an Aurora MySQL DB cluster that has encrypted data.

   ```
   CALL mysql.rds_set_external_master(
     'Externaldb.some.com',
     3306,
     'repl_user@mydomain.com',
     'password',
     'mysql-bin.000010',
     120,
     1);
   
   CALL mysql.rds_set_external_source(
     'Externaldb.some.com',
     3306,
     'repl_user@mydomain.com',
     'password',
     'mysql-bin.000010',
     120,
     1);
   ```

   This stored procedure sets the parameters that the Aurora MySQL DB cluster uses for connecting to the external MySQL database and reading its binary log. If the data is encrypted, it also downloads the SSL certificate authority certificate, client certificate, and client key to the local disk. 

1. Start binary log replication by running the [mysql.rds\_start\_replication](mysql-stored-proc-replicating.md#mysql_rds_start_replication) stored procedure.

   ```
   CALL mysql.rds_start_replication;
   ```

1. Monitor how far the Aurora MySQL DB cluster is behind the MySQL replication primary database. To do so, connect to the Aurora MySQL DB cluster and run the following command.

   For Aurora MySQL version 2:

   ```
   SHOW SLAVE STATUS;
   ```

   For Aurora MySQL version 3:

   ```
   SHOW REPLICA STATUS;
   ```

   In the command output, the `Seconds_Behind_Master` field (`Seconds_Behind_Source` in Aurora MySQL version 3) shows how far the Aurora MySQL DB cluster is behind the MySQL primary. When this value is `0` (zero), the Aurora MySQL DB cluster has caught up to the primary, and you can move on to the next step to stop replication.

1. Connect to the Aurora MySQL DB cluster and stop replication. To do so, run the [mysql.rds\_stop\_replication](mysql-stored-proc-replicating.md#mysql_rds_stop_replication) stored procedure.

   ```
   CALL mysql.rds_stop_replication;
   ```
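The catch-up check in the last two steps can be automated by polling until the reported lag reaches zero. The following is an illustrative sketch; `get_lag` is a caller-supplied function that runs the `SHOW SLAVE STATUS` or `SHOW REPLICA STATUS` command through a MySQL client library and returns the lag in seconds:

```python
import time

# Poll replication lag until the Aurora MySQL DB cluster has caught
# up with the external primary (lag == 0). get_lag is supplied by the
# caller and should return Seconds_Behind_Master as an int.
def wait_until_caught_up(get_lag, poll_interval=1.0, max_polls=1000):
    for _ in range(max_polls):
        if get_lag() == 0:
            return True  # caught up; safe to stop replication
        time.sleep(poll_interval)
    return False  # gave up; replica still behind
```

Once this returns `True`, you can run `mysql.rds_stop_replication` on the cluster.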

# Reducing the time for physical migration to Amazon Aurora MySQL
<a name="AuroraMySQL.Migrating.ExtMySQL.Prechecks"></a>

You can make the following database modifications to speed up the process of migrating a database to Amazon Aurora MySQL.

**Important**  
Make sure to perform these updates on a copy of a production database, rather than on a production database. You can then back up the copy and restore it to your Aurora MySQL DB cluster to avoid any service interruptions on your production database.

## Unsupported table types
<a name="AuroraMySQL.Migrating.ExtMySQL.Prechecks.Tables"></a>

Aurora MySQL supports only the InnoDB engine for database tables. If you have MyISAM tables in your database, then those tables must be converted before migrating to Aurora MySQL. The conversion process requires additional space for the MyISAM to InnoDB conversion during the migration procedure.

To reduce your chances of running out of space or to speed up the migration process, convert all of your MyISAM tables to InnoDB tables before migrating them. The size of the resulting InnoDB table is equivalent to the size required by Aurora MySQL for that table. To convert a MyISAM table to InnoDB, run the following command:

```
ALTER TABLE schema.table_name engine=innodb, algorithm=copy;
```
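If many tables need conversion, you can generate the `ALTER TABLE` statements in bulk from a list of schema-qualified table names. The following is a minimal sketch; the helper name is illustrative:

```python
# Generate ALTER TABLE statements that convert MyISAM tables to
# InnoDB. qualified_names holds schema-qualified table names,
# for example "test.my_table".
def innodb_conversion_statements(qualified_names):
    return [
        f"ALTER TABLE {name} ENGINE=InnoDB, ALGORITHM=COPY;"
        for name in qualified_names
    ]
```

You can feed the output of the detection query below into a helper like this and run the resulting statements with your MySQL client.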

Aurora MySQL doesn't support compressed tables or pages, that is, tables created with `ROW_FORMAT=COMPRESSED` or `COMPRESSION = {"zlib"|"lz4"}`.

To reduce your chances of running out of space or to speed up the migration process, expand your compressed tables by setting `ROW_FORMAT` to `DEFAULT`, `COMPACT`, `DYNAMIC`, or `REDUNDANT`. For compressed pages, set `COMPRESSION="none"`.

For more information, see [InnoDB row formats](https://dev.mysql.com/doc/refman/8.0/en/innodb-row-format.html) and [InnoDB table and page compression](https://dev.mysql.com/doc/refman/8.0/en/innodb-compression.html) in the MySQL documentation.

You can use the following SQL script on your existing MySQL DB instance to list the tables in your database that are MyISAM tables or compressed tables.

```
-- This script examines a MySQL database for conditions that block
-- migrating the database into Aurora MySQL.
-- It must be run from an account that has read permission for the
-- INFORMATION_SCHEMA database.

-- Verify that this is a supported version of MySQL.

select msg as `==> Checking current version of MySQL.`
from
  (
  select
    concat('This script should be run on MySQL version 5.6 or higher. ',
      'Earlier versions are not supported.') as msg,
    cast(substring_index(version(), '.', 1) as unsigned) * 100 +
      cast(substring_index(substring_index(version(), '.', 2), '.', -1)
      as unsigned)
    as major_minor
  ) as T
where major_minor < 506;


-- List MyISAM and compressed tables. Include the table size.

select concat(TABLE_SCHEMA, '.', TABLE_NAME) as `==> MyISAM or Compressed Tables`,
round(((data_length + index_length) / 1024 / 1024), 2) "Approx size (MB)"
from INFORMATION_SCHEMA.TABLES
where
  ENGINE <> 'InnoDB'
  and
  (
    -- User tables
    TABLE_SCHEMA not in ('mysql', 'performance_schema',
                         'information_schema')
    or
    -- Non-standard system tables
    (
      TABLE_SCHEMA = 'mysql' and TABLE_NAME not in
        (
          'columns_priv', 'db', 'event', 'func', 'general_log',
          'help_category', 'help_keyword', 'help_relation',
          'help_topic', 'host', 'ndb_binlog_index', 'plugin',
          'proc', 'procs_priv', 'proxies_priv', 'servers', 'slow_log',
          'tables_priv', 'time_zone', 'time_zone_leap_second',
          'time_zone_name', 'time_zone_transition',
          'time_zone_transition_type', 'user'
        )
    )
  )
  or
  (
    -- Compressed tables
       ROW_FORMAT = 'Compressed'
  );
```

## User accounts with unsupported privileges
<a name="AuroraMySQL.Migrating.ExtMySQL.Prechecks.Users"></a>

User accounts with privileges that aren't supported by Aurora MySQL are imported without the unsupported privileges. For the list of supported privileges, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).

You can run the following SQL query on your source database to list the user accounts that have unsupported privileges.

```
SELECT
    user,
    host
FROM
    mysql.user
WHERE
    Shutdown_priv = 'y'
    OR File_priv = 'y'
    OR Super_priv = 'y'
    OR Create_tablespace_priv = 'y';
```

## Dynamic privileges in Aurora MySQL version 3
<a name="AuroraMySQL.Migrating.ExtMySQL.Prechecks.Dynamic"></a>

Dynamic privileges aren't imported. Aurora MySQL version 3 supports the following dynamic privileges.

```
'APPLICATION_PASSWORD_ADMIN',
'CONNECTION_ADMIN',
'REPLICATION_APPLIER',
'ROLE_ADMIN',
'SESSION_VARIABLES_ADMIN',
'SET_USER_ID',
'XA_RECOVER_ADMIN'
```

The following example script grants the supported dynamic privileges to the user accounts in the Aurora MySQL DB cluster.

```
-- This script finds the user accounts that have Aurora MySQL supported dynamic privileges 
-- and grants them to corresponding user accounts in the Aurora MySQL DB cluster.

/home/ec2-user/opt/mysql/8.0.26/bin/mysql -uusername -pxxxxx -P8026 -h127.0.0.1 -BNe "SELECT
  CONCAT('GRANT ', GRANTS, ' ON *.* TO ', GRANTEE ,';') AS grant_statement
  FROM (select GRANTEE, group_concat(privilege_type) AS GRANTS FROM information_schema.user_privileges 
      WHERE privilege_type IN (
        'APPLICATION_PASSWORD_ADMIN',
        'CONNECTION_ADMIN',
        'REPLICATION_APPLIER',
        'ROLE_ADMIN',
        'SESSION_VARIABLES_ADMIN',
        'SET_USER_ID',
        'XA_RECOVER_ADMIN')
      AND GRANTEE NOT IN (\"'mysql.session'@'localhost'\",\"'mysql.infoschema'@'localhost'\",\"'mysql.sys'@'localhost'\") GROUP BY GRANTEE)
      AS PRIVGRANTS; " | /home/ec2-user/opt/mysql/8.0.26/bin/mysql -u master_username -p master_password -h DB_cluster_endpoint
```

## Stored objects with 'rdsadmin'@'localhost' as the definer
<a name="AuroraMySQL.Migrating.ExtMySQL.Prechecks.Objects"></a>

Functions, procedures, views, events, and triggers with `'rdsadmin'@'localhost'` as the definer aren't imported.

You can use the following SQL script on your source MySQL database to list the stored objects that have the unsupported definer.

```
-- This SQL query lists routines with `rdsadmin`@`localhost` as the definer.

SELECT
    ROUTINE_SCHEMA,
    ROUTINE_NAME
FROM
    information_schema.routines
WHERE
    definer = 'rdsadmin@localhost';

-- This SQL query lists triggers with `rdsadmin`@`localhost` as the definer.

SELECT
    TRIGGER_SCHEMA,
    TRIGGER_NAME,
    DEFINER
FROM
    information_schema.triggers
WHERE
    DEFINER = 'rdsadmin@localhost';

-- This SQL query lists events with `rdsadmin`@`localhost` as the definer.

SELECT
    EVENT_SCHEMA,
    EVENT_NAME
FROM
    information_schema.events
WHERE
    DEFINER = 'rdsadmin@localhost';

-- This SQL query lists views with `rdsadmin`@`localhost` as the definer.
SELECT
    TABLE_SCHEMA,
    TABLE_NAME
FROM
    information_schema.views
WHERE
    DEFINER = 'rdsadmin@localhost';
```

# Logical migration from MySQL to Amazon Aurora MySQL by using mysqldump
<a name="AuroraMySQL.Migrating.ExtMySQL.mysqldump"></a>

Because Amazon Aurora MySQL is a MySQL-compatible database, you can use the `mysqldump` utility to copy data from your MySQL database, or the `mariadb-dump` utility to copy data from your MariaDB database, to an existing Aurora MySQL DB cluster.

For a discussion of how to do so with MySQL or MariaDB databases that are very large, see the following topics in the *Amazon Relational Database Service User Guide*:
+ MySQL – [Importing data to an Amazon RDS for MySQL database with reduced downtime](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-importing-data-reduced-downtime.html)
+ MariaDB – [Importing data to an Amazon RDS for MariaDB database with reduced downtime](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mariadb-importing-data-reduced-downtime.html)

For MySQL or MariaDB databases that have smaller amounts of data, see the following topics in the *Amazon Relational Database Service User Guide*:
+ MySQL – [Importing data from an external MySQL database to an Amazon RDS for MySQL DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql-importing-data-external-database.html)
+ MariaDB – [Importing data from an external MariaDB database to an Amazon RDS for MariaDB DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mariadb-importing-data-external-database.html)

# Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster
<a name="AuroraMySQL.Migrating.RDSMySQL"></a>

You can migrate (copy) data to an Amazon Aurora MySQL DB cluster from an RDS for MySQL DB instance.

**Topics**
+ [Migrating an RDS for MySQL snapshot to Aurora](AuroraMySQL.Migrating.RDSMySQL.Snapshot.md)
+ [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md)

**Note**  
Because Amazon Aurora MySQL is compatible with MySQL, you can migrate data from your MySQL database by setting up replication between your MySQL database and an Amazon Aurora MySQL DB cluster. For more information, see [Replication with Amazon Aurora](Aurora.Replication.md).

# Migrating an RDS for MySQL snapshot to Aurora
<a name="AuroraMySQL.Migrating.RDSMySQL.Snapshot"></a>

You can migrate a DB snapshot of an RDS for MySQL DB instance to create an Aurora MySQL DB cluster. The new Aurora MySQL DB cluster is populated with the data from the original RDS for MySQL DB instance. The DB snapshot must have been made from an Amazon RDS DB instance running a MySQL version that's compatible with Aurora MySQL.

You can migrate either a manual or automated DB snapshot. After the DB cluster is created, you can then create optional Aurora Replicas.

**Note**  
You can also migrate an RDS for MySQL DB instance to an Aurora MySQL DB cluster by creating an Aurora read replica of your source RDS for MySQL DB instance. For more information, see [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica](AuroraMySQL.Migrating.RDSMySQL.Replica.md).  
You can't migrate to Aurora MySQL version 3.05 and higher from some older MySQL 8.0 versions, including 8.0.11, 8.0.13, and 8.0.15. We recommend that you upgrade to MySQL version 8.0.28 before migrating.

The general steps you must take are as follows:

1. Determine the amount of space to provision for your Aurora MySQL DB cluster. For more information, see [How much space do I need?](#AuroraMySQL.Migrating.RDSMySQL.Space)

1. Use the console to create the snapshot in the AWS Region where the Amazon RDS MySQL instance is located. For information about creating a DB snapshot, see [Creating a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html).

1. If the DB snapshot is not in the same AWS Region as your DB cluster, use the Amazon RDS console to copy the DB snapshot to that AWS Region. For information about copying a DB snapshot, see [Copying a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html).

1. Use the console to migrate the DB snapshot and create an Aurora MySQL DB cluster with the same databases as the original MySQL DB instance. 

**Warning**  
Amazon RDS limits each AWS account to one snapshot copy into each AWS Region at a time.

## How much space do I need?
<a name="AuroraMySQL.Migrating.RDSMySQL.Space"></a>

When you migrate a snapshot of a MySQL DB instance into an Aurora MySQL DB cluster, Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it. In some cases, additional space is needed to format the data for migration.

Tables that are not MyISAM tables and are not compressed can be up to 16 TB in size. If you have MyISAM tables, then Aurora must use additional space in the volume to convert the tables to be compatible with Aurora MySQL. If you have compressed tables, then Aurora must use additional space in the volume to expand these tables before storing them on the Aurora cluster volume. Because of this additional space requirement, you should ensure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceeds 8 TB in size.
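As a rough pre-check, you can compare per-table sizes (for example, from `INFORMATION_SCHEMA.TABLES`) against the 8 TB guideline before migrating. A minimal sketch, with an illustrative helper name:

```python
# Flag MyISAM or compressed tables larger than the 8 TB guideline for
# migration. tables holds (name, size_in_bytes) pairs, for example
# gathered from INFORMATION_SCHEMA.TABLES on the source instance.
EIGHT_TB = 8 * 1024**4  # 8 TiB in bytes

def oversized_tables(tables, limit=EIGHT_TB):
    return [name for name, size in tables if size > limit]
```

Any table this flags should be converted or split before you take the snapshot.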

## Reducing the amount of space required to migrate data into Amazon Aurora MySQL
<a name="AuroraMySQL.Migrating.RDSMySQL.PreImport"></a>

You might want to modify your database schema prior to migrating it into Amazon Aurora. Such modification can be helpful in the following cases: 
+ You want to speed up the migration process.
+ You are unsure of how much space you need to provision.
+ You have attempted to migrate your data and the migration has failed due to a lack of provisioned space.

You can make the following changes to improve the process of migrating a database into Amazon Aurora.

**Important**  
Be sure to perform these updates on a new DB instance restored from a snapshot of a production database, rather than on a production instance. You can then migrate the data from the snapshot of your new DB instance into your Aurora DB cluster to avoid any service interruptions on your production database.


| Table type | Limitation or guideline | 
| --- | --- | 
|  MyISAM tables  |  Aurora MySQL supports InnoDB tables only. If you have MyISAM tables in your database, then those tables must be converted before being migrated into Aurora MySQL. The conversion process requires additional space for the MyISAM to InnoDB conversion during the migration procedure. To reduce your chances of running out of space or to speed up the migration process, convert all of your MyISAM tables to InnoDB tables before migrating them. The size of the resulting InnoDB table is equivalent to the size required by Aurora MySQL for that table. To convert a MyISAM table to InnoDB, run the following command:  `alter table <schema>.<table_name> engine=innodb, algorithm=copy;`   | 
|  Compressed tables  |  Aurora MySQL doesn't support compressed tables (that is, tables created with `ROW_FORMAT=COMPRESSED`).  To reduce your chances of running out of space or to speed up the migration process, expand your compressed tables by setting `ROW_FORMAT` to `DEFAULT`, `COMPACT`, `DYNAMIC`, or `REDUNDANT`. For more information, see [InnoDB row formats](https://dev.mysql.com/doc/refman/8.0/en/innodb-row-format.html) in the MySQL documentation.  | 

You can use the following SQL script on your existing MySQL DB instance to list the tables in your database that are MyISAM tables or compressed tables.

```
-- This script examines a MySQL database for conditions that block
-- migrating the database into Amazon Aurora.
-- It needs to be run from an account that has read permission for the
-- INFORMATION_SCHEMA database.

-- Verify that this is a supported version of MySQL.

select msg as `==> Checking current version of MySQL.`
from
  (
  select
    concat('This script should be run on MySQL version 5.6 or higher. ',
      'Earlier versions are not supported.') as msg,
    cast(substring_index(version(), '.', 1) as unsigned) * 100 +
      cast(substring_index(substring_index(version(), '.', 2), '.', -1)
      as unsigned)
    as major_minor
  ) as T
where major_minor < 506;


-- List MyISAM and compressed tables. Include the table size.

select concat(TABLE_SCHEMA, '.', TABLE_NAME) as `==> MyISAM or Compressed Tables`,
round(((data_length + index_length) / 1024 / 1024), 2) "Approx size (MB)"
from INFORMATION_SCHEMA.TABLES
where
  ENGINE <> 'InnoDB'
  and
  (
    -- User tables
    TABLE_SCHEMA not in ('mysql', 'performance_schema',
                         'information_schema')
    or
    -- Non-standard system tables
    (
      TABLE_SCHEMA = 'mysql' and TABLE_NAME not in
        (
          'columns_priv', 'db', 'event', 'func', 'general_log',
          'help_category', 'help_keyword', 'help_relation',
          'help_topic', 'host', 'ndb_binlog_index', 'plugin',
          'proc', 'procs_priv', 'proxies_priv', 'servers', 'slow_log',
          'tables_priv', 'time_zone', 'time_zone_leap_second',
          'time_zone_name', 'time_zone_transition',
          'time_zone_transition_type', 'user'
        )
    )
  )
  or
  (
    -- Compressed tables
       ROW_FORMAT = 'Compressed'
  );
```

The script produces output similar to the output in the following example. The example shows two tables that must be converted from MyISAM to InnoDB. The output also includes the approximate size of each table in megabytes (MB). 

```
+---------------------------------+------------------+
| ==> MyISAM or Compressed Tables | Approx size (MB) |
+---------------------------------+------------------+
| test.name_table                 |          2102.25 |
| test.my_table                   |            65.25 |
+---------------------------------+------------------+
2 rows in set (0.01 sec)
```

## Migrating an RDS for MySQL DB snapshot to an Aurora MySQL DB cluster
<a name="migrate-snapshot-ams-cluster"></a>

You can migrate a DB snapshot of an RDS for MySQL DB instance to create an Aurora MySQL DB cluster using the AWS Management Console or the AWS CLI. The new Aurora MySQL DB cluster is populated with the data from the original RDS for MySQL DB instance. For information about creating a DB snapshot, see [Creating a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html).

If the DB snapshot is not in the AWS Region where you want to locate your data, copy the DB snapshot to that AWS Region. For information about copying a DB snapshot, see [Copying a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html).

### Console
<a name="AuroraMySQL.Migrating.RDSMySQL.Import.Console"></a>

When you migrate the DB snapshot by using the AWS Management Console, the console takes the actions necessary to create only the DB cluster.

You can also choose for your new Aurora MySQL DB cluster to be encrypted at rest using an AWS KMS key.

**To migrate a MySQL DB snapshot by using the AWS Management Console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Either start the migration from the MySQL DB instance or from the snapshot:

   To start the migration from the DB instance:

   1. In the navigation pane, choose **Databases**, and then select the MySQL DB instance.

   1. For **Actions**, choose **Migrate latest snapshot**.

   To start the migration from the snapshot:

   1. Choose **Snapshots**.

   1. On the **Snapshots** page, choose the snapshot that you want to migrate into an Aurora MySQL DB cluster.

   1. Choose **Snapshot Actions**, and then choose **Migrate Snapshot**.

   The **Migrate Database** page appears.

1. Set the following values on the **Migrate Database** page:
   + **Migrate to DB Engine**: Select `aurora`.
   + **DB Engine Version**: Select the DB engine version for the Aurora MySQL DB cluster.
   + **DB Instance Class**: Select a DB instance class that has the required storage and capacity for your database, for example `db.r3.large`. Aurora cluster volumes automatically grow as the amount of data in your database increases. An Aurora cluster volume can grow to a maximum size of 128 tebibytes (TiB). So you only need to select a DB instance class that meets your current storage requirements. For more information, see [Overview of Amazon Aurora storage](Aurora.Overview.StorageReliability.md#Aurora.Overview.Storage).
   + **DB Instance Identifier**: Type a name for the DB cluster that is unique for your account in the AWS Region you selected. This identifier is used in the endpoint addresses for the instances in your DB cluster. You might choose to add some intelligence to the name, such as including the AWS Region and DB engine you selected, for example **aurora-cluster1**.

     The DB instance identifier has the following constraints:
     + It must contain from 1 to 63 alphanumeric characters or hyphens.
     + Its first character must be a letter.
     + It cannot end with a hyphen or contain two consecutive hyphens.
     + It must be unique for all DB instances per AWS account, per AWS Region.
   + **Virtual Private Cloud (VPC)**: If you have an existing VPC, then you can use that VPC with your Aurora MySQL DB cluster by selecting your VPC identifier, for example `vpc-a464d1c1`. For information on creating a VPC, see [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md).

     Otherwise, you can choose to have Aurora create a VPC for you by selecting **Create a new VPC**. 
   + **DB subnet group**: If you have an existing subnet group, then you can use that subnet group with your Aurora MySQL DB cluster by selecting your subnet group identifier, for example `gs-subnet-group1`.

     Otherwise, you can choose to have Aurora create a subnet group for you by selecting **Create a new subnet group**. 
   + **Public accessibility**: Select **No** to specify that instances in your DB cluster can only be accessed by resources inside of your VPC. Select **Yes** to specify that instances in your DB cluster can be accessed by resources on the public network. The default is **Yes**.
**Note**  
Your production DB cluster might not need to be in a public subnet, because only your application servers require access to your DB cluster. If your DB cluster doesn't need to be in a public subnet, set **Publicly Accessible** to **No**.
   + **Availability Zone**: Select the Availability Zone to host the primary instance for your Aurora MySQL DB cluster. To have Aurora select an Availability Zone for you, select **No Preference**.
   + **Database Port**: Type the default port to be used when connecting to instances in the Aurora MySQL DB cluster. The default is `3306`.
**Note**  
You might be behind a corporate firewall that doesn't allow access to default ports such as the MySQL default port, 3306. In this case, provide a port value that your corporate firewall allows. Remember that port value later when you connect to the Aurora MySQL DB cluster.
   + **Encryption**: Choose **Enable Encryption** for your new Aurora MySQL DB cluster to be encrypted at rest. If you choose **Enable Encryption**, you must choose a KMS key as the **AWS KMS key** value.

     If your DB snapshot isn't encrypted, specify an encryption key to have your DB cluster encrypted at rest.

     If your DB snapshot is encrypted, specify an encryption key to have your DB cluster encrypted at rest using the specified encryption key. You can specify the encryption key used by the DB snapshot or a different key. You can't create an unencrypted DB cluster from an encrypted DB snapshot.
   + **Auto Minor Version Upgrade**: This setting doesn't apply to Aurora MySQL DB clusters.

     For more information about engine updates for Aurora MySQL, see [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md).

1. Choose **Migrate** to migrate your DB snapshot. 

1. Choose **Instances**, and then choose the arrow icon to show the DB cluster details and monitor the progress of the migration. On the details page, you can find the cluster endpoint used to connect to the primary instance of the DB cluster. For more information on connecting to an Aurora MySQL DB cluster, see [Connecting to an Amazon Aurora DB cluster](Aurora.Connecting.md). 
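The DB instance identifier constraints listed earlier can be checked before you submit the form. The following sketch encodes those rules as a simple validator; the regular expression and function are illustrative, not an official AWS validator.

```python
import re

# Sketch of the DB instance identifier rules described above:
# 1 to 63 alphanumeric characters or hyphens, first character a letter,
# no trailing hyphen, and no two consecutive hyphens.
_IDENTIFIER_RE = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,62}$")

def is_valid_identifier(name):
    if not _IDENTIFIER_RE.match(name):
        return False
    if name.endswith("-") or "--" in name:
        return False
    return True

print(is_valid_identifier("aurora-cluster1"))  # True
print(is_valid_identifier("1cluster"))         # False: starts with a digit
print(is_valid_identifier("cluster--a"))       # False: consecutive hyphens
```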

### AWS CLI
<a name="USER_ImportAuroraCluster.CLI"></a>

You can create an Aurora DB cluster from a DB snapshot of an RDS for MySQL DB instance by using the [restore-db-cluster-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html) command with the following parameters:
+ `--db-cluster-identifier` – The name of the DB cluster to create.
+ `--engine aurora-mysql` – For a MySQL 5.7–compatible or 8.0–compatible DB cluster.
+ `--kms-key-id` – The AWS KMS key to optionally encrypt the DB cluster with, depending on whether your DB snapshot is encrypted.
  + If your DB snapshot isn't encrypted, specify an encryption key to have your DB cluster encrypted at rest. Otherwise, your DB cluster isn't encrypted.
  + If your DB snapshot is encrypted, specify an encryption key to have your DB cluster encrypted at rest using the specified encryption key. Otherwise, your DB cluster is encrypted at rest using the encryption key for the DB snapshot.
**Note**  
You can't create an unencrypted DB cluster from an encrypted DB snapshot.
+ `--snapshot-identifier` – The Amazon Resource Name (ARN) of the DB snapshot to migrate. For more information about Amazon RDS ARNs, see [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-rds).

When you migrate the DB snapshot by using the `RestoreDBClusterFromSnapshot` command, the command creates both the DB cluster and the primary instance.

In this example, you create a MySQL 5.7–compatible DB cluster named *mydbcluster* from a DB snapshot with an ARN set to *mydbsnapshotARN*.

For Linux, macOS, or Unix:

```
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier mydbcluster \
    --snapshot-identifier mydbsnapshotARN \
    --engine aurora-mysql
```

For Windows:

```
aws rds restore-db-cluster-from-snapshot ^
    --db-cluster-identifier mydbcluster ^
    --snapshot-identifier mydbsnapshotARN ^
    --engine aurora-mysql
```

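The parameter rules above, including the conditional `--kms-key-id` behavior, can be sketched as a helper that assembles the restore parameters. The function and its dictionary layout are hypothetical, shown only to make the encryption logic concrete; they are not part of the AWS CLI or an AWS SDK.

```python
# Hypothetical helper: assemble CLI-style parameters for
# restore-db-cluster-from-snapshot following the rules described above.
# The dictionary keys mirror the AWS CLI option names.

def build_restore_params(cluster_id, snapshot_arn, kms_key_id=None):
    params = {
        "db-cluster-identifier": cluster_id,
        "snapshot-identifier": snapshot_arn,
        "engine": "aurora-mysql",  # MySQL 5.7- or 8.0-compatible cluster
    }
    # For an unencrypted snapshot, passing a key encrypts the new cluster
    # at rest; omitting it leaves the restored cluster unencrypted.
    if kms_key_id is not None:
        params["kms-key-id"] = kms_key_id
    return params

print(build_restore_params("mydbcluster", "mydbsnapshotARN"))
```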

# Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster by using an Aurora read replica
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica"></a>

Aurora uses the MySQL DB engine's binary log replication functionality to create a special type of DB cluster, called an Aurora read replica, for a source RDS for MySQL DB instance. Updates made to the source RDS for MySQL DB instance are asynchronously replicated to the Aurora read replica.

We recommend using this functionality to migrate from an RDS for MySQL DB instance to an Aurora MySQL DB cluster by creating an Aurora read replica of your source RDS for MySQL DB instance. When the replica lag between the RDS for MySQL DB instance and the Aurora read replica is 0, you can direct your client applications to the Aurora read replica and then stop replication to make the Aurora read replica a standalone Aurora MySQL DB cluster. Be prepared for migration to take a while, roughly several hours per tebibyte (TiB) of data.

For a list of regions where Aurora is available, see [Amazon Aurora](https://docs.aws.amazon.com/general/latest/gr/rande.html#aurora) in the *AWS General Reference*.

When you create an Aurora read replica of an RDS for MySQL DB instance, Amazon RDS creates a DB snapshot of your source RDS for MySQL DB instance (private to Amazon RDS, and incurring no charges). Amazon RDS then migrates the data from the DB snapshot to the Aurora read replica. After the data from the DB snapshot has been migrated to the new Aurora MySQL DB cluster, Amazon RDS starts replication between your RDS for MySQL DB instance and the Aurora MySQL DB cluster. If your RDS for MySQL DB instance contains tables that use storage engines other than InnoDB, or that use compressed row format, you can speed up the process of creating an Aurora read replica by altering those tables to use the InnoDB storage engine and dynamic row format before you create your Aurora read replica. For more information about the process of copying a MySQL DB snapshot to an Aurora MySQL DB cluster, see [Migrating data from an RDS for MySQL DB instance to an Amazon Aurora MySQL DB cluster](AuroraMySQL.Migrating.RDSMySQL.md).

You can have only one Aurora read replica for an RDS for MySQL DB instance.

**Note**  
Replication issues can arise due to feature differences between Aurora MySQL and the MySQL database engine version of your RDS for MySQL DB instance that is the replication primary. If you encounter an error, you can find help in the [Amazon RDS community forum](https://forums.aws.amazon.com/forum.jspa?forumID=60) or by contacting AWS Support.  
You can't create an Aurora read replica if your RDS for MySQL DB instance is already the source for a cross-Region read replica.  
You can't migrate to Aurora MySQL version 3.05 and higher from some older RDS for MySQL 8.0 versions, including 8.0.11, 8.0.13, and 8.0.15. We recommend that you upgrade to RDS for MySQL version 8.0.28 before migrating.

For more information on MySQL read replicas, see [Working with read replicas of MariaDB, MySQL, and PostgreSQL DB instances](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html).

## Creating an Aurora read replica
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.Create"></a>

You can create an Aurora read replica for an RDS for MySQL DB instance by using the console, the AWS CLI, or the RDS API.

### Console
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.Create.Console"></a>

**To create an Aurora read replica from a source RDS for MySQL DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**. 

1. Choose the MySQL DB instance that you want to use as the source for your Aurora read replica.

1. For **Actions**, choose **Create Aurora read replica**.

1. Choose the DB cluster specifications you want to use for the Aurora read replica, as described in the following table.     
<a name="aurora_read_replica_param_advice"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Replica.html)

1. Choose **Create read replica**.

### AWS CLI
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.Create.CLI"></a>

To create an Aurora read replica from a source RDS for MySQL DB instance, use the [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) and [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI commands to create a new Aurora MySQL DB cluster. When you call the `create-db-cluster` command, include the `--replication-source-identifier` parameter to identify the Amazon Resource Name (ARN) for the source MySQL DB instance. For more information about Amazon RDS ARNs, see [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-rds).

Don't specify the master username, master password, or database name, because the Aurora read replica uses the same master username, master password, and database name as the source MySQL DB instance.

For Linux, macOS, or Unix:

```
aws rds create-db-cluster --db-cluster-identifier sample-replica-cluster --engine aurora \
    --db-subnet-group-name mysubnetgroup --vpc-security-group-ids sg-c7e5b0d2 \
    --replication-source-identifier arn:aws:rds:us-west-2:123456789012:db:primary-mysql-instance
```

For Windows:

```
aws rds create-db-cluster --db-cluster-identifier sample-replica-cluster --engine aurora ^
    --db-subnet-group-name mysubnetgroup --vpc-security-group-ids sg-c7e5b0d2 ^
    --replication-source-identifier arn:aws:rds:us-west-2:123456789012:db:primary-mysql-instance
```

If you use the console to create an Aurora read replica, then Aurora automatically creates the primary instance for your Aurora read replica DB cluster. If you use the AWS CLI to create an Aurora read replica, you must explicitly create the primary instance for your DB cluster. The primary instance is the first instance that is created in a DB cluster.

You can create a primary instance for your DB cluster by using the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI command with the following parameters.
+ `--db-cluster-identifier`

  The name of your DB cluster.
+ `--db-instance-class`

  The name of the DB instance class to use for your primary instance.
+ `--db-instance-identifier`

  The name of your primary instance.
+ `--engine aurora`

In this example, you create a primary instance named *myreadreplicainstance* for the DB cluster named *myreadreplicacluster*, using the DB instance class specified in *myinstanceclass*.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-db-instance \
    --db-cluster-identifier myreadreplicacluster \
    --db-instance-class myinstanceclass \
    --db-instance-identifier myreadreplicainstance \
    --engine aurora
```
For Windows:  

```
aws rds create-db-instance ^
    --db-cluster-identifier myreadreplicacluster ^
    --db-instance-class myinstanceclass ^
    --db-instance-identifier myreadreplicainstance ^
    --engine aurora
```

### RDS API
<a name="Aurora.Migration.RDSMySQL.Create.API"></a>

To create an Aurora read replica from a source RDS for MySQL DB instance, use the [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) and [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) Amazon RDS API operations to create a new Aurora DB cluster and primary instance. Don't specify the master username, master password, or database name, because the Aurora read replica uses the same master username, master password, and database name as the source RDS for MySQL DB instance.

You can create a new Aurora DB cluster for an Aurora read replica from a source RDS for MySQL DB instance by using the [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) Amazon RDS API operation with the following parameters:
+ `DBClusterIdentifier`

  The name of the DB cluster to create.
+ `DBSubnetGroupName`

  The name of the DB subnet group to associate with this DB cluster.
+ `Engine=aurora`
+ `KmsKeyId`

  The AWS KMS key to optionally encrypt the DB cluster with, depending on whether your MySQL DB instance is encrypted.
  + If your MySQL DB instance isn't encrypted, specify an encryption key to have your DB cluster encrypted at rest. Otherwise, your DB cluster is encrypted at rest using the default encryption key for your account.
  + If your MySQL DB instance is encrypted, specify an encryption key to have your DB cluster encrypted at rest using the specified encryption key. Otherwise, your DB cluster is encrypted at rest using the encryption key for the MySQL DB instance.
**Note**  
You can't create an unencrypted DB cluster from an encrypted MySQL DB instance.
+ `ReplicationSourceIdentifier`

  The Amazon Resource Name (ARN) for the source MySQL DB instance. For more information about Amazon RDS ARNs, see [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-rds). 
+ `VpcSecurityGroupIds`

  The list of EC2 VPC security groups to associate with this DB cluster.

In this example, you create a DB cluster named *myreadreplicacluster* from a source MySQL DB instance with an ARN set to *mysqlprimaryARN*, associated with a DB subnet group named *mysubnetgroup* and a VPC security group named *mysecuritygroup*.

**Example**  

```
https://rds.us-east-1.amazonaws.com/
    ?Action=CreateDBCluster
    &DBClusterIdentifier=myreadreplicacluster
    &DBSubnetGroupName=mysubnetgroup
    &Engine=aurora
    &ReplicationSourceIdentifier=mysqlprimaryARN
    &SignatureMethod=HmacSHA256
    &SignatureVersion=4
    &Version=2014-10-31
    &VpcSecurityGroupIds=mysecuritygroup
    &X-Amz-Algorithm=AWS4-HMAC-SHA256
    &X-Amz-Credential=AKIADQKE4SARGYLE/20150927/us-east-1/rds/aws4_request
    &X-Amz-Date=20150927T164851Z
    &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
    &X-Amz-Signature=6a8f4bd6a98f649c75ea04a6b3929ecc75ac09739588391cd7250f5280e716db
```

If you use the console to create an Aurora read replica, then Aurora automatically creates the primary instance for your Aurora read replica DB cluster. If you use the RDS API to create an Aurora read replica, you must explicitly create the primary instance for your DB cluster. The primary instance is the first instance that is created in a DB cluster.

You can create a primary instance for your DB cluster by using the [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) Amazon RDS API operation with the following parameters:
+ `DBClusterIdentifier`

  The name of your DB cluster.
+ `DBInstanceClass`

  The name of the DB instance class to use for your primary instance.
+ `DBInstanceIdentifier`

  The name of your primary instance.
+ `Engine=aurora`

In this example, you create a primary instance named *myreadreplicainstance* for the DB cluster named *myreadreplicacluster*, using the DB instance class specified in *myinstanceclass*.

**Example**  

```
https://rds.us-east-1.amazonaws.com/
    ?Action=CreateDBInstance
    &DBClusterIdentifier=myreadreplicacluster
    &DBInstanceClass=myinstanceclass
    &DBInstanceIdentifier=myreadreplicainstance
    &Engine=aurora
    &SignatureMethod=HmacSHA256
    &SignatureVersion=4
    &Version=2014-09-01
    &X-Amz-Algorithm=AWS4-HMAC-SHA256
    &X-Amz-Credential=AKIADQKE4SARGYLE/20140424/us-east-1/rds/aws4_request
    &X-Amz-Date=20140424T194844Z
    &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
    &X-Amz-Signature=bee4aabc750bf7dad0cd9e22b952bd6089d91e2a16592c2293e532eeaab8bc77
```

## Viewing an Aurora read replica
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.View"></a>

You can view the MySQL to Aurora MySQL replication relationships for your Aurora MySQL DB clusters by using the AWS Management Console or the AWS CLI.

### Console
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.View.Console"></a>

**To view the primary MySQL DB instance for an Aurora read replica**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**. 

1. Choose the DB cluster for the Aurora read replica to display its details. The primary MySQL DB instance information is in the **Replication source** field.  
![\[View MySQL primary\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-repl6.png)

### AWS CLI
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.View.CLI"></a>

To view the MySQL to Aurora MySQL replication relationships for your Aurora MySQL DB clusters by using the AWS CLI, use the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) and [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) commands.

To determine which MySQL DB instance is the primary, use the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command and specify the cluster identifier of the Aurora read replica for the `--db-cluster-identifier` option. Refer to the `ReplicationSourceIdentifier` element in the output for the ARN of the DB instance that is the replication primary.

To determine which DB cluster is the Aurora read replica, use the [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) command and specify the instance identifier of the MySQL DB instance for the `--db-instance-identifier` option. Refer to the `ReadReplicaDBClusterIdentifiers` element in the output for the DB cluster identifier of the Aurora read replica.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds describe-db-clusters \
    --db-cluster-identifier myreadreplicacluster
```

```
aws rds describe-db-instances \
    --db-instance-identifier mysqlprimary
```
For Windows:  

```
aws rds describe-db-clusters ^
    --db-cluster-identifier myreadreplicacluster
```

```
aws rds describe-db-instances ^
    --db-instance-identifier mysqlprimary
```
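If you prefer to work with the JSON that these commands return, the replication relationship can be read directly from the `ReplicationSourceIdentifier` and `ReadReplicaDBClusterIdentifiers` elements. The following sketch parses illustrative response fragments; the identifiers and ARN in the sample payloads are placeholders, not real resources.

```python
# Illustrative sketch: extract the replication relationship from
# describe-db-clusters and describe-db-instances JSON output.
# The sample payloads below are placeholders, not real resources.

def replication_source(describe_db_clusters_output):
    """ARN of the primary DB instance for an Aurora read replica cluster."""
    cluster = describe_db_clusters_output["DBClusters"][0]
    return cluster.get("ReplicationSourceIdentifier")

def read_replica_clusters(describe_db_instances_output):
    """DB cluster identifiers of Aurora read replicas of a MySQL instance."""
    instance = describe_db_instances_output["DBInstances"][0]
    return instance.get("ReadReplicaDBClusterIdentifiers", [])

clusters = {"DBClusters": [{
    "DBClusterIdentifier": "myreadreplicacluster",
    "ReplicationSourceIdentifier": "arn:aws:rds:us-west-2:123456789012:db:mysqlprimary",
}]}
instances = {"DBInstances": [{
    "DBInstanceIdentifier": "mysqlprimary",
    "ReadReplicaDBClusterIdentifiers": ["myreadreplicacluster"],
}]}

print(replication_source(clusters))
print(read_replica_clusters(instances))
```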

## Promoting an Aurora read replica
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.Promote"></a>

After migration completes, you can promote the Aurora read replica to a stand-alone DB cluster using the AWS Management Console or AWS CLI.

Then you can direct your client applications to the endpoint for the Aurora read replica. For more information on Aurora endpoints, see [Amazon Aurora endpoint connections](Aurora.Overview.Endpoints.md). Promotion should complete fairly quickly, and you can read from and write to the Aurora read replica during promotion. However, you can't delete the primary MySQL DB instance or unlink the DB instance and the Aurora read replica during this time.

Before you promote your Aurora read replica, stop any transactions from being written to the source MySQL DB instance, and then wait for the replica lag on the Aurora read replica to reach 0. You can view the replica lag for an Aurora read replica by running the `SHOW SLAVE STATUS` (Aurora MySQL version 2) or `SHOW REPLICA STATUS` (Aurora MySQL version 3) statement on your Aurora read replica. Check the `Seconds_Behind_Master` (version 2) or `Seconds_Behind_Source` (version 3) value.

You can start writing to the Aurora read replica after write transactions to the primary have stopped and replica lag is 0. If you write to the Aurora read replica before this and you modify tables that are also being modified on the MySQL primary, you risk breaking replication to Aurora. If this happens, you must delete and recreate your Aurora read replica.
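The readiness condition described above, writes stopped on the source and replica lag at 0, can be expressed as a small predicate. This is an illustrative sketch of the decision logic only, not an AWS API.

```python
# Illustrative sketch of the promotion-readiness check described above.
# seconds_behind is the lag value reported by SHOW SLAVE STATUS or
# SHOW REPLICA STATUS on the Aurora read replica.

def safe_to_promote(source_writes_stopped, seconds_behind):
    # NULL (None) means replication is not running; that is not lag 0.
    if seconds_behind is None:
        return False
    return source_writes_stopped and seconds_behind == 0

print(safe_to_promote(True, 0))    # True: writes stopped, replica caught up
print(safe_to_promote(True, 42))   # False: replica still lagging
print(safe_to_promote(False, 0))   # False: source still accepting writes
```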

### Console
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.Promote.Console"></a>

**To promote an Aurora read replica to an Aurora DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB cluster for the Aurora read replica.

1. For **Actions**, choose **Promote**.

1. Choose **Promote read replica**.

After you promote, confirm that the promotion has completed by using the following procedure.

**To confirm that the Aurora read replica was promoted**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Events**.

1. On the **Events** page, verify that there is a `Promoted Read Replica cluster to a stand-alone database cluster` event for the cluster that you promoted.

After promotion is complete, the primary MySQL DB instance and the Aurora read replica are unlinked, and you can safely delete the DB instance if you want.

### AWS CLI
<a name="AuroraMySQL.Migrating.RDSMySQL.Replica.Promote.CLI"></a>

To promote an Aurora read replica to a stand-alone DB cluster, use the [promote-read-replica-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/promote-read-replica-db-cluster.html) AWS CLI command.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds promote-read-replica-db-cluster \
    --db-cluster-identifier myreadreplicacluster
```
For Windows:  

```
aws rds promote-read-replica-db-cluster ^
    --db-cluster-identifier myreadreplicacluster
```