Cloning a volume for an Amazon Aurora DB cluster

By using Aurora cloning, you can create a new cluster that initially shares the same data pages as the original, but is a separate and independent volume. The process is designed to be fast and cost-effective. The new cluster with its associated data volume is known as a clone. Creating a clone is faster and more space-efficient than physically copying the data using other techniques, such as restoring a snapshot.

Overview of Aurora cloning

Aurora uses a copy-on-write protocol to create a clone. This mechanism uses minimal additional space to create an initial clone. When the clone is first created, Aurora keeps a single copy of the data that is used by the source Aurora DB cluster and the new (cloned) Aurora DB cluster. Additional storage is allocated only when changes are made to data (on the Aurora storage volume) by the source Aurora DB cluster or the Aurora DB cluster clone. To learn more about the copy-on-write protocol, see How Aurora cloning works.

Aurora cloning is especially useful for quickly setting up test environments using your production data, without risking data corruption. You can use clones for many types of applications, such as the following:

  • Experiment with potential changes (schema changes and parameter group changes, for example) to assess all impacts.

  • Run workload-intensive operations, such as exporting data or running analytical queries on the clone.

  • Create a copy of your production DB cluster for development, testing, or other purposes.

You can create more than one clone from the same Aurora DB cluster. You can also create multiple clones from another clone.

After creating an Aurora clone, you can configure the Aurora DB instances differently from the source Aurora DB cluster. For example, you might not need a clone for development purposes to meet the same high availability requirements as the source production Aurora DB cluster. In this case, you can configure the clone with a single Aurora DB instance rather than the multiple DB instances used by the Aurora DB cluster.

When you create a clone using a different deployment configuration from the source, the clone is created using the latest minor version of the source's Aurora DB engine.

When you create clones from your Aurora DB clusters, the clones are created in your AWS account—the same account that owns the source Aurora DB cluster. However, you can also share Aurora Serverless v2 and provisioned Aurora DB clusters and clones with other AWS accounts. For more information, see Cross-account cloning with AWS RAM and Amazon Aurora.

When you finish using the clone for your testing, development, or other purposes, you can delete it.
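For example, to delete a clone that has a single DB instance by using the AWS CLI, you delete the DB instance and then the cluster. The following is a minimal sketch; the identifiers are placeholders, and --skip-final-snapshot assumes that you don't need to keep a final snapshot of the clone.

aws rds delete-db-instance --db-instance-identifier my-clone-instance

aws rds delete-db-cluster --db-cluster-identifier my-clone \
    --skip-final-snapshot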

Limitations of Aurora cloning

Aurora cloning currently has the following limitations:

  • You can create as many clones as you want, up to the maximum number of DB clusters allowed in the AWS Region.

    You can create the clones using the copy-on-write protocol or the full-copy protocol. The full-copy protocol acts like a point-in-time recovery.

  • You can't create a clone in a different AWS Region from the source Aurora DB cluster.

  • You can't create a clone from an Aurora DB cluster without the parallel query feature to a cluster that uses parallel query. To bring data into a cluster that uses parallel query, create a snapshot of the original cluster and restore it to the cluster that's using the parallel query feature.

  • You can't create a clone from an Aurora DB cluster that has no DB instances. You can only clone Aurora DB clusters that have at least one DB instance.

  • You can create a clone in a different virtual private cloud (VPC) than that of the Aurora DB cluster. If you do, the subnets of the VPCs must map to the same Availability Zones.

  • You can create an Aurora provisioned clone from a provisioned Aurora DB cluster.

  • Clusters with Aurora Serverless v2 instances follow the same rules as provisioned clusters.

  • For Aurora Serverless v1:

    • You can create a provisioned clone from an Aurora Serverless v1 DB cluster.

    • You can create an Aurora Serverless v1 clone from an Aurora Serverless v1 or provisioned DB cluster.

    • You can't create an Aurora Serverless v1 clone from an unencrypted, provisioned Aurora DB cluster.

    • Cross-account cloning currently doesn't support cloning Aurora Serverless v1 DB clusters. For more information, see Limitations of cross-account cloning.

    • A cloned Aurora Serverless v1 DB cluster has the same behavior and limitations as any Aurora Serverless v1 DB cluster. For more information, see Using Amazon Aurora Serverless v1.

    • Aurora Serverless v1 DB clusters are always encrypted. When you clone an Aurora Serverless v1 DB cluster into a provisioned Aurora DB cluster, the provisioned Aurora DB cluster is encrypted. You can choose the encryption key, but you can't disable the encryption. To clone from a provisioned Aurora DB cluster to an Aurora Serverless v1, you must start with an encrypted provisioned Aurora DB cluster.

How Aurora cloning works

Aurora cloning works at the storage layer of an Aurora DB cluster. It uses a copy-on-write protocol that's both fast and space-efficient in terms of the underlying durable media supporting the Aurora storage volume. You can learn more about Aurora cluster volumes in the Overview of Amazon Aurora storage.

Understanding the copy-on-write protocol

An Aurora DB cluster stores data in pages in the underlying Aurora storage volume.

For example, in the following diagram you can find an Aurora DB cluster (A) that has four data pages, 1, 2, 3, and 4. Imagine that a clone, B, is created from the Aurora DB cluster. When the clone is created, no data is copied. Rather, the clone points to the same set of pages as the source Aurora DB cluster.

Amazon Aurora cluster volume with 4 pages for source cluster, A, and clone, B

When the clone is created, no additional storage is usually needed. The copy-on-write protocol uses the same segment on the physical storage media as the source segment. Additional storage is required only if the capacity of the source segment isn't sufficient for the entire clone segment. If that's the case, the source segment is copied to another physical device.

In the following diagrams, you can find an example of the copy-on-write protocol in action using the same cluster A and its clone, B, as shown preceding. Let's say that you make a change to your Aurora DB cluster (A) that results in a change to data held on page 1. Instead of writing to the original page 1, Aurora creates a new page 1[A]. The Aurora DB cluster volume for cluster (A) now points to page 1[A], 2, 3, and 4, while the clone (B) still references the original pages.

Amazon Aurora source DB cluster volume and its clone, both with changes.

On the clone, a change is made to page 4 on the storage volume. Instead of writing to the original page 4, Aurora creates a new page, 4[B]. The clone now points to pages 1, 2, 3, and to page 4[B], while the cluster (A) continues pointing to 1[A], 2, 3, and 4.

Amazon Aurora source DB cluster volume and its clone, both with changes.

As more changes occur over time in both the source Aurora DB cluster volume and the clone, more storage is needed to capture and store the changes.

Deleting a source cluster volume

Initially, the clone volume shares the same data pages as the original volume from which the clone is created. As long as the original volume exists, the clone volume is only considered the owner of the pages that the clone created or modified. Thus, the VolumeBytesUsed metric for the clone volume starts out small and only grows as the data diverges between the original cluster and the clone. For pages that are identical between the source volume and the clone, the storage charges apply only to the original cluster. For more information about the VolumeBytesUsed metric, see Cluster-level metrics for Amazon Aurora.

When you delete a source cluster volume that has one or more clones associated with it, the data in the cluster volumes of the clones isn't changed. Aurora preserves the pages that were previously owned by the source cluster volume. Aurora redistributes the storage billing for the pages that were owned by the deleted cluster. For example, suppose that an original cluster had two clones and then the original cluster was deleted. Half of the data pages owned by the original cluster would now be owned by one clone. The other half of the pages would be owned by the other clone.

If you delete the original cluster, then as you create or delete more clones, Aurora continues to redistribute ownership of the data pages among all the clones that share the same pages. Thus, you might find that the value of the VolumeBytesUsed metric changes for the cluster volume of a clone. The metric value can decrease as more clones are created and page ownership is spread across more clusters. The metric value can also increase as clones are deleted and page ownership is assigned to a smaller number of clusters. For information about how write operations affect data pages on clone volumes, see Understanding the copy-on-write protocol.

When the original cluster and the clones are owned by the same AWS account, all the storage charges for those clusters apply to that same AWS account. If some of the clusters are cross-account clones, deleting the original cluster can result in additional storage charges to the AWS accounts that own the cross-account clones.

For example, suppose that a cluster volume has 1000 used data pages before you create any clones. When you clone that cluster, initially the clone volume has zero used pages. If the clone makes modifications to 100 data pages, only those 100 pages are stored on the clone volume and marked as used. The other 900 unchanged pages from the parent volume are shared by both clusters. In this case, the parent cluster has storage charges for 1000 pages and the clone volume for 100 pages.

If you delete the source volume, the storage charges for the clone include the 100 pages that it changed, plus the 900 shared pages from the original volume, for a total of 1000 pages.
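To watch how storage diverges between a source cluster and its clones, you can retrieve the VolumeBytesUsed metric from CloudWatch. The following is a sketch, assuming the cluster-level metric dimension DbClusterIdentifier; the cluster identifier and time range are placeholders.

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name VolumeBytesUsed \
    --dimensions Name=DbClusterIdentifier,Value=my-clone \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-02T00:00:00Z \
    --period 3600 \
    --statistics Average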

Creating an Amazon Aurora clone

You can create a clone in the same AWS account as the source Aurora DB cluster. To do so, you can use the AWS Management Console or the AWS CLI and the procedures following.

To allow another AWS account to create a clone or to share a clone with another AWS account, use the procedures in Cross-account cloning with AWS RAM and Amazon Aurora.

The following procedure describes how to clone an Aurora DB cluster using the AWS Management Console.

Creating a clone using the AWS Management Console results in an Aurora DB cluster with one Aurora DB instance.

These instructions apply for DB clusters owned by the same AWS account that is creating the clone. If the DB cluster is owned by a different AWS account, see Cross-account cloning with AWS RAM and Amazon Aurora instead.

To create a clone of a DB cluster owned by your AWS account using the AWS Management Console
  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. In the navigation pane, choose Databases.

  3. Choose your Aurora DB cluster from the list, and for Actions, choose Create clone.

    The Create clone page opens, where you can configure Settings, Connectivity, and other options for the Aurora DB cluster clone.

  4. For DB instance identifier, enter the name that you want to give to your cloned Aurora DB cluster.

  5. For Aurora Serverless v1 DB clusters, choose Provisioned or Serverless for Capacity type.

    You can choose Serverless only if the source Aurora DB cluster is an Aurora Serverless v1 DB cluster or is a provisioned Aurora DB cluster that is encrypted.

  6. For Aurora Serverless v2 or provisioned DB clusters, choose either Aurora I/O-Optimized or Aurora Standard for Cluster storage configuration.

    For more information, see Storage configurations for Amazon Aurora DB clusters.

  7. Choose the DB instance size or DB cluster capacity:

    • For a provisioned clone, choose a DB instance class.

      You can accept the provided setting, or you can use a different DB instance class for your clone.

    • For an Aurora Serverless v1 or Aurora Serverless v2 clone, choose the Capacity settings.

      You can accept the provided settings, or you can change them for your clone.

  8. Choose other settings as needed for your clone. To learn more about Aurora DB cluster and instance settings, see Creating an Amazon Aurora DB cluster.

  9. Choose Create clone.

When the clone is created, it's listed with your other Aurora DB clusters in the console Databases section and displays its current state. Your clone is ready to use when its state is Available.

Using the AWS CLI for cloning your Aurora DB cluster involves a couple of steps.

The restore-db-cluster-to-point-in-time AWS CLI command that you use results in an empty Aurora DB cluster with 0 Aurora DB instances. That is, the command restores only the Aurora DB cluster, not the DB instances for that cluster. You do that separately after the clone is available. The two steps in the process are as follows:

  1. Create the clone by using the restore-db-cluster-to-point-in-time CLI command. The parameters that you use with this command control the capacity type and other details of the empty Aurora DB cluster (clone) being created.

  2. Create the Aurora DB instance for the clone by using the create-db-instance CLI command to recreate the Aurora DB instance in the restored Aurora DB cluster.

Creating the clone

The specific parameters that you pass to the restore-db-cluster-to-point-in-time CLI command vary. What you pass depends on the engine-mode type of the source DB cluster—Serverless or Provisioned—and the type of clone that you want to create.

To create a clone of the same engine mode as the source Aurora DB cluster
  • Use the restore-db-cluster-to-point-in-time CLI command and specify values for the following parameters:

    • --db-cluster-identifier – Choose a meaningful name for your clone. You name the clone when you use the restore-db-cluster-to-point-in-time CLI command. You then pass the name of the clone in the create-db-instance CLI command.

    • --restore-type – Use copy-on-write to create a clone of the source DB cluster. Without this parameter, the restore-db-cluster-to-point-in-time command restores the Aurora DB cluster rather than creating a clone.

    • --source-db-cluster-identifier – Use the name of the source Aurora DB cluster that you want to clone.

    • --use-latest-restorable-time – This value points to the latest restorable volume data for the source DB cluster. Use it to create clones.

The following example creates a clone named my-clone from a cluster named my-source-cluster.

For Linux, macOS, or Unix:

aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier my-source-cluster \
    --db-cluster-identifier my-clone \
    --restore-type copy-on-write \
    --use-latest-restorable-time

For Windows:

aws rds restore-db-cluster-to-point-in-time ^
    --source-db-cluster-identifier my-source-cluster ^
    --db-cluster-identifier my-clone ^
    --restore-type copy-on-write ^
    --use-latest-restorable-time

The command returns the JSON object containing details of the clone. Check to make sure that your cloned DB cluster is available before trying to create the DB instance for your clone. For more information, see Checking the status and getting clone details.
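In a script, one way to pause until the clone is ready is the RDS waiter, assuming an AWS CLI version that includes the db-cluster-available waiter:

aws rds wait db-cluster-available --db-cluster-identifier my-clone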

To create a clone with a different engine mode from the source Aurora DB cluster
  • Use the restore-db-cluster-to-point-in-time CLI command and specify values for the following parameters:

    • --db-cluster-identifier – Choose a meaningful name for your clone. You name the clone when you use the restore-db-cluster-to-point-in-time CLI command. You then pass the name of the clone in the create-db-instance CLI command.

    • --source-db-cluster-identifier – Use the name of the source Aurora DB cluster that you want to clone.

    • --restore-type – Use copy-on-write to create a clone of the source DB cluster. Without this parameter, the restore-db-cluster-to-point-in-time command restores the Aurora DB cluster rather than creating a clone.

    • --use-latest-restorable-time – This value points to the latest restorable volume data for the source DB cluster. Use it to create clones.

    • --engine-mode – (Optional) Use this parameter only to create clones that are of a different type from the source Aurora DB cluster. Choose the value to pass with --engine-mode as follows:

      • Use provisioned to create a provisioned Aurora DB cluster clone from an Aurora Serverless DB cluster.

      • Use serverless to create an Aurora Serverless v1 DB cluster clone from a provisioned Aurora DB cluster. When you specify the serverless engine mode, you can also choose the --scaling-configuration.

    • --scaling-configuration – (Optional) Use with --engine-mode serverless to configure the minimum and maximum capacity for an Aurora Serverless v1 clone. If you don't use this parameter, Aurora creates the clone using the default capacity values for the DB engine.

    • --serverless-v2-scaling-configuration – (Optional) Use this parameter to configure the minimum and maximum capacity for an Aurora Serverless v2 clone. If you don't use this parameter, Aurora creates the clone using the default capacity values for the DB engine.

The following example creates an Aurora Serverless v1 clone named my-clone, from a provisioned Aurora DB cluster named my-source-cluster. The provisioned Aurora DB cluster is encrypted.

For Linux, macOS, or Unix:

aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier my-source-cluster \
    --db-cluster-identifier my-clone \
    --engine-mode serverless \
    --scaling-configuration MinCapacity=8,MaxCapacity=64 \
    --restore-type copy-on-write \
    --use-latest-restorable-time

For Windows:

aws rds restore-db-cluster-to-point-in-time ^
    --source-db-cluster-identifier my-source-cluster ^
    --db-cluster-identifier my-clone ^
    --engine-mode serverless ^
    --scaling-configuration MinCapacity=8,MaxCapacity=64 ^
    --restore-type copy-on-write ^
    --use-latest-restorable-time

These commands return the JSON object containing details of the clone that you need to create the DB instance. You can't create the DB instance until the clone (the empty Aurora DB cluster) has the status Available.

Note

The restore-db-cluster-to-point-in-time AWS CLI command only restores the DB cluster, not the DB instances for that DB cluster. You must invoke the create-db-instance command to create DB instances for the restored DB cluster, specifying the identifier of the restored DB cluster in --db-cluster-identifier. You can create DB instances only after the restore-db-cluster-to-point-in-time command has completed and the DB cluster is available.

For example, suppose you have a cluster named tpch100g that you want to clone. The following Linux example creates a cloned cluster named tpch100g-clone and a primary instance named tpch100g-clone-instance for the new cluster. You don't need to supply some parameters, such as --master-username and --master-user-password. Aurora automatically determines those from the original cluster. You do need to specify the DB engine to use. Thus, the example queries the new cluster to determine the right value to use for the --engine parameter.

$ aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier tpch100g \
    --db-cluster-identifier tpch100g-clone \
    --restore-type copy-on-write \
    --use-latest-restorable-time

$ aws rds describe-db-clusters \
    --db-cluster-identifier tpch100g-clone \
    --query '*[].[Engine]' \
    --output text
aurora-mysql

$ aws rds create-db-instance \
    --db-instance-identifier tpch100g-clone-instance \
    --db-cluster-identifier tpch100g-clone \
    --db-instance-class db.r5.4xlarge \
    --engine aurora-mysql

Checking the status and getting clone details

You can use the following command to check the status of your newly created empty DB cluster.

$ aws rds describe-db-clusters --db-cluster-identifier my-clone --query '*[].[Status]' --output text

Or you can obtain the status and the other values that you need to create the DB instance for your clone by using the following AWS CLI query.

For Linux, macOS, or Unix:

aws rds describe-db-clusters --db-cluster-identifier my-clone \
    --query '*[].{Status:Status,Engine:Engine,EngineVersion:EngineVersion,EngineMode:EngineMode}'

For Windows:

aws rds describe-db-clusters --db-cluster-identifier my-clone ^
    --query "*[].{Status:Status,Engine:Engine,EngineVersion:EngineVersion,EngineMode:EngineMode}"

This query returns output similar to the following.

[
    {
        "Status": "available",
        "Engine": "aurora-mysql",
        "EngineVersion": "8.0.mysql_aurora.3.04.1",
        "EngineMode": "provisioned"
    }
]

Creating the Aurora DB instance for your clone

Use the create-db-instance CLI command to create the DB instance for your Aurora Serverless v2 or provisioned clone. You don't create a DB instance for an Aurora Serverless v1 clone.

The DB instance inherits the --master-username and --master-user-password properties from the source DB cluster.

The following example creates a DB instance for a provisioned clone.

For Linux, macOS, or Unix:

aws rds create-db-instance \
    --db-instance-identifier my-new-db \
    --db-cluster-identifier my-clone \
    --db-instance-class db.r5.4xlarge \
    --engine aurora-mysql

For Windows:

aws rds create-db-instance ^
    --db-instance-identifier my-new-db ^
    --db-cluster-identifier my-clone ^
    --db-instance-class db.r5.4xlarge ^
    --engine aurora-mysql

Parameters to use for cloning

The following list summarizes the parameters used with restore-db-cluster-to-point-in-time to clone Aurora DB clusters.

--source-db-cluster-identifier

Use the name of the source Aurora DB cluster that you want to clone.

--db-cluster-identifier

Choose a meaningful name for your clone when you create it with the restore-db-cluster-to-point-in-time command. Then you pass this name to the create-db-instance command.

--restore-type

Specify copy-on-write as the --restore-type to create a clone of the source DB cluster rather than restoring the source Aurora DB cluster.

--use-latest-restorable-time

This value points to the latest restorable volume data for the source DB cluster. Use it to create clones.

--engine-mode

Use this parameter to create clones that are of a different type from the source Aurora DB cluster, with one of the following values:

  • Use provisioned to create a provisioned or Aurora Serverless v2 clone from an Aurora Serverless v1 DB cluster.

  • Use serverless to create an Aurora Serverless v1 clone from a provisioned or Aurora Serverless v2 DB cluster.

    When you specify the serverless engine mode, you can also choose the --scaling-configuration.

--scaling-configuration

Use this parameter to configure the minimum and maximum capacity for an Aurora Serverless v1 clone. If you don't specify this parameter, Aurora creates the clone using the default capacity values for the DB engine.

--serverless-v2-scaling-configuration

Use this parameter to configure the minimum and maximum capacity for an Aurora Serverless v2 clone. If you don't specify this parameter, Aurora creates the clone using the default capacity values for the DB engine.

Cross-VPC cloning with Amazon Aurora

Suppose that you want to impose different network access controls on the original cluster and the clone. For example, you might use cloning to make a copy of a production Aurora cluster in a different VPC for development and testing. Or you might create a clone as part of a migration from public subnets to private subnets, to enhance your database security.

The following sections demonstrate how to set up the network configuration for the clone so that the original cluster and the clone can both access the same Aurora storage nodes, even from different subnets or different VPCs. Verifying the network resources in advance can avoid errors during cloning that might be difficult to diagnose.

If you aren’t familiar with how Aurora interacts with VPCs, subnets, and DB subnet groups, see Amazon VPC VPCs and Amazon Aurora first. You can work through the tutorials in that section to create these kinds of resources in the AWS console, and understand how they fit together.

Because the steps involve switching between the RDS and EC2 services, the examples use AWS CLI commands to help you understand how to automate such operations and save the output.

Before you begin

Before you start setting up a cross-VPC clone, make sure that you have the necessary network resources in place. The following sections show how to gather information about the existing resources and create any that are missing.

Gathering information about the network environment

With cross-VPC cloning, the network environment can differ substantially between the original cluster and its clone. Before you create the clone, collect and record information about the VPC, subnets, DB subnet group, and AZs used in the original cluster. That way, you can minimize the chances of problems. If a network problem does occur, you won’t have to interrupt any troubleshooting activities to search for diagnostic information. The following sections show CLI examples to gather these types of information. You can save the details in whichever format is convenient to consult while creating the clone and doing any troubleshooting.

Step 1: Check the Availability Zones of the original cluster

Before you create the clone, verify which AZs the original cluster uses for its storage. As explained in Amazon Aurora storage and reliability, the storage for each Aurora cluster is associated with exactly three AZs. Because Aurora separates compute from storage, this rule applies regardless of how many instances are in the cluster.

For example, run a CLI command such as the following, substituting your own cluster name for my_cluster. The following example produces a list sorted alphabetically by the AZ name.

aws rds describe-db-clusters \
    --db-cluster-identifier my_cluster \
    --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' \
    --output text

The following example shows sample output from the preceding describe-db-clusters command. It demonstrates that the storage for the Aurora cluster always uses three AZs.

us-east-1c us-east-1d us-east-1e

To create a clone in a network environment that doesn’t have all the resources in place to connect to these AZs, you must create subnets associated with at least two of those AZs, and then create a DB subnet group containing those two or three subnets. The following examples show how.

Step 2: Check the DB subnet group of the original cluster

If you want to use the same number of subnets for the clone as in the original cluster, you can get the number of subnets from the DB subnet group of the original cluster. An Aurora DB subnet group contains at least two subnets, each associated with a different AZ. Make a note of which AZs the subnets are associated with.

The following example shows how to find the DB subnet group of the original cluster, and then work backwards to the corresponding AZs. Substitute the name of your cluster for my_cluster in the first command. Substitute the name of the DB subnet group for my_subnet_group in the second command.

aws rds describe-db-clusters --db-cluster-identifier my_cluster \
    --query '*[].DBSubnetGroup' --output text

aws rds describe-db-subnet-groups --db-subnet-group-name my_subnet_group \
    --query '*[].Subnets[].[SubnetAvailabilityZone.Name]' --output text

Sample output might look similar to the following, for a cluster with a DB subnet group containing two subnets. In this case, two-subnets is a name that was specified when creating the DB subnet group.

two-subnets us-east-1d us-east-1c

For a cluster where the DB subnet group contains three subnets, the output might look similar to the following.

three-subnets us-east-1f us-east-1d us-east-1c

Step 3: Check the subnets of the original cluster

If you need more details about the subnets in the original cluster, run AWS CLI commands similar to the following. You can examine the subnet attributes such as IP address ranges, owner, and so on. That way, you can determine whether to use different subnets in the same VPC, or create subnets with similar characteristics in a different VPC.

Find the subnet IDs of all the subnets that are available in your VPC.

aws ec2 describe-subnets --filters Name=vpc-id,Values=my_vpc \
    --query '*[].[SubnetId]' --output text

Find the exact subnets used in your DB subnet group.

aws rds describe-db-subnet-groups --db-subnet-group-name my_subnet_group \
    --query '*[].Subnets[].[SubnetIdentifier]' --output text

Then specify the subnets that you want to investigate in a list, as in the following command. Substitute the names of your subnets for my_subnet_1 and so on.

aws ec2 describe-subnets \
    --subnet-ids '["my_subnet_1","my_subnet_2","my_subnet_3"]'

The following example shows partial output from such a describe-subnets command. The output shows some of the important attributes you can see for each subnet, such as its associated AZ and the VPC that it’s part of.

{
    "Subnets": [
        {
            "AvailabilityZone": "us-east-1d",
            "AvailableIpAddressCount": 54,
            "CidrBlock": "10.0.0.64/26",
            "State": "available",
            "SubnetId": "subnet-000a0bca00e0b0000",
            "VpcId": "vpc-3f3c3fc3333b3ffb3",
            ...
        },
        {
            "AvailabilityZone": "us-east-1c",
            "AvailableIpAddressCount": 55,
            "CidrBlock": "10.0.0.0/26",
            "State": "available",
            "SubnetId": "subnet-4b4dbfe4d4a4fd4c4",
            "VpcId": "vpc-3f3c3fc3333b3ffb3",
            ...

Step 4: Check the Availability Zones of the DB instances in the original cluster

You can use this procedure to understand the AZs used for the DB instances in the original cluster. That way, you can set up the exact same AZs for the DB instances in the clone. You can also use more or fewer DB instances in the clone depending on whether the clone is used for production, development and testing, and so on.

For each instance in the original cluster, run a command such as the following. Make sure that the instance has finished creating and is in the Available state first. Substitute the instance identifier for my_instance.

aws rds describe-db-instances --db-instance-identifier my_instance \
    --query '*[].AvailabilityZone' --output text

The following example shows the output of running the preceding describe-db-instances command. The Aurora cluster has four database instances. Therefore, we run the command four times, substituting a different DB instance identifier each time. The output shows how those DB instances are spread across a maximum of three AZs.

us-east-1a us-east-1c us-east-1d us-east-1a

After the clone is created and you are adding DB instances to it, you can specify these same AZ names in the create-db-instance commands. You might do so to set up DB instances in the new cluster configured for exactly the same AZs as in the original cluster.
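For example, the following sketch adds a DB instance pinned to one of the AZs from the preceding output; the identifiers and DB instance class are placeholders.

aws rds create-db-instance \
    --db-instance-identifier my-clone-instance-2 \
    --db-cluster-identifier my-clone \
    --availability-zone us-east-1c \
    --db-instance-class db.r5.large \
    --engine aurora-mysql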

Step 5: Check the VPCs you can use for the clone

If you intend to create the clone in a different VPC than the original, you can get a list of the VPC IDs available for your account. You might also do this step if you need to create any additional subnets in the same VPC as the original cluster. When you run the command to create a subnet, you specify the VPC ID as a parameter.

To list all the VPCs for your account, run the following CLI command:

aws ec2 describe-vpcs --query '*[].[VpcId]' --output text

The following example shows sample output from the preceding describe-vpcs command. The output demonstrates that there are four VPCs in the current AWS account that can be used as the source or the destination for cross-VPC cloning.

vpc-fd111111 vpc-2222e2cd2a222f22e vpc-33333333a33333d33 vpc-4ae4d4de4a4444dad

You can use the same VPC as the destination for the clone, or a different VPC. If the original cluster and the clone are in the same VPC, you can reuse the same DB subnet group for the clone. You can also create a different DB subnet group. For example, the new DB subnet group might use private subnets, while the original cluster’s DB subnet group might use public subnets. If you create the clone in a different VPC, make sure that there are enough subnets in the new VPC and that the subnets are associated with the right AZs from the original cluster.

Creating network resources for the clone

If, while collecting the network information, you discovered that additional network resources are needed for the clone, you can create those resources before trying to set up the clone. For example, you might need to create more subnets, subnets associated with specific AZs, or a new DB subnet group.

Step 1: Create the subnets for the clone

If you need to create new subnets for the clone, run a command similar to the following. You might need to do this when creating the clone in a different VPC, or when making some other network change such as using private subnets instead of public subnets.

AWS automatically generates the ID of the subnet. Substitute the name of the clone's VPC for my_vpc. Choose the address range for the --cidr-block option to allow at least 16 IP addresses in the range. You can include any other properties that you want to specify. Run the command aws ec2 create-subnet help to see all the choices.

aws ec2 create-subnet --vpc-id my_vpc \
    --availability-zone AZ_name --cidr-block IP_range

The following example shows some important attributes of a newly created subnet.

{
    "Subnet": {
        "AvailabilityZone": "us-east-1b",
        "AvailableIpAddressCount": 59,
        "CidrBlock": "10.0.0.64/26",
        "State": "available",
        "SubnetId": "subnet-44b4a44f4e44db444",
        "VpcId": "vpc-555fc5df555e555dc",
        ...
    }
}

Step 2: Create the DB subnet group for the clone

If you are creating the clone in a different VPC, or a different set of subnets within the same VPC, then you create a new DB subnet group and specify it when creating the clone.

Make sure that you know all the following details. You can find all of these from the output of the preceding examples.

  1. VPC of the original cluster. For instructions, see Step 3: Check the subnets of the original cluster.

  2. VPC of the clone, if you are creating it in a different VPC. For instructions, see Step 5: Check the VPCs you can use for the clone.

  3. Three AZs associated with the Aurora storage for the original cluster. For instructions, see Step 1: Check the Availability Zones of the original cluster.

  4. Two or three AZs associated with the DB subnet group for the original cluster. For instructions, see Step 2: Check the DB subnet group of the original cluster.

  5. The subnet IDs and associated AZs of all the subnets in the VPC you intend to use for the clone. Use the same describe-subnets command as in Step 3: Check the subnets of the original cluster, substituting the VPC ID of the destination VPC.

Check how many AZs are both associated with the storage of the original cluster, and associated with subnets in the destination VPC. To successfully create the clone, there must be two or three AZs in common. If you have fewer than two AZs in common, go back to Step 1: Create the subnets for the clone. Create one, two, or three new subnets that are associated with the AZs associated with the storage of the original cluster.

Choose subnets in the destination VPC that are associated with the same AZs as the Aurora storage in the original cluster. Ideally, choose three AZs. Doing so gives you the most flexibility to spread the DB instances of the clone across multiple AZs for high availability of compute resources.
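One way to compute the overlap is to intersect the two sorted AZ lists, as in the following sketch. The cluster name my_cluster and the VPC ID my_dest_vpc are placeholders.

# AZs used by the original cluster's storage, one per line
aws rds describe-db-clusters --db-cluster-identifier my_cluster \
    --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text \
    | tr '\t' '\n' | sort -u > source_azs.txt

# AZs that have subnets in the destination VPC
aws ec2 describe-subnets --filters Name=vpc-id,Values=my_dest_vpc \
    --query '*[].AvailabilityZone' --output text \
    | tr '\t' '\n' | sort -u > dest_azs.txt

# AZs common to both lists; you need at least two
comm -12 source_azs.txt dest_azs.txt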

Run a command similar to the following to create the new DB subnet group. Substitute the IDs of your subnets in the list. If you specify the subnet IDs using environment variables, be careful to quote the --subnet-ids parameter list in a way that preserves the double quotation marks around the IDs.

aws rds create-db-subnet-group --db-subnet-group-name my_subnet_group \
    --subnet-ids '["my_subnet_1","my_subnet_2","my_subnet_3"]' \
    --db-subnet-group-description 'DB subnet group with 3 subnets for clone'

The following example shows partial output of the create-db-subnet-group command.

{
    "DBSubnetGroup": {
        "DBSubnetGroupName": "my_subnet_group",
        "DBSubnetGroupDescription": "DB subnet group with 3 subnets for clone",
        "VpcId": "vpc-555fc5df555e555dc",
        "SubnetGroupStatus": "Complete",
        "Subnets": [
            {
                "SubnetIdentifier": "my_subnet_1",
                "SubnetAvailabilityZone": {
                    "Name": "us-east-1c"
                },
                "SubnetStatus": "Active"
            },
            {
                "SubnetIdentifier": "my_subnet_2",
                "SubnetAvailabilityZone": {
                    "Name": "us-east-1d"
                },
                "SubnetStatus": "Active"
            }
            ...
        ],
        "SupportedNetworkTypes": [
            "IPV4"
        ]
    }
}

At this point, you haven’t actually created the clone yet. You have created all the relevant VPC and subnet resources so that you can specify the appropriate parameters to the restore-db-cluster-to-point-in-time and create-db-instance commands when creating the clone.

Creating an Aurora clone with new network settings

Once you have made sure that the right configuration of VPCs, subnets, AZs, and subnet groups is in place for the new cluster to use, you can perform the actual cloning operation. The following CLI examples highlight the options such as --db-subnet-group-name, --availability-zone, and --vpc-security-group-ids that you specify on the commands to set up the clone and its DB instances.

Step 1: Specify the DB subnet group for the clone

When you create the clone, you can configure all the right VPC, subnet, and AZ settings by specifying a DB subnet group. Use the commands in the preceding examples to verify all the relationships and mappings that go into the DB subnet group.

For example, the following commands demonstrate cloning an original cluster to a clone. In the first example, the source cluster is associated with three subnets and the clone is associated with two subnets. The second example shows the opposite case, cloning from a cluster with two subnets to a cluster with three subnets.

aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier cluster-with-3-subnets \
    --db-cluster-identifier cluster-cloned-to-2-subnets \
    --restore-type copy-on-write --use-latest-restorable-time \
    --db-subnet-group-name two-subnets

If you intend to use Aurora Serverless v2 instances in the clone, include a --serverless-v2-scaling-configuration option when you create the clone, as shown. Doing so lets you use the db.serverless class when creating DB instances in the clone. You can also modify the clone later to add this scaling configuration attribute. The capacity numbers in this example allow each Serverless v2 instance in the cluster to scale between 2 and 32 Aurora Capacity Units (ACUs). For information about the Aurora Serverless v2 feature and how to choose the capacity range, see Using Aurora Serverless v2.

aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier cluster-with-2-subnets \
    --db-cluster-identifier cluster-cloned-to-3-subnets \
    --restore-type copy-on-write --use-latest-restorable-time \
    --db-subnet-group-name three-subnets \
    --serverless-v2-scaling-configuration 'MinCapacity=2,MaxCapacity=32'

Regardless of the number of subnets used by the DB instances, the Aurora storage for the source cluster and the clone is associated with three AZs. The following example lists the AZs associated with both the original cluster and the clone, for both of the restore-db-cluster-to-point-in-time commands in the preceding examples.

aws rds describe-db-clusters --db-cluster-identifier cluster-with-3-subnets \
    --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text
us-east-1c us-east-1d us-east-1f

aws rds describe-db-clusters --db-cluster-identifier cluster-cloned-to-2-subnets \
    --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text
us-east-1c us-east-1d us-east-1f

aws rds describe-db-clusters --db-cluster-identifier cluster-with-2-subnets \
    --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text
us-east-1a us-east-1c us-east-1d

aws rds describe-db-clusters --db-cluster-identifier cluster-cloned-to-3-subnets \
    --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text
us-east-1a us-east-1c us-east-1d

Because at least two of the AZs overlap between each pair of original and clone clusters, both clusters can access the same underlying Aurora storage.

Step 2: Specify network settings for instances in the clone

When you create DB instances in the clone, by default they inherit the DB subnet group from the cluster itself. That way, Aurora automatically assigns each instance to a particular subnet, and creates it in the AZ that’s associated with the subnet. This choice is convenient, especially for development and test systems, because you don’t have to keep track of the subnet IDs or the AZs while adding new instances to the clone.

As an alternative, you can specify the AZ when you create an Aurora DB instance for the clone. The AZ that you specify must be from the set of AZs that are associated with the clone. If the DB subnet group you use for the clone only contains two subnets, then you can only pick from the AZs associated with those two subnets. This choice offers flexibility and resilience for highly available systems, because you can make sure that the writer instance and the standby reader instance are in different AZs. Or if you add additional readers to the cluster, you can make sure that they are spread across three AZs. That way, even in the rare case of an AZ failure, you still have a writer instance and another reader instance in two other AZs.

The following example adds a provisioned DB instance to a cloned Aurora PostgreSQL cluster that uses a custom DB subnet group.

aws rds create-db-instance --db-cluster-identifier my_aurora_postgresql_clone \
    --db-instance-identifier my_postgres_instance \
    --db-subnet-group-name my_new_subnet \
    --engine aurora-postgresql \
    --db-instance-class db.t4g.medium

The following example shows partial output from such a command.

{
    "DBInstanceIdentifier": "my_postgres_instance",
    "DBClusterIdentifier": "my_aurora_postgresql_clone",
    "DBInstanceClass": "db.t4g.medium",
    "DBInstanceStatus": "creating"
    ...
}

The following example adds an Aurora Serverless v2 DB instance to an Aurora MySQL clone that uses a custom DB subnet group. To be able to use Serverless v2 instances, make sure to specify the --serverless-v2-scaling-configuration option for the restore-db-cluster-to-point-in-time command, as shown in preceding examples.

aws rds create-db-instance --db-cluster-identifier my_aurora_mysql_clone \
    --db-instance-identifier my_mysql_instance \
    --db-subnet-group-name my_other_new_subnet \
    --engine aurora-mysql \
    --db-instance-class db.serverless

The following example shows partial output from such a command.

{
    "DBInstanceIdentifier": "my_mysql_instance",
    "DBClusterIdentifier": "my_aurora_mysql_clone",
    "DBInstanceClass": "db.serverless",
    "DBInstanceStatus": "creating"
    ...
}

Step 3: Establish connectivity from a client system to a clone

If you are already connecting to an Aurora cluster from a client system, you might want to allow the same type of connectivity to a new clone. For example, you might connect to the original cluster from an Amazon Cloud9 instance or EC2 instance. To allow connections from the same client systems, or from new ones that you create in the destination VPC, set up DB subnet groups and VPC security groups in the destination VPC that are equivalent to those in the original VPC. Then specify the subnet group and security groups when you create the clone.

The following examples set up an Aurora Serverless v2 clone. That configuration is based on the combination of --engine-mode provisioned and --serverless-v2-scaling-configuration when creating the DB cluster, and then --db-instance-class db.serverless when creating each DB instance in the cluster. The provisioned engine mode is the default, so you can omit that option if you prefer.

aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier serverless-sql-postgres \
    --db-cluster-identifier serverless-sql-postgres-clone \
    --db-subnet-group-name 'default-vpc-1234' \
    --vpc-security-group-ids 'sg-4567' \
    --serverless-v2-scaling-configuration 'MinCapacity=0.5,MaxCapacity=16' \
    --restore-type copy-on-write \
    --use-latest-restorable-time

Then, when creating the DB instances in the clone, specify the same --db-subnet-group-name option. Optionally, you can include the --availability-zone option and specify one of the AZs associated with the subnets in that subnet group. That AZ must also be one of the AZs associated with the original cluster.

aws rds create-db-instance \
    --db-cluster-identifier serverless-sql-postgres-clone \
    --db-instance-identifier serverless-sql-postgres-clone-instance \
    --db-instance-class db.serverless \
    --db-subnet-group-name 'default-vpc-987zyx654' \
    --availability-zone 'us-east-1c' \
    --engine aurora-postgresql

Moving a cluster from public subnets to private ones

You can use cloning to migrate a cluster between public and private subnets. You might do this when adding additional layers of security to your application before deploying it to production. For this example, you should already have the private subnets and NAT gateway set up before starting the cloning process with Aurora.

For the steps involving Aurora, you can follow the same general steps as in the preceding examples to Gathering information about the network environment and Creating an Aurora clone with new network settings. The main difference is that even if you have public subnets that map to all the AZs from the original cluster, now you must verify that you have enough private subnets for an Aurora cluster, and that those subnets are associated with all the same AZs that are used for Aurora storage in the original cluster. Similar to other cloning use cases, you can make the DB subnet group for the clone with either three or two private subnets that are associated with the required AZs. However, if you use two private subnets in the DB subnet group, you must have a third private subnet that’s associated with the third AZ used for Aurora storage in the original cluster.

You can consult this checklist to verify that all the requirements are in place to perform this type of cloning operation.

When all the prerequisites are in place, you can pause database activity on the original cluster while you create the clone and switch your application to use it. After the clone is created and you verify that you can connect to it, run your application code, and so on, you can discontinue use of the original cluster.

End-to-end example of creating a cross-VPC clone

Creating a clone in a different VPC than the original uses the same general steps as in the preceding examples. Because the VPC ID is a property of the subnets, you don’t actually specify the VPC ID as a parameter when running any of the RDS CLI commands. The main difference is that you are more likely to need to create new subnets, new subnets mapped to specific AZs, a VPC security group, and a new DB subnet group. That’s especially true if this is the first Aurora cluster that you create in that VPC.

You can consult this checklist to verify that all the requirements are in place to perform this type of cloning operation.

When all the prerequisites are in place, you can pause database activity on the original cluster while you create the clone and switch your application to use it. After the clone is created and you verify that you can connect to it, run your application code, and so on, you can consider whether to keep both the original and clones running, or discontinue use of the original cluster.

The following Linux examples show the sequence of AWS CLI operations to clone an Aurora DB cluster from one VPC to another. Some fields that aren’t relevant to the examples aren’t shown in the command output.

First, we check the IDs of the source and destination VPCs. The descriptive name that you assign to a VPC when you create it is represented as a tag in the VPC metadata.

$ aws ec2 describe-vpcs --query '*[].[VpcId,Tags]'
[
    [
        "vpc-0f0c0fc0000b0ffb0",
        [
            {
                "Key": "Name",
                "Value": "clone-vpc-source"
            }
        ]
    ],
    [
        "vpc-9e99d9f99a999bd99",
        [
            {
                "Key": "Name",
                "Value": "clone-vpc-dest"
            }
        ]
    ]
]

The original cluster already exists in the source VPC. To set up the clone using the same set of AZs for the Aurora storage, we check the AZs used by the original cluster.

$ aws rds describe-db-clusters --db-cluster-identifier original-cluster \
    --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text
us-east-1c us-east-1d us-east-1f

We make sure there are subnets that correspond to the AZs used by the original cluster: us-east-1c, us-east-1d, and us-east-1f.

$ aws ec2 create-subnet --vpc-id vpc-9e99d9f99a999bd99 \
    --availability-zone us-east-1c --cidr-block 10.0.0.128/28
{
    "Subnet": {
        "AvailabilityZone": "us-east-1c",
        "SubnetId": "subnet-3333a33be3ef3e333",
        "VpcId": "vpc-9e99d9f99a999bd99"
    }
}

$ aws ec2 create-subnet --vpc-id vpc-9e99d9f99a999bd99 \
    --availability-zone us-east-1d --cidr-block 10.0.0.160/28
{
    "Subnet": {
        "AvailabilityZone": "us-east-1d",
        "SubnetId": "subnet-4eeb444cd44b4d444",
        "VpcId": "vpc-9e99d9f99a999bd99"
    }
}

$ aws ec2 create-subnet --vpc-id vpc-9e99d9f99a999bd99 \
    --availability-zone us-east-1f --cidr-block 10.0.0.224/28
{
    "Subnet": {
        "AvailabilityZone": "us-east-1f",
        "SubnetId": "subnet-66eea6666fb66d66c",
        "VpcId": "vpc-9e99d9f99a999bd99"
    }
}

This example confirms that there are subnets that map to the necessary AZs in the destination VPC.

aws ec2 describe-subnets \
    --query 'sort_by(*[] | [?VpcId == `vpc-9e99d9f99a999bd99`] | [].{SubnetId:SubnetId,VpcId:VpcId,AvailabilityZone:AvailabilityZone}, &AvailabilityZone)' \
    --output table
---------------------------------------------------------------------------
|                             DescribeSubnets                             |
+------------------+----------------------------+-------------------------+
| AvailabilityZone |          SubnetId          |          VpcId          |
+------------------+----------------------------+-------------------------+
|  us-east-1a      |  subnet-000ff0e00000c0aea  |  vpc-9e99d9f99a999bd99  |
|  us-east-1b      |  subnet-1111d111111ca11b1  |  vpc-9e99d9f99a999bd99  |
|  us-east-1c      |  subnet-3333a33be3ef3e333  |  vpc-9e99d9f99a999bd99  |
|  us-east-1d      |  subnet-4eeb444cd44b4d444  |  vpc-9e99d9f99a999bd99  |
|  us-east-1f      |  subnet-66eea6666fb66d66c  |  vpc-9e99d9f99a999bd99  |
+------------------+----------------------------+-------------------------+

Before creating an Aurora DB cluster in the VPC, you must have a DB subnet group with subnets that map to the AZs used for Aurora storage. When you create a regular cluster, you can use any set of three AZs. When you clone an existing cluster, the subnet group must match at least two of the three AZs that it uses for Aurora storage.

$ aws rds create-db-subnet-group \
    --db-subnet-group-name subnet-group-in-other-vpc \
    --subnet-ids '["subnet-3333a33be3ef3e333","subnet-4eeb444cd44b4d444","subnet-66eea6666fb66d66c"]' \
    --db-subnet-group-description 'DB subnet group with 3 subnets: subnet-3333a33be3ef3e333,subnet-4eeb444cd44b4d444,subnet-66eea6666fb66d66c'
{
    "DBSubnetGroup": {
        "DBSubnetGroupName": "subnet-group-in-other-vpc",
        "DBSubnetGroupDescription": "DB subnet group with 3 subnets: subnet-3333a33be3ef3e333,subnet-4eeb444cd44b4d444,subnet-66eea6666fb66d66c",
        "VpcId": "vpc-9e99d9f99a999bd99",
        "SubnetGroupStatus": "Complete",
        "Subnets": [
            {
                "SubnetIdentifier": "subnet-4eeb444cd44b4d444",
                "SubnetAvailabilityZone": {
                    "Name": "us-east-1d"
                }
            },
            {
                "SubnetIdentifier": "subnet-3333a33be3ef3e333",
                "SubnetAvailabilityZone": {
                    "Name": "us-east-1c"
                }
            },
            {
                "SubnetIdentifier": "subnet-66eea6666fb66d66c",
                "SubnetAvailabilityZone": {
                    "Name": "us-east-1f"
                }
            }
        ]
    }
}

Now the subnets and DB subnet group are in place. The following example shows the restore-db-cluster-to-point-in-time command that clones the cluster. The --db-subnet-group-name option associates the clone with the correct set of subnets that map to the correct set of AZs from the original cluster.

$ aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier original-cluster \
    --db-cluster-identifier clone-in-other-vpc \
    --restore-type copy-on-write --use-latest-restorable-time \
    --db-subnet-group-name subnet-group-in-other-vpc
{
    "DBClusterIdentifier": "clone-in-other-vpc",
    "DBSubnetGroup": "subnet-group-in-other-vpc",
    "Engine": "aurora-postgresql",
    "EngineVersion": "15.4",
    "Status": "creating",
    "Endpoint": "clone-in-other-vpc.cluster-c0abcdef.us-east-1.rds.amazonaws.com"
}

The following example confirms that the Aurora storage in the clone uses the same set of AZs as in the original cluster.

$ aws rds describe-db-clusters --db-cluster-identifier clone-in-other-vpc \
    --query 'sort_by(*[].AvailabilityZones[].{Zone:@},&Zone)' --output text
us-east-1c us-east-1d us-east-1f

At this point, you can create DB instances for the clone. Make sure that the VPC security group associated with each instance allows connections from the IP address ranges you use for the EC2 instances, application servers, and so on that are in the destination VPC.
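For example, the following sketch opens the PostgreSQL port to an address range inside the destination VPC; the security group ID and CIDR block are placeholders.

aws ec2 authorize-security-group-ingress \
    --group-id sg-4567 \
    --protocol tcp \
    --port 5432 \
    --cidr 10.0.0.0/16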

Cross-account cloning with AWS RAM and Amazon Aurora

By using AWS Resource Access Manager (AWS RAM) with Amazon Aurora, you can share Aurora DB clusters and clones that belong to your AWS account with another AWS account or organization. Such cross-account cloning is much faster than creating and restoring a database snapshot. You can create a clone of one of your Aurora DB clusters and share the clone. Or you can share your Aurora DB cluster with another AWS account and let the account holder create the clone. The approach that you choose depends on your use case.

For example, you might need to regularly share a clone of your financial database with your organization's internal auditing team. In this case, your auditing team has its own AWS account for the applications that it uses. You can give the auditing team's AWS account the permission to access your Aurora DB cluster and clone it as needed.

On the other hand, if an outside vendor audits your financial data you might prefer to create the clone yourself. You then give the outside vendor access to the clone only.

You can also use cross-account cloning to support many of the same use cases for cloning within the same AWS account, such as development and testing. For example, your organization might use different AWS accounts for production, development, testing, and so on. For more information, see Overview of Aurora cloning.

Thus, you might want to share a clone with another AWS account or allow another AWS account to create clones of your Aurora DB clusters. In either case, start by using AWS RAM to create a share object. For complete information about sharing AWS resources between AWS accounts, see the AWS RAM User Guide.

Creating a cross-account clone requires actions from the AWS account that owns the original cluster, and the AWS account that creates the clone. First, the original cluster owner modifies the cluster to allow one or more other accounts to clone it. If any of the accounts is in a different AWS organization, AWS generates a sharing invitation. The other account must accept the invitation before proceeding. Then each authorized account can clone the cluster. Throughout this process, the cluster is identified by its unique Amazon Resource Name (ARN).

As with cloning within the same AWS account, additional storage space is used only if changes are made to the data by the source or the clone. Charges for storage are then applied at that time. If the source cluster is deleted, storage costs are distributed equally among remaining cloned clusters.

Limitations of cross-account cloning

Aurora cross-account cloning has the following limitations:

  • You can't clone an Aurora Serverless v1 cluster across AWS accounts.

  • You can't view or accept invitations to shared resources with the AWS Management Console. Use the AWS CLI, the Amazon RDS API, or the AWS RAM console to view and accept invitations to shared resources.

  • You can create only one new clone from a clone that's been shared with your AWS account.

  • You can't share resources (clones or Aurora DB clusters) that have been shared with your AWS account.

  • You can create a maximum of 15 cross-account clones from any single Aurora DB cluster.

  • Each of the 15 cross-account clones must be owned by a different AWS account. That is, you can only create one cross-account clone of a cluster within any AWS account.

  • After you clone a cluster, the original cluster and its clone are considered to be the same for purposes of enforcing limits on cross-account clones. You can't create cross-account clones of both the original cluster and the cloned cluster within the same AWS account. The total number of cross-account clones for the original cluster and any of its clones can't exceed 15.

  • You can't share an Aurora DB cluster with other AWS accounts unless the cluster is in an ACTIVE state.

  • You can't rename an Aurora DB cluster that's been shared with other AWS accounts.

  • You can't create a cross-account clone of a cluster that is encrypted with the default RDS key.

  • You can't create nonencrypted clones in one AWS account from encrypted Aurora DB clusters that have been shared by another AWS account. The cluster owner must grant permission to access the source cluster's AWS KMS key. However, you can use a different key when you create the clone.

Allowing other AWS accounts to clone your cluster

To allow other AWS accounts to clone a cluster that you own, use AWS RAM to set the sharing permission. Doing so also sends an invitation to each account that's in a different AWS organization.

For the procedures to share resources owned by you in the AWS RAM console, see Sharing resources owned by you in the AWS RAM User Guide.

Granting permission to other AWS accounts to clone your cluster

If the cluster that you're sharing is encrypted, you also share the AWS KMS key for the cluster. You can allow AWS Identity and Access Management (IAM) users or roles in one AWS account to use a KMS key in a different account.

To do this, you first add the external account (root user) to the KMS key's key policy through AWS KMS. You don't add the individual users or roles to the key policy, only the external account that owns them. You can only share a KMS key that you create, not the default RDS service key. For information about access control for KMS keys, see Authentication and access control for AWS KMS.
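As a sketch, a key policy statement like the following grants an external account's root principal permission to use the key. The account ID is a placeholder; the complete example policy later in this topic shows the full set of statements that's typically used.

{
  "Sid": "Allow an external account to use this KMS key",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::other_account_id:root"},
  "Action": [
    "kms:CreateGrant",
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}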

To grant permission to clone your cluster (console)
  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. In the navigation pane, choose Databases.

  3. Choose the DB cluster that you want to share to see its Details page, and choose the Connectivity & security tab.

  4. In the Share DB cluster with other AWS accounts section, enter the numeric account ID for the AWS account that you want to allow to clone this cluster. For account IDs in the same organization, you can begin typing in the box and then choose from the menu.

    Important

    In some cases, you might want an account that is not in the same AWS organization as your account to clone a cluster. In these cases, for security reasons the console doesn't report who owns that account ID or whether the account exists.

    Be careful entering account numbers that are not in the same AWS organization as your AWS account. Immediately verify that you shared with the intended account.

  5. On the confirmation page, verify that the account ID that you specified is correct. Enter share in the confirmation box to confirm.

    On the Details page, an entry appears that shows the specified AWS account ID under Accounts that this DB cluster is shared with. The Status column initially shows a status of Pending.

  6. Contact the owner of the other AWS account, or sign in to that account if you own both of them. Instruct the owner of the other account to accept the sharing invitation and clone the DB cluster, as described following.

To grant permission to clone your cluster (AWS CLI)
  1. Gather the information for the required parameters. You need the ARN for your cluster and the numeric ID for the other AWS account.

  2. Run the AWS RAM CLI command create-resource-share.

    For Linux, macOS, or Unix:

    aws ram create-resource-share --name descriptive_name \
        --region region \
        --resource-arns cluster_arn \
        --principals other_account_ids

    For Windows:

    aws ram create-resource-share --name descriptive_name ^
        --region region ^
        --resource-arns cluster_arn ^
        --principals other_account_ids

    To include multiple account IDs for the --principals parameter, separate IDs from each other with spaces. To specify whether the permitted account IDs can be outside your AWS organization, include the --allow-external-principals or --no-allow-external-principals parameter for create-resource-share.
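    For example, the following hypothetical invocation (all values are placeholders) shares one cluster with two accounts outside your organization:

    # Share the cluster ARN with two external account IDs (example values)
    aws ram create-resource-share --name my-cluster-share \
        --region us-east-1 \
        --resource-arns arn:aws:rds:us-east-1:123456789012:cluster:my-source-cluster \
        --principals 111122223333 444455556666 \
        --allow-external-principals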

To grant permission to clone your cluster (AWS RAM API)
  1. Gather the information for the required parameters. You need the ARN for your cluster and the numeric ID for the other AWS account.

  2. Call the AWS RAM API operation CreateResourceShare, and specify the following values:

    • Specify the account ID for one or more AWS accounts as the principals parameter.

    • Specify the ARN for one or more Aurora DB clusters as the resourceArns parameter.

    • Specify whether the permitted account IDs can be outside your AWS organization by including a Boolean value for the allowExternalPrincipals parameter.
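    As a sketch, the JSON request body for CreateResourceShare might look like the following; all values are placeholders.

    {
      "name": "my-cluster-share",
      "resourceArns": ["arn:aws:rds:us-east-1:123456789012:cluster:my-source-cluster"],
      "principals": ["111122223333"],
      "allowExternalPrincipals": true
    }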

Recreating a cluster that uses the default RDS key

If the encrypted cluster that you plan to share uses the default RDS key, make sure to recreate the cluster. To do this, create a manual snapshot of your DB cluster, copy the snapshot using an AWS KMS key that you own, and then restore the copy to a new cluster. Then share the new cluster. To perform this process, take the following steps.

To recreate an encrypted cluster that uses the default RDS key
  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. Choose Snapshots from the navigation pane.

  3. Choose your snapshot.

  4. For Actions, choose Copy Snapshot, and then choose Enable encryption.

  5. For AWS KMS key, choose the new encryption key that you want to use.

  6. Restore the copied snapshot. To do so, follow the procedure in Restoring from a DB cluster snapshot. The new DB cluster uses your new encryption key.

  7. (Optional) Delete the old DB cluster if you no longer need it. To do so, follow the procedure in Deleting Aurora DB clusters and DB instances. Before you do, confirm that your new cluster has all necessary data and that your application can access it successfully.
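The following AWS CLI sketch outlines the same snapshot-copy-and-restore flow. All identifiers and the key ARN are placeholders, and the engine value must match your cluster's engine.

# Take a manual snapshot of the cluster that uses the default RDS key
aws rds create-db-cluster-snapshot \
    --db-cluster-identifier my-source-cluster \
    --db-cluster-snapshot-identifier my-manual-snapshot

# Copy the snapshot, re-encrypting it with a customer managed KMS key
aws rds copy-db-cluster-snapshot \
    --source-db-cluster-snapshot-identifier my-manual-snapshot \
    --target-db-cluster-snapshot-identifier my-reencrypted-snapshot \
    --kms-key-id arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab

# Restore the copied snapshot to a new cluster that you can share
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier my-recreated-cluster \
    --snapshot-identifier my-reencrypted-snapshot \
    --engine aurora-mysql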

Checking if a cluster that you own is shared with other AWS accounts

You can check whether other AWS accounts have permission to clone a cluster that you own. Doing so can help you determine whether the cluster is approaching the limit for the maximum number of cross-account clones.

For the procedures to share resources using the AWS RAM console, see Sharing resources owned by you in the AWS RAM User Guide.

To find out if a cluster that you own is shared with other AWS accounts (AWS CLI)
  • Call the AWS RAM CLI command list-principals, using your account ID as the resource owner and the ARN of your cluster as the resource ARN. You can see all shares with the following command. The results indicate which AWS accounts are allowed to clone the cluster.

    aws ram list-principals \
        --resource-arns your_cluster_arn \
        --principals your_aws_id

To find out if a cluster that you own is shared with other AWS accounts (AWS RAM API)
  • Call the AWS RAM API operation ListPrincipals. Use your account ID as the resource owner and the ARN of your cluster as the resource ARN.

Cloning a cluster that is owned by another AWS account

To clone a cluster that's owned by another AWS account, use AWS RAM to get permission to make the clone. After you have the required permission, use the standard procedure for cloning an Aurora cluster.

You can also check whether a cluster that you own is a clone of a cluster owned by a different AWS account.

For the procedures to work with resources owned by others in the AWS RAM console, see Accessing resources shared with you in the AWS RAM User Guide.

Viewing invitations to clone clusters that are owned by other AWS accounts

To work with invitations to clone clusters owned by AWS accounts in other AWS organizations, use the AWS CLI, the AWS RAM console, or the AWS RAM API. Currently, you can't perform this procedure using the Amazon RDS console.

For the procedures to work with invitations in the AWS RAM console, see Accessing resources shared with you in the AWS RAM User Guide.

To see invitations to clone clusters that are owned by other AWS accounts (AWS CLI)
  1. Run the AWS RAM CLI command get-resource-share-invitations.

    aws ram get-resource-share-invitations --region region_name

    The results from the preceding command show all invitations to clone clusters, including any that you already accepted or rejected.

  2. (Optional) Filter the list so you see only the invitations that require action from you. To do so, add the parameter --query 'resourceShareInvitations[?status==`PENDING`]'.
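    For example, the following command (the Region is a placeholder) lists only the pending invitations:

    aws ram get-resource-share-invitations --region us-east-1 \
        --query 'resourceShareInvitations[?status==`PENDING`]'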

To see invitations to clone clusters that are owned by other AWS accounts (AWS RAM API)
  1. Call the AWS RAM API operation GetResourceShareInvitations. This operation returns all such invitations, including any that you already accepted or rejected.

  2. (Optional) Find only the invitations that require action from you by checking the resourceShareAssociations return field for a status value of PENDING.

Accepting invitations to share clusters owned by other AWS accounts

You can accept invitations to share clusters owned by other AWS accounts that are in different AWS organizations. To work with these invitations, use the AWS CLI, the AWS RAM and RDS APIs, or the AWS RAM console. Currently, you can't perform this procedure using the RDS console.

For the procedures to work with invitations in the AWS RAM console, see Accessing resources shared with you in the AWS RAM User Guide.

To accept an invitation to share a cluster from another AWS account (AWS CLI)
  1. Find the invitation ARN by running the AWS RAM CLI command get-resource-share-invitations, as shown preceding.

  2. Accept the invitation by calling the AWS RAM CLI command accept-resource-share-invitation, as shown following.

    For Linux, macOS, or Unix:

    aws ram accept-resource-share-invitation \
        --resource-share-invitation-arn invitation_arn \
        --region region

    For Windows:

    aws ram accept-resource-share-invitation ^
        --resource-share-invitation-arn invitation_arn ^
        --region region
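
    As a sketch, you can combine the two steps in a shell session. The Region is a placeholder, and the --query expression assumes that exactly one invitation is pending.

    # Capture the ARN of the first pending invitation (example Region)
    INVITATION_ARN=$(aws ram get-resource-share-invitations --region us-east-1 \
        --query 'resourceShareInvitations[?status==`PENDING`]|[0].resourceShareInvitationArn' \
        --output text)

    # Accept that invitation
    aws ram accept-resource-share-invitation \
        --resource-share-invitation-arn "$INVITATION_ARN" \
        --region us-east-1
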
To accept an invitation to share a cluster from another AWS account (AWS RAM API)
  1. Find the invitation ARN by calling the AWS RAM API operation GetResourceShareInvitations, as shown preceding.

  2. Pass that ARN as the resourceShareInvitationArn parameter to the AWS RAM API operation AcceptResourceShareInvitation.

Cloning an Aurora cluster that is owned by another AWS account

After you accept the invitation from the AWS account that owns the DB cluster, as shown preceding, you can clone the cluster.

To clone an Aurora cluster that is owned by another AWS account (console)
  1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.

  2. In the navigation pane, choose Databases.

    At the top of the database list, you should see one or more items with a Role value of Shared from account #account_id. For security reasons, you can see only limited information about the original clusters. The properties that you can see, such as the database engine and version, are the ones that must be the same in your cloned cluster.

  3. Choose the cluster that you intend to clone.

  4. For Actions, choose Create clone.

  5. Follow the procedure in Console to finish setting up the cloned cluster.

  6. As needed, enable encryption for the cloned cluster. If the cluster that you are cloning is encrypted, you must enable encryption for the cloned cluster. The AWS account that shared the cluster with you must also share the KMS key that was used to encrypt the cluster. You can use the same KMS key to encrypt the clone, or your own KMS key. You can't create a cross-account clone for a cluster that is encrypted with the default KMS key.

    The account that owns the encryption key must grant permission to use the key to the destination account by using a key policy. This process is similar to how encrypted snapshots are shared, by using a key policy that grants permission to the destination account to use the key.

To clone an Aurora cluster owned by another AWS account (AWS CLI)
  1. Accept the invitation from the AWS account that owns the DB cluster, as shown preceding.

  2. Clone the cluster by specifying the full ARN of the source cluster in the source-db-cluster-identifier parameter of the RDS CLI command restore-db-cluster-to-point-in-time, as shown following.

    If the ARN passed as the source-db-cluster-identifier hasn't been shared, the same error is returned as if the specified cluster doesn't exist.

    For Linux, macOS, or Unix:

    aws rds restore-db-cluster-to-point-in-time \
        --source-db-cluster-identifier=arn:aws:rds:arn_details \
        --db-cluster-identifier=new_cluster_id \
        --restore-type=copy-on-write \
        --use-latest-restorable-time

    For Windows:

    aws rds restore-db-cluster-to-point-in-time ^
        --source-db-cluster-identifier=arn:aws:rds:arn_details ^
        --db-cluster-identifier=new_cluster_id ^
        --restore-type=copy-on-write ^
        --use-latest-restorable-time
  3. If the cluster that you are cloning is encrypted, encrypt your cloned cluster by including a kms-key-id parameter. This kms-key-id value can be the same one used to encrypt the original DB cluster, or your own KMS key. Your account must have permission to use that encryption key.

    For Linux, macOS, or Unix:

    aws rds restore-db-cluster-to-point-in-time \
        --source-db-cluster-identifier=arn:aws:rds:arn_details \
        --db-cluster-identifier=new_cluster_id \
        --restore-type=copy-on-write \
        --use-latest-restorable-time \
        --kms-key-id=arn:aws:kms:arn_details

    For Windows:

    aws rds restore-db-cluster-to-point-in-time ^
        --source-db-cluster-identifier=arn:aws:rds:arn_details ^
        --db-cluster-identifier=new_cluster_id ^
        --restore-type=copy-on-write ^
        --use-latest-restorable-time ^
        --kms-key-id=arn:aws:kms:arn_details

    The account that owns the encryption key must grant permission to use the key to the destination account by using a key policy. This process is similar to how encrypted snapshots are shared, by using a key policy that grants permission to the destination account to use the key. An example of a key policy follows.

    { "Id": "key-policy-1", "Version": "2012-10-17", "Statement": [ { "Sid": "Allow use of the key", "Effect": "Allow", "Principal": {"AWS": [ "arn:aws:iam::account_id:user/KeyUser", "arn:aws:iam::account_id:root" ]}, "Action": [ "kms:CreateGrant", "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" }, { "Sid": "Allow attachment of persistent resources", "Effect": "Allow", "Principal": {"AWS": [ "arn:aws:iam::account_id:user/KeyUser", "arn:aws:iam::account_id:root" ]}, "Action": [ "kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant" ], "Resource": "*", "Condition": {"Bool": {"kms:GrantIsForAWSResource": true}} } ] }
Note

The restore-db-cluster-to-point-in-time AWS CLI command restores only the DB cluster, not the DB instances for that DB cluster. To create DB instances for the restored DB cluster, invoke the create-db-instance command. Specify the identifier of the restored DB cluster in --db-cluster-identifier.

You can create DB instances only after the restore-db-cluster-to-point-in-time command has completed and the DB cluster is available.
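For example, a minimal invocation might look like the following; the instance class and engine are placeholder values that must be valid for your cloned cluster.

# Add a DB instance to the restored clone (example values)
aws rds create-db-instance \
    --db-instance-identifier new_cluster_id-instance-1 \
    --db-cluster-identifier new_cluster_id \
    --db-instance-class db.r6g.large \
    --engine aurora-mysql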

To clone an Aurora cluster owned by another AWS account (RDS API)
  1. Accept the invitation from the AWS account that owns the DB cluster, as shown preceding.

  2. Clone the cluster by specifying the full ARN of the source cluster in the SourceDBClusterIdentifier parameter of the RDS API operation RestoreDBClusterToPointInTime.

    If the ARN passed as the SourceDBClusterIdentifier hasn't been shared, then the same error is returned as if the specified cluster doesn't exist.

  3. If the cluster that you are cloning is encrypted, include a KmsKeyId parameter to encrypt your cloned cluster. This KmsKeyId value can be the same one used to encrypt the original DB cluster, or your own KMS key. Your account must have permission to use that encryption key.

    When you clone a volume, the destination account must have permission to use the encryption key used to encrypt the source cluster. Aurora encrypts the new cloned cluster with the encryption key specified in KmsKeyId.

    The account that owns the encryption key must grant permission to use the key to the destination account by using a key policy. This process is similar to how encrypted snapshots are shared, by using a key policy that grants permission to the destination account to use the key. An example of a key policy follows.

    { "Id": "key-policy-1", "Version": "2012-10-17", "Statement": [ { "Sid": "Allow use of the key", "Effect": "Allow", "Principal": {"AWS": [ "arn:aws:iam::account_id:user/KeyUser", "arn:aws:iam::account_id:root" ]}, "Action": [ "kms:CreateGrant", "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" }, { "Sid": "Allow attachment of persistent resources", "Effect": "Allow", "Principal": {"AWS": [ "arn:aws:iam::account_id:user/KeyUser", "arn:aws:iam::account_id:root" ]}, "Action": [ "kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant" ], "Resource": "*", "Condition": {"Bool": {"kms:GrantIsForAWSResource": true}} } ] }
Note

The RestoreDBClusterToPointInTime RDS API operation restores only the DB cluster, not the DB instances for that DB cluster. To create DB instances for the restored DB cluster, invoke the CreateDBInstance RDS API operation. Specify the identifier of the restored DB cluster in DBClusterIdentifier. You can create DB instances only after the RestoreDBClusterToPointInTime operation has completed and the DB cluster is available.

Checking if a DB cluster is a cross-account clone

The DBClusters object identifies whether each cluster is a cross-account clone. You can see the clusters that you have permission to clone by using the include-shared option when you run the RDS CLI command describe-db-clusters. However, you can't see most of the configuration details for such clusters.

To check if a DB cluster is a cross-account clone (AWS CLI)
  • Call the RDS CLI command describe-db-clusters.

    The following example shows how actual or potential cross-account clone DB clusters appear in describe-db-clusters output. For existing clusters owned by your AWS account, the CrossAccountClone field indicates whether the cluster is a clone of a DB cluster that is owned by another AWS account.

    In some cases, an entry might have a different AWS account number than yours in the DBClusterArn field. In this case, that entry represents a cluster that is owned by a different AWS account and that you can clone. Such entries have few fields other than DBClusterArn. When creating the cloned cluster, specify the same StorageEncrypted, Engine, and EngineVersion values as in the original cluster.

    $ aws rds describe-db-clusters --include-shared --region us-east-1
    {
      "DBClusters": [
        {
          "EarliestRestorableTime": "2023-02-01T21:17:54.106Z",
          "Engine": "aurora-mysql",
          "EngineVersion": "8.0.mysql_aurora.3.02.0",
          "CrossAccountClone": false,
          ...
        },
        {
          "EarliestRestorableTime": "2023-02-09T16:01:07.398Z",
          "Engine": "aurora-mysql",
          "EngineVersion": "8.0.mysql_aurora.3.02.0",
          "CrossAccountClone": true,
          ...
        },
        {
          "StorageEncrypted": false,
          "DBClusterArn": "arn:aws:rds:us-east-1:12345678:cluster:cluster-abcdefgh",
          "Engine": "aurora-mysql",
          "EngineVersion": "8.0.mysql_aurora.3.02.0"
        }
      ]
    }

To check if a DB cluster is a cross-account clone (RDS API)
  • Call the RDS API operation DescribeDBClusters.

    For existing clusters owned by your AWS account, the CrossAccountClone field indicates whether the cluster is a clone of a DB cluster owned by another AWS account. Entries with a different AWS account number in the DBClusterArn field represent clusters that you can clone and that are owned by other AWS accounts. These entries have few fields other than DBClusterArn. When creating the cloned cluster, specify the same StorageEncrypted, Engine, and EngineVersion values as in the original cluster.

    The following example shows a return value that demonstrates both actual and potential cloned clusters.

    { "DBClusters": [ { "EarliestRestorableTime": "2023-02-01T21:17:54.106Z", "Engine": "aurora-mysql", "EngineVersion": "8.0.mysql_aurora.3.02.0", "CrossAccountClone": false, ... }, { "EarliestRestorableTime": "2023-02-09T16:01:07.398Z", "Engine": "aurora-mysql", "EngineVersion": "8.0.mysql_aurora.3.02.0", "CrossAccountClone": true, ... }, { "StorageEncrypted": false, "DBClusterArn": "arn:aws:rds:us-east-1:12345678:cluster:cluster-abcdefgh", "Engine": "aurora-mysql", "EngineVersion": "8.0.mysql_aurora.3.02.0" } ] }