

# Relocate
<a name="migration-relocate-pattern-list"></a>

**Topics**
+ [Migrate an Amazon RDS for Oracle database to another AWS account and AWS Region using AWS DMS for ongoing replication](migrate-an-amazon-rds-for-oracle-database-to-another-aws-account-and-aws-region-using-aws-dms-for-ongoing-replication.md)
+ [Migrate an Amazon RDS DB instance to another VPC or account](migrate-an-amazon-rds-db-instance-to-another-vpc-or-account.md)
+ [Migrate an Amazon Redshift cluster to an AWS Region in China](migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china.md)
+ [Transport PostgreSQL databases between two Amazon RDS DB instances using pg_transport](transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport.md)

# Migrate an Amazon RDS for Oracle database to another AWS account and AWS Region using AWS DMS for ongoing replication
<a name="migrate-an-amazon-rds-for-oracle-database-to-another-aws-account-and-aws-region-using-aws-dms-for-ongoing-replication"></a>

*Durga Prasad Cheepuri and Eduardo Valentim, Amazon Web Services*

## Summary
<a name="migrate-an-amazon-rds-for-oracle-database-to-another-aws-account-and-aws-region-using-aws-dms-for-ongoing-replication-summary"></a>


> **Warning:** IAM users have long-term credentials, which presents a security risk. To help mitigate this risk, we recommend that you provide these users with only the permissions they require to perform the task and that you remove these users when they are no longer needed.

This pattern walks you through the steps for migrating an Amazon Relational Database Service (Amazon RDS) for Oracle source database to a different AWS account and AWS Region. The pattern uses a DB snapshot for a one-time full data load, and enables AWS Database Migration Service (AWS DMS) for ongoing replication.

## Prerequisites and limitations
<a name="migrate-an-amazon-rds-for-oracle-database-to-another-aws-account-and-aws-region-using-aws-dms-for-ongoing-replication-prereqs"></a>

**Prerequisites**
+ An active AWS account that contains the source Amazon RDS for Oracle database, which has been encrypted using a non-default AWS Key Management Service (AWS KMS) key
+ An active AWS account in a different AWS Region from the source database, to use for the target Amazon RDS for Oracle database
+ Virtual private cloud (VPC) peering between the source and target VPCs
+ Familiarity with [using an Oracle database as a source for AWS DMS](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ Familiarity with [using an Oracle database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html) 

**Product versions**
+ Oracle versions 11g (11.2.0.3.v1 and later) through 12.2, and 18c. For the latest list of supported versions and editions, see [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) and [Using an Oracle database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html) in the AWS documentation. For Oracle versions supported by Amazon RDS, see [Oracle on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html).

## Architecture
<a name="migrate-an-amazon-rds-for-oracle-database-to-another-aws-account-and-aws-region-using-aws-dms-for-ongoing-replication-architecture"></a>

**Source and target technology stacks**
+ Amazon RDS for Oracle DB instance

![\[Source AWS account connecting to target AWS account that contains source and target Regions\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5ecd5359-884e-455c-b5d0-ef08eda2ea1f/images/e17fa7fe-d924-4f35-9707-b93572fa1227.png)


**Ongoing replication architecture**

![\[DB on an EC2 instance connecting through VPC peering to a replication instance and Amazon RDS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5ecd5359-884e-455c-b5d0-ef08eda2ea1f/images/b60b3500-5d29-487a-bbab-0ae9f3f386aa.png)


## Tools
<a name="migrate-an-amazon-rds-for-oracle-database-to-another-aws-account-and-aws-region-using-aws-dms-for-ongoing-replication-tools"></a>

**Tools used for one-time full data load**
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. When you create a DB snapshot, you need to identify which DB instance you are going to back up, and then give your DB snapshot a name so you can restore from it later. The amount of time it takes to create a snapshot varies with the size of your databases. Because the snapshot includes the entire storage volume, the size of files, such as temporary files, also affects the amount of time it takes to create the snapshot. For more information about using DB snapshots, see [Creating a DB Snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html) in the Amazon RDS documentation. 
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) creates a key for Amazon RDS encryption. When you create an encrypted DB instance, you can also supply the AWS KMS key identifier for your encryption key. If you don't specify an AWS KMS key identifier, Amazon RDS uses your default encryption key for your new DB instance. AWS KMS creates the default encryption key for your AWS account, and your account has a different default encryption key for each AWS Region. For this pattern, the Amazon RDS DB instance must be encrypted with a non-default AWS KMS key. For more information about using AWS KMS keys for Amazon RDS encryption, see [Encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html) in the Amazon RDS documentation.

**Tools used for ongoing replication**
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) is used to replicate ongoing changes and to keep the source and target databases in sync. For more information about using AWS DMS for ongoing replication, see [Working with an AWS DMS replication instance](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html) in the AWS DMS documentation. 

## Epics
<a name="migrate-an-amazon-rds-for-oracle-database-to-another-aws-account-and-aws-region-using-aws-dms-for-ongoing-replication-epics"></a>

### Configure your source AWS account
<a name="configure-your-source-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the source Oracle DB instance. | Set the Amazon RDS for Oracle DB instance to run in ARCHIVELOG mode, and set the archive log retention period. For details, see [Working with an AWS managed Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Amazon-Managed). | DBA | 
| Set supplemental logging for the source Oracle DB instance. | Set database-level and table-level supplemental logging for the Amazon RDS for Oracle DB instance. For details, see [Working with an AWS managed Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Amazon-Managed). | DBA | 
| Update the AWS KMS key policy in the source account. | Update the AWS KMS key policy in the source AWS account to allow the target AWS account to use the encrypted Amazon RDS AWS KMS key. For details, see the [AWS KMS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-external-accounts). | SysAdmin | 
| Create a manual Amazon RDS DB snapshot of the source DB instance. | For details, see [Creating a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html). | AWS IAM user | 
| Share the manual, encrypted Amazon RDS snapshot with the target AWS account. | For details, see [Sharing a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html). | AWS IAM user | 
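The snapshot tasks above can also be performed from the AWS CLI. The following is a minimal sketch; the instance identifier, snapshot name, and target account ID are illustrative placeholders:

```shell
# Create a manual snapshot of the source DB instance
# (identifiers below are placeholders for illustration).
aws rds create-db-snapshot \
    --db-instance-identifier source-oracle-db \
    --db-snapshot-identifier source-oracle-db-migration-snapshot

# Wait until the snapshot is available before sharing it.
aws rds wait db-snapshot-available \
    --db-snapshot-identifier source-oracle-db-migration-snapshot

# Share the encrypted snapshot with the target AWS account.
aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier source-oracle-db-migration-snapshot \
    --attribute-name restore \
    --values-to-add 111122223333
```

Because the snapshot is encrypted with a non-default AWS KMS key, the key policy in the source account must already grant the target account access to that key, as described earlier in this epic.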

### Configure your target AWS account
<a name="configure-your-target-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Attach a policy. | In the target AWS account, attach an AWS Identity and Access Management (IAM) policy to the root IAM user, to allow the IAM user to copy an encrypted DB snapshot using the shared AWS KMS key. | SysAdmin | 
| Switch to the source AWS Region. |  | AWS IAM user | 
| Copy the shared snapshot. | In the Amazon RDS console, in the **Snapshots** pane, choose **Shared with Me**, and select the shared snapshot. Copy the snapshot to the same AWS Region as the source database by using the Amazon Resource Name (ARN) for the AWS KMS key used by the source database. For details, see [Copying a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html). | AWS IAM user | 
| Switch to the target AWS Region, and create a new AWS KMS key. |  | AWS IAM user | 
| Copy the snapshot. | Switch to the source AWS Region. On the Amazon RDS console, in the **Snapshots** pane, choose **Owned by Me**, and select the copied snapshot. Copy the snapshot to the target AWS Region by using the AWS KMS key for the new target AWS Region. | AWS IAM user | 
| Restore the snapshot. | Switch to the target AWS Region. On the Amazon RDS console, in the **Snapshots** pane, choose **Owned by Me**. Select the copied snapshot and restore it to an Amazon RDS for Oracle DB instance. For details, see [Restoring from a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html). | AWS IAM user | 
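The copy and restore steps above can be sketched with the AWS CLI. The account IDs, Region names, snapshot identifiers, and KMS key ARNs below are placeholders for illustration:

```shell
# In the source Region: copy the shared snapshot into the target account,
# using the KMS key that encrypts the source database.
aws rds copy-db-snapshot \
    --region us-east-1 \
    --source-db-snapshot-identifier arn:aws:rds:us-east-1:111122223333:snapshot:shared-snapshot \
    --target-db-snapshot-identifier local-copy \
    --kms-key-id arn:aws:kms:us-east-1:444455556666:key/source-key-id

# In the target Region: copy the snapshot across Regions, re-encrypting it
# with the new KMS key created in the target Region.
aws rds copy-db-snapshot \
    --region eu-west-1 \
    --source-region us-east-1 \
    --source-db-snapshot-identifier arn:aws:rds:us-east-1:444455556666:snapshot:local-copy \
    --target-db-snapshot-identifier target-region-copy \
    --kms-key-id arn:aws:kms:eu-west-1:444455556666:key/target-key-id

# In the target Region: restore the snapshot to a new
# Amazon RDS for Oracle DB instance.
aws rds restore-db-instance-from-db-snapshot \
    --region eu-west-1 \
    --db-instance-identifier target-oracle-db \
    --db-snapshot-identifier target-region-copy
```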

### Prepare your source database for ongoing replication
<a name="prepare-your-source-database-for-ongoing-replication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Oracle user with the appropriate permissions. | Create an Oracle user with the required privileges for Oracle as a source for AWS DMS. For details, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html). | DBA | 
| Configure the source database for Oracle LogMiner or Oracle Binary Reader. |  | DBA | 

### Prepare your target database for ongoing replication
<a name="prepare-your-target-database-for-ongoing-replication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Oracle user with the appropriate permissions. | Create an Oracle user with the required privileges for Oracle as a target for AWS DMS. For details, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html#CHAP_Target.Oracle.Privileges). | DBA | 

### Create AWS DMS components
<a name="create-dms-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a replication instance in the target AWS Region. | Create a replication instance in the VPC of the target AWS Region. For details, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html#CHAP_GettingStarted.ReplicationInstance). | AWS IAM user | 
| Create source and target endpoints with required encryption, and test connections. | For details, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html#CHAP_GettingStarted.Endpoints). | DBA | 
| Create replication tasks. | For details, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html#CHAP_GettingStarted.Tasks). | AWS IAM user | 
| Start the tasks and monitor them. | For details, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html). | AWS IAM user | 
| Enable validation on the task if needed. | Note that enabling validation does have a performance impact on the replication. For details, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html). | AWS IAM user | 
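The AWS DMS components above can be sketched with the AWS CLI. The instance class, endpoint settings, ARNs, and table-mapping file below are illustrative placeholders:

```shell
# Create a replication instance in the target Region's VPC.
aws dms create-replication-instance \
    --replication-instance-identifier oracle-migration-instance \
    --replication-instance-class dms.c5.large \
    --replication-subnet-group-identifier target-vpc-subnet-group

# Create the source endpoint (repeat with --endpoint-type target
# for the target Amazon RDS for Oracle database).
aws dms create-endpoint \
    --endpoint-identifier oracle-source \
    --endpoint-type source \
    --engine-name oracle \
    --server-name source-db.example.com \
    --port 1521 \
    --username dms_user \
    --password '<password>' \
    --database-name ORCL

# Create a CDC task for ongoing replication.
aws dms create-replication-task \
    --replication-task-identifier oracle-cdc-task \
    --source-endpoint-arn <source-endpoint-arn> \
    --target-endpoint-arn <target-endpoint-arn> \
    --replication-instance-arn <replication-instance-arn> \
    --migration-type cdc \
    --table-mappings file://table-mappings.json
```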

## Related resources
<a name="migrate-an-amazon-rds-for-oracle-database-to-another-aws-account-and-aws-region-using-aws-dms-for-ongoing-replication-resources"></a>
+ [Changing a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-external-accounts)
+ [Creating a manual Amazon RDS DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html)
+ [Sharing a manual Amazon RDS DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html)
+ [Copying a snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html) 
+ [Restoring from an Amazon RDS DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html) 
+ [Getting started with AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html) 
+ [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) 
+ [Using an Oracle database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html) 
+ [AWS DMS setup using VPC peering](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html#CHAP_ReplicationInstance.VPC.Configurations.ScenarioVPCPeer) 
+ [How do I share manual Amazon RDS DB snapshots or DB cluster snapshots with another AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-snapshots-share-account/) (AWS Knowledge Center article) 

# Migrate an Amazon RDS DB instance to another VPC or account
<a name="migrate-an-amazon-rds-db-instance-to-another-vpc-or-account"></a>

*Dhrubajyoti Mukherjee, Amazon Web Services*

## Summary
<a name="migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-summary"></a>

This pattern provides guidance for migrating an Amazon Relational Database Service (Amazon RDS) DB instance from one virtual private cloud (VPC) to another in the same AWS account, or from one AWS account to another AWS account.

This pattern is useful if you want to migrate your Amazon RDS DB instances to another VPC or account for separation or security reasons (for example, when you want to place your application stack and database in different VPCs). 

Migrating a DB instance to another AWS account involves steps such as taking a manual snapshot, sharing it, and restoring the snapshot in the target account. This process can be time-consuming, depending on database changes and transaction rates. It also causes database downtime, so plan ahead for the migration. Consider a blue/green deployment strategy to minimize downtime. Alternatively, you can evaluate AWS Database Migration Service (AWS DMS) to minimize downtime, although this pattern doesn’t cover that option. To learn more, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html).

## Prerequisites and limitations
<a name="migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Identity and Access Management (IAM) permissions required for the VPC, subnets, and Amazon RDS console

**Limitations**
+ Changes to a VPC cause a database reboot, resulting in application outages. We recommend that you migrate during off-peak hours.
+ Limitations when migrating Amazon RDS to another VPC:
  + The DB instance you’re migrating must be a single instance with no standby. It must not be a member of a cluster.
  + The DB instance must not be a Multi-AZ deployment.
  + The DB instance must not have any read replicas.
  + The subnet group created in the target VPC must have subnets from the Availability Zone where the source database is running.
+ Limitations when migrating Amazon RDS to another AWS account:
  + Sharing snapshots encrypted with the default service key for Amazon RDS isn’t currently supported.

## Architecture
<a name="migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-architecture"></a>

**Migrating to a VPC in the same AWS account**

The following diagram shows the workflow for migrating an Amazon RDS DB instance to a different VPC in the same AWS account.

![\[Workflow for migrating an Amazon RDS DB instance to a different VPC in the same AWS account\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/dabcee69-9cc6-47f9-9964-635e349caaaf/images/73e16544-6276-4f03-9ae2-42b8c7c20315.png)


The migration consists of the following steps. See the [Epics](#migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-epics) section for detailed instructions.

1. Create a DB subnet group in the target VPC. A DB subnet group is a collection of subnets that you can use to specify a specific VPC when you create DB instances.

1. Configure the Amazon RDS DB instance in the source VPC to use the new DB subnet group.

1. Apply the changes to migrate the Amazon RDS DB to the target VPC.
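The steps above can be sketched with the AWS CLI; the subnet group name and subnet IDs are illustrative placeholders, and `testrds` is the example instance identifier used later in this pattern:

```shell
# Step 1: Create a DB subnet group from subnets in the target VPC.
aws rds create-db-subnet-group \
    --db-subnet-group-name new-vpc-subnet-group \
    --db-subnet-group-description "Subnet group in the target VPC" \
    --subnet-ids subnet-0abc1234 subnet-0def5678

# Steps 2 and 3: Point the DB instance at the new subnet group and
# apply the change immediately (this reboots the instance).
aws rds modify-db-instance \
    --db-instance-identifier testrds \
    --db-subnet-group-name new-vpc-subnet-group \
    --apply-immediately
```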

**Migrating to a different AWS account**

The following diagram shows the workflow for migrating an Amazon RDS DB instance to a different AWS account.

![\[Workflow for migrating an Amazon RDS DB instance to a different AWS account\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/dabcee69-9cc6-47f9-9964-635e349caaaf/images/5536e69e-3965-4ca2-8a0b-2573659b5f8f.png)


The migration consists of the following steps. See the [Epics](#migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-epics) section for detailed instructions.

1. Access the Amazon RDS DB instance in the source AWS account.

1. Create an Amazon RDS snapshot in the source AWS account.

1. Share the Amazon RDS snapshot with the target AWS account.

1. Access the Amazon RDS snapshot in the target AWS account.

1. Create an Amazon RDS DB instance in the target AWS account.

## Tools
<a name="migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-tools"></a>

**AWS services**
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Best practices
<a name="migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-best-practices"></a>
+ If database downtime is a concern when migrating an Amazon RDS DB instance to another account, we recommend that you use [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html). This service provides data replication, which causes less than five minutes of outage time.

## Epics
<a name="migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-epics"></a>

### Migrate to a different VPC in the same AWS account
<a name="migrate-to-a-different-vpc-in-the-same-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a new VPC. | On the [Amazon VPC console](https://console.aws.amazon.com/vpc/), create a new VPC and subnets with the desired properties and IP address ranges. For detailed instructions, see the [Amazon VPC documentation](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html). | Administrator | 
| Create a DB subnet group. | On the [Amazon RDS console](https://console.aws.amazon.com/rds/), create a DB subnet group that contains the subnets of the target VPC. For additional information, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.CreateDBSubnetGroup). | Administrator | 
| Modify the Amazon RDS DB instance to use the new subnet group. | On the Amazon RDS console, modify the DB instance to use the new DB subnet group, and apply the change immediately. When the migration to the target VPC is complete, the target VPC's default security group is assigned to the Amazon RDS DB instance. You can configure a new security group for that VPC with the required inbound and outbound rules for your DB instance. Alternatively, use the AWS Command Line Interface (AWS CLI) to perform the migration to the target VPC by explicitly providing the new VPC security group ID. For example:<pre>aws rds modify-db-instance \<br />    --db-instance-identifier testrds \<br />    --db-subnet-group-name new-vpc-subnet-group \<br />    --vpc-security-group-ids sg-idxxxx \<br />    --apply-immediately</pre> | Administrator | 

### Migrate to a different AWS account
<a name="migrate-to-a-different-aws-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a new VPC and subnet group in the target AWS account. | In the target AWS account, create a new VPC, subnets, and a DB subnet group. For instructions, see the [Amazon VPC documentation](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) and the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html#USER_VPC.CreateDBSubnetGroup). | Administrator | 
| Create a manual snapshot of the database and share it with the target account. | For instructions, see [Sharing a DB snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html). | Administrator | 
| Launch a new Amazon RDS DB instance. | Launch a new Amazon RDS DB instance from the shared snapshot in the target AWS account. For instructions, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html). | Administrator | 
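The final restore step can be sketched with the AWS CLI; the instance identifier, snapshot ARN, and subnet group name below are illustrative placeholders:

```shell
# In the target account: restore a new DB instance from the snapshot
# shared by the source account, placing it in the DB subnet group
# created in the target VPC.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier restored-db \
    --db-snapshot-identifier arn:aws:rds:us-east-1:111122223333:snapshot:shared-manual-snapshot \
    --db-subnet-group-name target-vpc-subnet-group
```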

## Related resources
<a name="migrate-an-amazon-rds-db-instance-to-another-vpc-or-account-resources"></a>
+ [Amazon VPC documentation](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
+ [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html)
+ [How do I change the VPC for an RDS DB instance?](https://aws.amazon.com/premiumsupport/knowledge-center/change-vpc-rds-db-instance/) (AWS re:Post article)
+ [How do I transfer ownership of Amazon RDS resources to a different AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/account-transfer-rds/) (AWS re:Post article)
+ [How do I share manual Amazon RDS DB snapshots or Aurora DB cluster snapshots with another AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-snapshots-share-account/) (AWS re:Post article)
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)

# Migrate an Amazon Redshift cluster to an AWS Region in China
<a name="migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china"></a>

*Jing Yan, Amazon Web Services*

## Summary
<a name="migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china-summary"></a>

This pattern provides a step-by-step approach to migrate an Amazon Redshift cluster to an AWS Region in China from another AWS Region.

This pattern uses SQL commands to recreate all the database objects, and uses the UNLOAD command to move this data from Amazon Redshift to an Amazon Simple Storage Service (Amazon S3) bucket in the source Region. The data is then migrated to an S3 bucket in the AWS Region in China. The COPY command is used to load data from the S3 bucket and transfer it to the target Amazon Redshift cluster.

Amazon Redshift doesn't currently support cross-Region features such as snapshot copying to AWS Regions in China. This pattern provides a way to work around that limitation. You can also reverse the steps in this pattern to migrate data from an AWS Region in China to another AWS Region.

## Prerequisites and limitations
<a name="migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china-prereqs"></a>

**Prerequisites**
+ Active AWS accounts in both a China Region and an AWS Region outside China
+ Existing Amazon Redshift clusters in both a China Region and an AWS Region outside China

**Limitations**
+ This is an offline migration, which means the source Amazon Redshift cluster cannot perform write operations during the migration.

## Architecture
<a name="migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china-architecture"></a>

**Source technology stack**
+ Amazon Redshift cluster in an AWS Region outside China

**Target technology stack**
+ Amazon Redshift cluster in an AWS Region in China

**Target architecture**

![\[Migration of Amazon Redshift cluster data in S3 bucket in an AWS Region to bucket in a China Region.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f7d241d9-b700-406b-95a0-3e47e7f0fa60/images/b6016e3d-76db-4176-8f99-f804da94d3f2.png)


## Tools
<a name="migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china-tools"></a>

**Tools**
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/gsg/GetStartedWithS3.html) – Amazon Simple Storage Service (Amazon S3) is an object storage service that offers scalability, data availability, security, and performance. You can use Amazon S3 to store data from Amazon Redshift, and you can copy data from an S3 bucket to Amazon Redshift.
+ [Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html) – Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. 
+ [psql](https://www.postgresql.org/docs/8.4/app-psql.html) – psql is a terminal-based front-end to PostgreSQL. 

## Epics
<a name="migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china-epics"></a>

### Prepare for migration in the source Region
<a name="prepare-for-migration-in-the-source-region"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch and configure an EC2 instance in the source Region. | Sign in to the AWS Management Console and open the Amazon Elastic Compute Cloud (Amazon EC2) console. Your current Region is displayed in the navigation bar at the top of the screen. This Region cannot be an AWS Region in China. From the Amazon EC2 console dashboard, choose "Launch instance," and create and configure an EC2 instance. Important: Ensure your EC2 security groups for inbound rules allow unrestricted access to TCP port 22 from your source machine. For instructions on how to launch and configure an EC2 instance, see the "Related resources" section. | DBA, Developer | 
| Install the psql tool. | Download and install PostgreSQL. Amazon Redshift does not provide the psql tool; it is installed with PostgreSQL. For more information about using psql and installing PostgreSQL tools, see the "Related resources" section. | DBA | 
| Record the Amazon Redshift cluster details.  | Open the Amazon Redshift console, and choose "Clusters" in the navigation pane. Then choose the Amazon Redshift cluster name from the list. On the "Properties" tab, in the "Database configurations" section, record the "Database name" and "Port." Open the "Connection details" section and record the "Endpoint," which is in the "endpoint:<port>/<databasename>" format. Important: Ensure your Amazon Redshift security groups for inbound rules allow unrestricted access to TCP port 5439 from your EC2 instance. | DBA | 
| Connect psql to the Amazon Redshift cluster.  | At a command prompt, specify the connection information by running the "psql -h <endpoint> -U <userid> -d <databasename> -p <port>" command. At the psql password prompt, enter the password for the "<userid>" user. You are then connected to the Amazon Redshift cluster and can interactively enter commands. | DBA | 
| Create an S3 bucket.  | Open the Amazon S3 console, and create an S3 bucket to hold the files exported from Amazon Redshift. For instructions on how to create an S3 bucket, see the "Related resources" section. | DBA, AWS General | 
| Create an IAM policy that supports unloading data. | Open the AWS Identity and Access Management (IAM) console and choose "Policies." Choose "Create policy," and choose the "JSON" tab. Copy and paste the IAM policy for unloading data from the "Additional information" section. Important: Replace "s3_bucket_name" with your S3 bucket’s name. Choose "Review policy," and enter a name and description for the policy. Choose "Create policy." | DBA | 
| Create an IAM role to allow UNLOAD operation for Amazon Redshift. | Open the IAM console and choose "Roles." Choose "Create role," and choose "AWS service" in "Select type of trusted entity." Choose "Redshift" for the service, choose "Redshift – Customizable," and then choose "Next." Choose the "Unload" policy you created earlier, and choose "Next." Enter a "Role name," and choose "Create role." | DBA | 
| Associate IAM role with the Amazon Redshift cluster.  | Open the Amazon Redshift console, and choose "Manage IAM roles." Choose "Available roles" from the dropdown menu and choose the role you created earlier. Choose "Apply changes." When the "Status" for the IAM role on the "Manage IAM roles" shows as "In-sync", you can run the UNLOAD command. | DBA | 
| Stop write operations to the Amazon Redshift cluster. | Stop all write operations to the source Amazon Redshift cluster, and keep them stopped until the migration is complete. | DBA | 
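After the IAM role shows as "In-sync", the export can be sketched from the EC2 instance. The cluster endpoint, credentials, role ARN, bucket, and table names below are illustrative placeholders:

```shell
# Connect with psql and unload a table to the S3 bucket in the source Region.
# UNLOAD writes the query results as delimited files under the given prefix.
psql -h mycluster.abc123.us-east-1.redshift.amazonaws.com -U admin -d dev -p 5439 -c "
UNLOAD ('SELECT * FROM my_schema.my_table')
TO 's3://my-source-bucket/my_table_'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftUnloadRole'
ALLOWOVERWRITE;
"
```

Repeat the UNLOAD for each table you want to migrate.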

### Prepare for migration in the target Region
<a name="prepare-for-migration-in-the-target-region"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch and configure an EC2 instance in the target Region. | Sign in to the AWS Management Console for a Region in China, either Beijing or Ningxia. From the Amazon EC2 console, choose "Launch instance," and create and configure an EC2 instance. Important: Make sure your Amazon EC2 security groups for inbound rules allow unrestricted access to TCP port 22 from your source machine. For further instructions on how to launch and configure an EC2 instance, see the "Related resources" section. | DBA | 
| Record the Amazon Redshift cluster details.  | Open the Amazon Redshift console, and choose "Clusters" in the navigation pane. Then choose the Amazon Redshift cluster name from the list. On the "Properties" tab, in the "Database configurations" section, record the "Database name" and "Port." Open the "Connection details" section and record the "Endpoint," which is in the "endpoint:<port>/<databasename>" format. Important: Make sure your Amazon Redshift security groups for inbound rules allow unrestricted access to TCP port 5439 from your EC2 instance. | DBA | 
| Connect psql to the Amazon Redshift cluster.  | At a command prompt, specify the connection information by running the "psql -h <endpoint> -U <userid> -d <databasename> -p <port>" command. At the psql password prompt, enter the password for the "<userid>" user. You are then connected to the Amazon Redshift cluster and can interactively enter commands. | DBA | 
| Create an S3 bucket.  | Open the Amazon S3 console, and create an S3 bucket to hold the exported files from Amazon Redshift. For help with this and other stories, see the "Related resources" section. | DBA | 
| Create an IAM policy that supports copying data. | Open the IAM console and choose "Policies." Choose "Create policy," and choose the "JSON" tab. Copy and paste the IAM policy for copying data from the "Additional information" section. Important: Replace "s3_bucket_name" with your S3 bucket’s name. Choose "Review policy," and enter a name and description for the policy. Choose "Create policy." | DBA | 
| Create an IAM role to allow COPY operation for Amazon Redshift. | Open the IAM console and choose "Roles." Choose "Create role," and choose "AWS service" in "Select type of trusted entity." Choose "Redshift" for the service, choose "Redshift – Customizable," and then choose "Next." Choose the "Copy" policy you created earlier, and choose "Next." Enter a "Role name," and choose "Create role." | DBA | 
| Associate the IAM role with the Amazon Redshift cluster.  | Open the Amazon Redshift console, and choose "Manage IAM roles." Choose "Available roles" from the dropdown menu and choose the role you created earlier. Choose "Apply changes." When the "Status" for the IAM role on the "Manage IAM roles" page shows as "In-sync," you can run the "COPY" command. | DBA | 

### Verify source data and object information before beginning the migration
<a name="verify-source-data-and-object-information-before-beginning-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the rows in the source Amazon Redshift tables. | Use the scripts in the "Additional information" section to verify and record the number of rows in the source Amazon Redshift tables. Split the data evenly across the UNLOAD and COPY scripts so that each script covers a balanced amount of data; this improves unloading and loading efficiency. | DBA | 
| Verify the number of database objects in the source Amazon Redshift cluster. | Use the scripts in the "Additional information" section to verify and record the number of databases, users, schemas, tables, views, and user-defined functions (UDFs) in your source Amazon Redshift cluster. | DBA | 
| Verify SQL statement results before migration. | Prepare SQL statements for data validation based on your actual business and data requirements. You will run these statements after the migration to verify that the imported data is consistent and displayed correctly. | DBA | 
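
As a sketch of the row-count verification, you can record exact counts per table, or approximate counts for a whole schema at once; the schema and table names below are placeholders:

```sql
-- Exact count for a single table; repeat per table and record the results.
SELECT COUNT(*) FROM myschema.mytable;

-- Approximate per-table row counts for a whole schema, from the
-- SVV_TABLE_INFO system view.
SELECT "table", tbl_rows
FROM svv_table_info
WHERE schema = 'myschema'
ORDER BY "table";
```

The recorded counts become the baseline you compare against in the "Verify the data in the source and target Regions after the migration" epic.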

### Migrate data and objects to the target Region
<a name="migrate-data-and-objects-to-the-target-region"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate Amazon Redshift DDL scripts.  | Generate Data Definition Language (DDL) scripts by using the links from the "SQL statements to query Amazon Redshift" section in the "Additional information" section. These DDL scripts should include the "create user," "create schema," "privileges on schema to user," "create table/view," "privileges on objects to user," and "create function" queries. | DBA | 
| Create objects in the Amazon Redshift cluster for the target Region. | Run the DDL scripts by using the AWS Command Line Interface (AWS CLI) in the AWS Region in China. These scripts will create objects in the Amazon Redshift cluster for the target Region. | DBA | 
| Unload source Amazon Redshift cluster data to the S3 bucket. | Run the UNLOAD command to unload data from the Amazon Redshift cluster in the source Region to the S3 bucket. | DBA, Developer  | 
| Transfer source Region S3 bucket data to target Region S3 bucket. | Transfer the data from your source Region S3 bucket to the target S3 bucket. Because the "aws s3 sync" command cannot be used across these Regions, make sure you use the process outlined in the "Transferring Amazon S3 data from AWS Regions to AWS Regions in China" article in the "Related resources" section. | Developer | 
|  Load data into the target Amazon Redshift cluster.  | In the psql tool for your target Region, run the COPY command to load data from the S3 bucket to the target Amazon Redshift cluster. | DBA | 
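
The UNLOAD and COPY steps above can be sketched as follows; the bucket names, table name, and role ARNs are placeholders, and the target ARN uses the `aws-cn` partition that applies to AWS Regions in China:

```sql
-- In the source Region: unload the table to the source S3 bucket.
UNLOAD ('SELECT * FROM myschema.mytable')
TO 's3://s3_bucket_name/mytable/part_'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftUnloadRole'
GZIP;

-- In the target Region (after the S3 transfer): load from the target bucket.
COPY myschema.mytable
FROM 's3://target_bucket_name/mytable/part_'
IAM_ROLE 'arn:aws-cn:iam::444455556666:role/RedshiftCopyRole'
GZIP;
```

For balanced scripts, split large tables with a WHERE clause on a key range in the UNLOAD query, following the even-split guidance in the verification epic.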

### Verify the data in the source and target Regions after the migration
<a name="verify-the-data-in-the-source-and-target-regions-after-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify and compare the number of rows in the source and target tables. | Verify and compare the number of table rows in the source and target Regions to ensure all are migrated. | DBA | 
| Verify and compare the number of source and target database objects. | Verify and compare all database objects in the source and target Regions to ensure all are migrated. | DBA | 
| Verify and compare SQL script results in the source and target Regions. | Run the SQL scripts prepared before the migration. Verify and compare the data to ensure that the SQL results are correct. | DBA | 
| Reset the passwords of all users in the target Amazon Redshift cluster.  | After the migration is complete and all data is verified, you should reset all user passwords for the Amazon Redshift cluster in the AWS Region in China. | DBA | 
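
Resetting a user's password is a one-line command in psql; the user name and password below are placeholders (see also "Resetting an Amazon Redshift user password" in the "Related resources" section):

```sql
-- Run in the target cluster for each user; values are placeholders.
ALTER USER example_user PASSWORD 'NewSecurePassw0rd';
```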

## Related resources
<a name="migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china-resources"></a>
+ [Transferring Amazon S3 data from AWS Regions to AWS Regions in China](https://aws.amazon.com/cn/blogs/storage/transferring-amazon-s3-data-from-aws-regions-to-aws-regions-in-china/)
+ [Creating an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html)
+ [Resetting an Amazon Redshift user password](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_USER.html)
+ [psql documentation](https://www.postgresql.org/docs/8.4/static/app-psql.html)

## Additional information
<a name="migrate-an-amazon-redshift-cluster-to-an-aws-region-in-china-additional"></a>

*IAM policy for unloading data*

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::s3_bucket_name"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::s3_bucket_name/*"]
    }
  ]
}
```

*IAM policy for copying data*

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::s3_bucket_name"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::s3_bucket_name/*"]
    }
  ]
}
```

*SQL statements to query Amazon Redshift*

```
-- Database
select * from pg_database where datdba > 1;

-- User
select * from pg_user where usesysid > 1;

-- Schema
SELECT n.nspname AS "Name",
  pg_catalog.pg_get_userbyid(n.nspowner) AS "Owner"
FROM pg_catalog.pg_namespace n
WHERE n.nspname !~ '^pg_' AND n.nspname <> 'information_schema'
ORDER BY 1;

-- Table
select count(*) from pg_tables where schemaname not in ('pg_catalog','information_schema');

select schemaname, count(*) from pg_tables where schemaname not in ('pg_catalog','information_schema') group by schemaname order by 1;

-- View
SELECT
    n.nspname AS schemaname, c.relname AS viewname, pg_catalog.pg_get_userbyid(c.relowner) AS "Owner"
FROM
    pg_catalog.pg_class AS c
INNER JOIN
    pg_catalog.pg_namespace AS n
    ON c.relnamespace = n.oid
WHERE relkind = 'v' AND n.nspname NOT IN ('information_schema','pg_catalog');

-- UDF
SELECT
   n.nspname AS schemaname,
   p.proname AS proname,
   pg_catalog.pg_get_userbyid(p.proowner) AS "Owner"
FROM pg_proc p
LEFT JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.proowner != 1;
```

*SQL scripts to generate DDL statements*
+ [Get_schema_priv_by_user script](https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_get_schema_priv_by_user.sql)
+ [Generate_tbl_ddl script](https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_generate_tbl_ddl.sql)
+ [Generate_view_ddl script](https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_generate_view_ddl.sql)
+ [Generate_user_grant_revoke_ddl script](https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_generate_user_grant_revoke_ddl.sql)
+ [Generate_udf_ddl script](https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminViews/v_generate_udf_ddl.sql)
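
As a sketch, once these admin views are installed (the repository's instructions create them in an `admin` schema), extracting table DDL for one schema looks like this; `myschema` is a placeholder:

```sql
-- Assumes v_generate_tbl_ddl is installed in the admin schema.
SELECT ddl
FROM admin.v_generate_tbl_ddl
WHERE schemaname = 'myschema'
ORDER BY tablename, seq;
```

Save the output to a .sql file and run it against the target cluster to create the objects.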

# Transport PostgreSQL databases between two Amazon RDS DB instances using pg_transport
<a name="transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport"></a>

*Raunak Rishabh and Jitender Kumar, Amazon Web Services*

## Summary
<a name="transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport-summary"></a>

This pattern describes the steps for migrating very large databases between two Amazon Relational Database Service (Amazon RDS) for PostgreSQL DB instances by using the **pg_transport** extension. This extension provides a physical transport mechanism to move each database. By streaming the database files with minimal processing, it provides an extremely fast way to migrate large databases between DB instances with minimal downtime. The extension uses a pull model in which the target DB instance imports the database from the source DB instance.

## Prerequisites and limitations
<a name="transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport-prereqs"></a>

**Prerequisites**
+ Both DB instances must run the same major version of PostgreSQL.
+ The database must not exist on the target. Otherwise, the transport fails.
+ No extensions other than **pg_transport** can be enabled in the source database.
+ All source database objects must be in the default **pg_default** tablespace.
+ The security group of the source DB instance should allow traffic from the target DB instance.
+ Install a PostgreSQL client such as [psql](https://www.postgresql.org/docs/11/app-psql.html) or [pgAdmin](https://www.pgadmin.org/) to work with the Amazon RDS for PostgreSQL DB instances. You can install the client on your local system or on an Amazon Elastic Compute Cloud (Amazon EC2) instance. In this pattern, we use psql on an EC2 instance.

**Limitations**
+ You can't transport databases between different major versions of Amazon RDS for PostgreSQL.
+ The access privileges and ownership from the source database are not transferred to the target database.
+ You can't transport databases on read replicas or on parent instances of read replicas.
+ You can't use **reg** data types in any database tables that you plan to transport with this method.
+ You can run up to 32 total transports (including both imports and exports) at the same time on a DB instance.
+ You cannot rename or include/exclude tables. Everything is migrated as is.

**Caution**
+ Make backups before removing the extension, because removing the extension also removes dependent objects and some data that's critical to the operation of the database.
+ Consider the instance class and processes running on other databases on the source instance when you determine the number of workers and `work_mem` values for **pg_transport**.
+ When the transport starts, all connections on the source database are ended and the database is put into read-only mode.

**Note**  
When the transport is running on one database, it doesn’t affect other databases on the same server.

**Product versions**
+ Amazon RDS for PostgreSQL 10.10 and later, and Amazon RDS for PostgreSQL 11.5 and later. For the latest version information, see [Transporting PostgreSQL Databases Between DB Instances](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.TransportableDB.html) in the Amazon RDS documentation.

## Architecture
<a name="transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport-architecture"></a>

![\[Transporting PostgreSQL databases between Amazon RDS DB instances\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d5fb7ea3-32b7-4602-b382-3cf5c075c7c9/images/aec4d8d2-37a8-4136-9042-f9667ac4aebb.png)


## Tools
<a name="transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport-tools"></a>
+ **pg_transport** provides a physical transport mechanism to move each database. By streaming the database files with minimal processing, physical transport moves data much faster than traditional dump and load processes and requires minimal downtime. PostgreSQL transportable databases use a pull model where the destination DB instance imports the database from the source DB instance. You install this extension on your DB instances when you prepare the source and target environments, as explained in this pattern.
+ [psql](https://www.postgresql.org/docs/11/app-psql.html) enables you to connect to, and work with, your PostgreSQL DB instances. To install **psql** on your system, see the [PostgreSQL Downloads](https://www.postgresql.org/download/) page.

## Epics
<a name="transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport-epics"></a>

### Create the target parameter group
<a name="create-the-target-parameter-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a parameter group for the target system. | Specify a group name that identifies it as a target parameter group; for example, `pgtarget-param-group`. For instructions, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Creating). | DBA | 
| Modify the parameters for the parameter group. | Set the following parameters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport.html)For more information about these parameters, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.TransportableDB.html). | DBA | 

### Create the source parameter group
<a name="create-the-source-parameter-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a parameter group for the source system. | Specify a group name that identifies it as a source parameter group; for example, `pgsource-param-group`. For instructions, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Creating). | DBA | 
| Modify the parameters for the parameter group. | Set the following parameters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport.html)For more information about these parameters, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.TransportableDB.html). | DBA | 

### Prepare the target environment
<a name="prepare-the-target-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a new Amazon RDS for PostgreSQL DB instance to transport your source database to. | Determine the instance class and PostgreSQL version based on your business requirements. | DBA, Systems administrator, Database architect | 
| Modify the security group of the target to allow connections on the DB instance port from the EC2 instance. | By default, the port for the PostgreSQL instance is 5432. If you're using another port, connections to that port must be open for the EC2 instance. | DBA, Systems administrator | 
| Modify the instance, and assign the new target parameter group. | For example, `pgtarget-param-group`. | DBA | 
| Restart the target Amazon RDS DB instance.  | The parameters `shared_preload_libraries` and `max_worker_processes` are static parameters and require a reboot of the instance. | DBA, Systems administrator | 
| Connect to the database from the EC2 instance using psql. | Use the command: <pre>psql -h <rds_end_point> -p PORT -U username -d database -W</pre> | DBA | 
| Create the pg_transport extension. | Run the following query as a user with the `rds_superuser` role:<pre>create extension pg_transport;</pre> | DBA | 

### Prepare the source environment
<a name="prepare-the-source-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify the security group of the source to allow connections on the DB instance port from the Amazon EC2 instance and the target DB instance. | By default, the port for the PostgreSQL instance is 5432. If you're using another port, connections to that port must be open for the EC2 instance and the target DB instance. | DBA, Systems administrator | 
| Modify the instance and assign the new source parameter group. | For example, `pgsource-param-group`. | DBA | 
| Restart the source Amazon RDS DB instance.  | The parameters `shared_preload_libraries` and `max_worker_processes` are static parameters and require a reboot of the instance. | DBA | 
| Connect to the database from the EC2 instance using psql. | Use the command: <pre>psql -h <rds_end_point> -p PORT -U username -d database -W</pre> | DBA | 
| Create the pg_transport extension and remove all other extensions from the databases to be transported. | The transport will fail if any extensions other than **pg_transport** are installed on the source database. This command must be run by a user with the `rds_superuser` role. | DBA | 

### Perform the transport
<a name="perform-the-transport"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Perform a dry run. | Use the `transport.import_from_server` function to perform a dry run first:<pre>SELECT transport.import_from_server( 'source-db-instance-endpoint', source-db-instance-port, 'source-db-instance-user', 'source-user-password', 'source-database-name', 'destination-user-password', true);</pre>The last parameter of this function (set to `true`) defines the dry run. This function displays any errors that you would see when you run the main transport. Resolve the errors before you run the main transport.  | DBA | 
| If the dry run is successful, initiate the database transport. | Run the `transport.import_from_server` function to perform the transport. It connects to the source and imports the data. <pre>SELECT transport.import_from_server( 'source-db-instance-endpoint', source-db-instance-port, 'source-db-instance-user', 'source-user-password', 'source-database-name', 'destination-user-password', false);</pre>The last parameter of this function (set to `false`) indicates that this isn’t a dry run. | DBA | 
| Perform post-transport steps. | After the database transport is complete:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport.html) | DBA | 
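
One typical post-transport step, shown here as a sketch: because only **pg_transport** could be installed on the source before the transport, you re-create any extensions your applications need and can drop **pg_transport** once no further transports are planned. Back up first, as noted in the "Caution" section, and note that `pgcrypto` below is only an example extension name:

```sql
-- Re-create extensions that were removed from the source before transport.
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Optionally drop pg_transport when it is no longer needed.
DROP EXTENSION pg_transport;
```

Also recreate the access privileges and ownership on the target, because these are not transferred (see "Limitations").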

## Related resources
<a name="transport-postgresql-databases-between-two-amazon-rds-db-instances-using-pg-transport-resources"></a>
+ [Amazon RDS documentation](https://docs.aws.amazon.com/rds/)
+ [pg_transport documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html#PostgreSQL.TransportableDB.Setup)
+ [Migrating databases using RDS PostgreSQL Transportable Databases](https://aws.amazon.com/blogs/database/migrating-databases-using-rds-postgresql-transportable-databases/) (blog post)
+ [PostgreSQL downloads](https://www.postgresql.org/download/linux/redhat/)
+ [psql utility](https://www.postgresql.org/docs/11/app-psql.html)
+ [Creating a DB Parameter Group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html#USER_WorkingWithParamGroups.Creating)
+ [Modify Parameters in a DB Parameter Group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html#USER_WorkingWithParamGroups.Modifying)