

# Migrating databases to their Amazon RDS equivalents with AWS DMS
<a name="data-migrations"></a>

Homogeneous data migrations in AWS Database Migration Service (AWS DMS) simplify the migration of self-managed, on-premises databases to their Amazon Relational Database Service (Amazon RDS) equivalents. For example, you can use homogeneous data migrations to migrate an on-premises PostgreSQL database to Amazon RDS for PostgreSQL or Aurora PostgreSQL. For homogeneous data migrations, AWS DMS uses native database tools to provide easy and performant like-to-like migrations.

Homogeneous data migrations are serverless, which means that AWS DMS automatically scales the resources that are required for your migration. With homogeneous data migrations, you can migrate data, table partitions, data types, and secondary objects such as functions, stored procedures, and so on.

At a high level, homogeneous data migrations operate with instance profiles, data providers, and migration projects. When you create a migration project with compatible source and target data providers of the same type, AWS DMS deploys a serverless environment where your data migration runs. Then, AWS DMS connects to the source data provider, reads the source data, dumps it to files on disk, and restores the data on the target using native database tools. For more information about instance profiles, data providers, and migration projects, see [Working with data providers, instance profiles, and migration projects in AWS DMS](migration-projects.md).

For the list of supported source databases, see [Sources for DMS homogeneous data migrations](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.HomogeneousDataMigrations).

For the list of supported target databases, see [Targets for DMS homogeneous data migrations](CHAP_Introduction.Targets.md#CHAP_Introduction.Targets.HomogeneousDataMigrations).

The following diagram illustrates how homogeneous data migrations work.

![\[An architecture diagram of the DMS Homogeneous Data Migrations feature.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/dms-data-migrations-diagram.png)


The following sections provide information about using homogeneous data migrations.

**Topics**
+ [Supported AWS Regions](#data-migrations-supported-regions)
+ [Features](#data-migrations-features)
+ [Limitations for homogeneous data migrations](#data-migrations-limitations)
+ [Overview of the homogeneous data migration process in AWS DMS](dm-getting-started.md)
+ [Setting up homogeneous data migrations in AWS DMS](dm-prerequisites.md)
+ [Creating source data providers for homogeneous data migrations in AWS DMS](dm-data-providers-source.md)
+ [Creating and setting a target database to work with AWS DMS schema conversion](dm-data-providers-target.md)
+ [Running homogeneous data migrations in AWS DMS](dm-migrating-data.md)
+ [Troubleshooting for homogeneous data migrations in AWS DMS](dm-troubleshooting.md)

## Supported AWS Regions
<a name="data-migrations-supported-regions"></a>

You can run homogeneous data migrations in the following AWS Regions.


| Region Name | Region | 
| --- | --- | 
| US East (N. Virginia) | us-east-1 | 
| US East (Ohio) | us-east-2 | 
| US West (N. California) | us-west-1 | 
| US West (Oregon) | us-west-2 | 
| Canada (Central) | ca-central-1 | 
| Canada West (Calgary) | ca-west-1 | 
| South America (São Paulo) | sa-east-1 | 
| Asia Pacific (Tokyo) | ap-northeast-1 | 
| Asia Pacific (Seoul) | ap-northeast-2 | 
| Asia Pacific (Osaka) | ap-northeast-3 | 
| Asia Pacific (Singapore) | ap-southeast-1 | 
| Asia Pacific (Sydney) | ap-southeast-2 | 
| Asia Pacific (Jakarta) | ap-southeast-3 | 
| Asia Pacific (Melbourne) | ap-southeast-4 | 
| Asia Pacific (Hong Kong) | ap-east-1 | 
| Asia Pacific (Mumbai) | ap-south-1 | 
| Asia Pacific (Hyderabad) | ap-south-2 | 
| Europe (Frankfurt) | eu-central-1 | 
| Europe (Zurich) | eu-central-2 | 
| Europe (Stockholm) | eu-north-1 | 
| Europe (Ireland) | eu-west-1 | 
| Europe (London) | eu-west-2 | 
| Europe (Paris) | eu-west-3 | 
| Europe (Milan) | eu-south-1 | 
| Europe (Spain) | eu-south-2 | 
| Middle East (UAE) | me-central-1 | 
| Middle East (Bahrain) | me-south-1 | 
| Israel (Tel Aviv) | il-central-1 | 
| Africa (Cape Town) | af-south-1 | 

## Features
<a name="data-migrations-features"></a>

Homogeneous data migrations provide the following features:
+ AWS DMS automatically manages the compute and storage resources in the AWS Cloud that are required for homogeneous data migrations. AWS DMS deploys these resources in a serverless environment when you start a data migration.
+ AWS DMS uses native database tools to initiate a fully automated migration between databases of the same type.
+ You can use homogeneous data migrations to migrate your data as well as secondary objects such as partitions, functions, stored procedures, and so on.
+ You can run homogeneous data migrations in the following three migration modes: full load, ongoing replication, and full load with ongoing replication.
+ For homogeneous data migrations, you can use on-premises, Amazon EC2, or Amazon RDS databases as a source. You can choose Amazon RDS or Amazon Aurora as a migration target for homogeneous data migrations.
+ Homogeneous data migrations support target table preparation mode only for PostgreSQL, MongoDB, and Amazon DocumentDB migrations. For more information, see [Target table preparation mode](dm-migrating-data-table-prep.md).
+ You can use homogeneous data migrations to migrate your data from a MySQL-based read replica to an Amazon RDS or Amazon Aurora instance.

## Limitations for homogeneous data migrations
<a name="data-migrations-limitations"></a>

The following limitations apply when you use homogeneous data migrations:
+ Support for selection rules in AWS DMS homogeneous data migrations depends on the source database engine and migration type. PostgreSQL and MongoDB-compatible sources support selection rules for all migration types, while MySQL sources support selection rules only for the full load migration type.
+ Homogeneous data migrations don't provide a built-in tool for data validation.
+ When using homogeneous data migrations with PostgreSQL, AWS DMS migrates views as tables to your target database.
+ Homogeneous data migrations capture schema-level changes during ongoing data replication only for the MySQL engine. For other engines, if you create a new table in your source database, AWS DMS can't migrate this table. To migrate the new table, restart your data migration.
+ You can't use homogeneous data migrations in AWS DMS to migrate data from a higher database version to a lower database version.
+ Homogeneous data migrations don't support establishing a connection with database instances in VPC secondary CIDR ranges.
+ You can't use port 8081 for homogeneous data migrations from your data providers.
+ Homogeneous data migrations migrate encrypted MySQL databases and tables as unencrypted on the target database. This is because RDS for MySQL doesn't support encryption using the keyring plugin. For more information, see [MySQL keyring plugin not supported](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.KnownIssuesAndLimitations.html#MySQL.Concepts.Limits.KeyRing) in the *Amazon RDS User Guide*.

# Overview of the homogeneous data migration process in AWS DMS
<a name="dm-getting-started"></a>

You can use homogeneous data migrations in AWS DMS to migrate data between two databases of the same type. Use the following workflow to create and run a data migration.

1. Create the required AWS Identity and Access Management (IAM) policy and role. For more information, see [Creating IAM resources](dm-iam-resources.md).

1. Configure your source and target databases and create database users with the minimum permissions required for homogeneous data migrations in AWS DMS. For more information, see [Creating source data providers](dm-data-providers-source.md) and [Create and set a target database](dm-data-providers-target.md).

1. Store your source and target database credentials in AWS Secrets Manager. For more information, see [Step 1: Create the secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/hardcoded-db-creds.html#hardcoded-db-creds_step2) in the *AWS Secrets Manager User Guide*.

1. Create a subnet group, an instance profile, and data providers in the AWS DMS console. For more information, see [Creating a subnet group](subnet-group.md), [Creating instance profiles](instance-profiles.md), and [Creating data providers](data-providers-create.md).

1. Create a migration project by using the resources that you created in the previous step. For more information, see [Creating migration projects](migration-projects-create.md).

1. Create, configure, and start a data migration. For more information, see [Creating a data migration](dm-migrating-data-create.md).

1. After you complete the full load or ongoing replication, you can cut over to start using your new target database.

1. Clean up your resources. AWS DMS terminates the data migration in your migration project three days after you complete the migration. However, you must manually delete resources such as the instance profile, data providers, IAM policy and role, and secrets in AWS Secrets Manager.
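The database secrets in step 3 are JSON documents that hold connection attributes. As a minimal sketch, assuming the common key layout for database secrets (verify the exact keys that your engine requires; the host and user names below are placeholders), you can assemble and validate the secret string before creating the secret:

```
import json

def build_db_secret(username, password, host, port):
    """Assemble a secret string for a database data provider.

    The key names (username, password, host, port) follow the common
    Secrets Manager database-secret layout; confirm them against your
    engine's requirements before use.
    """
    secret = {
        "username": username,
        "password": password,
        "host": host,
        "port": port,
    }
    return json.dumps(secret)

# Placeholder values for illustration only.
secret_string = build_db_secret("dms_user", "example-password",
                                "source-db.example.com", 5432)
print(secret_string)
```

You can then pass the resulting string to the `create-secret` operation in AWS Secrets Manager, as described in the linked guide.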

For more information about homogeneous data migrations in AWS DMS, read the step-by-step migration walkthrough for [PostgreSQL to Amazon RDS for PostgreSQL](https://docs.aws.amazon.com/dms/latest/sbs/dm-postgresql.html) migrations.

The following video introduces the homogeneous data migrations user interface and helps you get familiar with this feature.

[![AWS Videos](http://img.youtube.com/vi/HOJfrR6lcuU/0.jpg)](http://www.youtube.com/watch?v=HOJfrR6lcuU)


# Setting up homogeneous data migrations in AWS DMS
<a name="dm-prerequisites"></a>

To set up homogeneous data migrations in AWS DMS, complete the following prerequisite tasks.

**Topics**
+ [Creating required IAM resources for homogeneous data migrations in AWS DMS](dm-iam-resources.md)
+ [Setting up a network for homogeneous data migrations in AWS DMS](dm-network.md)
+ [VPC peering network configurations](vpc-peering.md)

# Creating required IAM resources for homogeneous data migrations in AWS DMS
<a name="dm-iam-resources"></a>

To run homogeneous data migrations, you must create an IAM policy and an IAM role in your account to interact with other AWS services. In this section, you create these required IAM resources.

**Topics**
+ [Creating an IAM policy for homogeneous data migrations in AWS DMS](#dm-resources-iam-policy)
+ [Creating an IAM role for homogeneous data migrations in AWS DMS](#dm-resources-iam-role)

## Creating an IAM policy for homogeneous data migrations in AWS DMS
<a name="dm-resources-iam-policy"></a>

To access your databases and migrate your data, AWS DMS creates a serverless environment for homogeneous data migrations. Also, AWS DMS stores logs, metrics, and progress information for each data migration in Amazon CloudWatch. To create a data migration project, AWS DMS needs access to these services.

In this step, you create an IAM policy that provides AWS DMS with access to Amazon EC2 and CloudWatch resources. Next, create an IAM role and attach this policy.

**To create an IAM policy for homogeneous data migrations in AWS DMS**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. In the **Create policy** page, choose the **JSON** tab.

1. Paste the following JSON into the editor.


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ec2:DescribeVpcs"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "logs:CreateLogGroup"
               ],
               "Resource": "arn:aws:logs:*:*:log-group:dms-data-migration-*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "logs:CreateLogStream",
                   "logs:PutLogEvents"
               ],
               "Resource": "arn:aws:logs:*:*:log-group:dms-data-migration-*:log-stream:dms-data-migration-*"
           },
           {
               "Effect": "Allow",
               "Action": "cloudwatch:PutMetricData",
               "Resource": "*"
           }
       ]
   }
   ```


1. Choose **Next**.

1. Enter **HomogeneousDataMigrationsPolicy** for **Policy name**, and choose **Create policy**.
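If you create this policy with a script rather than the console, a quick structural check can confirm that the document allows every action AWS DMS needs before you attach it. A minimal sketch (the helper below is illustrative, not an AWS API; the policy text mirrors the JSON above):

```
REQUIRED = {"ec2:DescribeVpcs", "logs:CreateLogGroup",
            "logs:CreateLogStream", "logs:PutLogEvents",
            "cloudwatch:PutMetricData"}

POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ec2:DescribeVpcs"], "Resource": "*"},
        {"Effect": "Allow", "Action": ["logs:CreateLogGroup"],
         "Resource": "arn:aws:logs:*:*:log-group:dms-data-migration-*"},
        {"Effect": "Allow", "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
         "Resource": "arn:aws:logs:*:*:log-group:dms-data-migration-*:log-stream:dms-data-migration-*"},
        {"Effect": "Allow", "Action": "cloudwatch:PutMetricData", "Resource": "*"},
    ],
}

def allowed_actions(policy):
    """Collect every action that the policy document allows."""
    actions = set()
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        action = stmt["Action"]
        # "Action" can be a single string or a list of strings.
        actions.update([action] if isinstance(action, str) else action)
    return actions

missing = REQUIRED - allowed_actions(POLICY)
print(sorted(missing))  # an empty list means all required actions are covered
```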

## Creating an IAM role for homogeneous data migrations in AWS DMS
<a name="dm-resources-iam-role"></a>

In this step, you create an IAM role that provides AWS DMS with access to AWS Secrets Manager, Amazon EC2, and CloudWatch.

When creating an IAM role, you must also create a `dms-vpc-role`. For more information, see [Creating an IAM role for AWS DMS to manage Amazon VPC](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DMS_migration-IAM.dms-vpc-role.html) in the *Amazon Relational Database Service User Guide*.

**To create an IAM role for homogeneous data migrations in AWS DMS**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. On the **Select trusted entity** page, for **Trusted entity type**, choose **AWS Service**. For **Use cases for other AWS services**, choose **DMS**.

1. Select the **DMS** check box and choose **Next**.

1. On the **Add permissions** page, choose the **HomogeneousDataMigrationsPolicy** policy that you created earlier.

1. On the **Name, review, and create** page, enter **HomogeneousDataMigrationsRole** for **Role name**, and choose **Create role**.

1. Choose **Update policy**.
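If you create the role with the AWS CLI or an SDK instead of the console, the role needs a trust policy that lets AWS DMS assume it. The console configures this for you when you choose **DMS** as the trusted service; when scripting, a trust policy of the following standard form applies (verify the service principal against the current AWS DMS documentation):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "dms.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```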

# Setting up a network for homogeneous data migrations in AWS DMS
<a name="dm-network"></a>

With AWS DMS, you can create a serverless environment for homogeneous data migrations that uses a networking connectivity model based on network interfaces. For each data migration, AWS DMS assigns a private IP address within one of the subnets defined in the instance profile's subnet group. Additionally, AWS DMS might assign a non-static public IP address if the instance profile is configured for public access. The subnets used in the instance profile must provide access to both the source and target hosts, as defined in the data providers. This access can be within the local VPC, or established through VPC peering, AWS Direct Connect, or a VPN connection.

Also, for ongoing data replication, you must set up interaction between your source and target databases. These configurations depend on the location of your source data provider and your network settings. The following sections provide descriptions of common network configurations.

**Topics**
+ [Configuring a network using a single virtual private cloud (VPC)](#dm-network-one-vpc)
+ [Configuring a network using different virtual private clouds (VPCs)](#dm-network-different-vpc)
+ [Using Direct Connect or a VPN to configure a network to a VPC](#dm-networking_Direct_Connect)
+ [Resolving domain endpoints using DNS](#dm-networking-resolving_endpoints)

## Configuring a network using a single virtual private cloud (VPC)
<a name="dm-network-one-vpc"></a>

In this configuration, AWS DMS connects to your source and target data providers within the private network.

**To configure a network when your source and target data providers are in the same VPC**

1. Create the subnet group in the AWS DMS console with the VPC and subnets that your source and target data providers use. For more information, see [Creating a subnet group](subnet-group.md).

1. Create the instance profile in the AWS DMS console with the VPC and the subnet group that you created. Also, choose VPC security groups that your source and target data providers use. For more information, see [Creating instance profiles](instance-profiles.md).

1. Make sure that the security groups used by the source and target databases allow connections from the security group attached to the instance profile that your data migration uses, or from the CIDR blocks of the subnets specified in the subnet group.

This configuration doesn't require you to use the public IP address for data migrations.
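Step 3 is the most common point of failure. The check amounts to: does any inbound rule on the database's security group reference either the instance profile's security group, or a CIDR range that covers the migration's private IP address? A simplified sketch (the rule dictionaries and identifiers here are illustrative stand-ins, not the Amazon EC2 API shapes):

```
import ipaddress

def connection_allowed(inbound_rules, instance_profile_sg, dms_private_ip):
    """Return True if any inbound rule admits the migration's traffic.

    inbound_rules: list of dicts with either a "source_sg" or a "cidr"
    key (a simplified stand-in for real security group rule structures).
    """
    for rule in inbound_rules:
        # Rule references the instance profile's security group directly.
        if rule.get("source_sg") == instance_profile_sg:
            return True
        # Rule admits a CIDR block that contains the migration's private IP.
        cidr = rule.get("cidr")
        if cidr and ipaddress.ip_address(dms_private_ip) in ipaddress.ip_network(cidr):
            return True
    return False

# Placeholder identifiers for illustration.
rules = [{"source_sg": "sg-0abc123"}, {"cidr": "10.0.1.0/24"}]
print(connection_allowed(rules, "sg-0abc123", "10.0.1.57"))  # True
```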

## Configuring a network using different virtual private clouds (VPCs)
<a name="dm-network-different-vpc"></a>

In this configuration, AWS DMS uses a private network to connect to your source or target data provider. For another data provider, AWS DMS uses a public network. Depending on which data provider you have in the same VPC as your instance profile, choose one of the following configurations.

### To connect through a private network
<a name="dm-network-different-vpc.privatenetwork"></a>

1. Create the subnet group in the AWS DMS console with the VPC and subnets that your source data provider uses. For more information, see [Creating a subnet group](subnet-group.md).

1. Create the instance profile in the AWS DMS console with the VPC and the subnet group that you created. Also, choose VPC security groups that your source data provider uses. For more information, see [Creating instance profiles](instance-profiles.md).

1. Configure a VPC peering connection between the source and target database VPCs. For more information, see [Work with VPC peering connections](https://docs.aws.amazon.com/vpc/latest/peering/working-with-vpc-peering.html).

1. If you plan to use endpoints instead of private IP addresses directly, make sure to enable DNS resolution in both directions. For more information, see [Enable DNS resolution for a VPC peering connection](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-dns.html).

1. Allow access from the CIDR block of the source database's VPC in the target database security group. For more information, see [Controlling access with security groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html).

1. Allow access from the CIDR block of the target database's VPC in the source database security group. For more information, see [Controlling access with security groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html).

### To connect through a public network
<a name="dm-network-different-vpc.publicnetwork"></a>

If your database accepts connections from any address:

1. Create the subnet group in the AWS DMS console with the VPC and public subnets. For more information, see [Creating a subnet group](subnet-group.md).

1. Create the instance profile in the AWS DMS console with the VPC and the subnet group that you created. Set the **Publicly Available** option to **On** for the instance profile.

If you require a persistent public IP address that can be associated with the data migration:

1. Create the subnet group in the AWS DMS console with the VPC and private subnets. For more information, see [Creating a subnet group](subnet-group.md).

1. Create the instance profile in the AWS DMS console with the VPC and the subnet group that you created. Set the **Publicly Available** option to **Off** for the instance profile.

1. Set up a NAT gateway. For more information, see [Work with NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-working-with.html).

1. Set up the route table for the NAT gateway. For more information, see [NAT gateway use cases](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-scenarios.html).

1. Allow access from the public IP address of your NAT gateway in your database security group. For more information, see [Controlling access with security groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html).

## Using Direct Connect or a VPN to configure a network to a VPC
<a name="dm-networking_Direct_Connect"></a>

You can connect remote networks to your VPC through Direct Connect or VPN connections (software or hardware). These options let you extend your internal network into the AWS Cloud and integrate existing on-premises services, such as monitoring, authentication, security, and data systems, with your AWS resources. For this configuration, your VPC route table must include a rule that directs traffic to a host that can bridge VPC traffic to your on-premises VPN. This traffic can be designated using either your VPC CIDR range or specific IP addresses. The NAT host must have its own security group configured to allow traffic from your VPC CIDR range or security group into the NAT instance, ensuring communication between your VPC and your on-premises infrastructure. For more information, see [Step 5: Create a VPN connection](https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html#vpn-create-vpn-connection) in the *Get started with AWS Site-to-Site VPN* procedure in the *AWS Site-to-Site VPN User Guide*.

## Resolving domain endpoints using DNS
<a name="dm-networking-resolving_endpoints"></a>

For DNS resolution in AWS DMS homogeneous migrations, the service primarily uses the Amazon ECS DNS resolver to resolve domain endpoints. If you need additional DNS resolution capabilities, Amazon Route 53 Resolver is available as an alternative. For more information, see [Getting started with Route 53 Resolver](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-getting-started.html) in the *Amazon Route 53 Developer Guide*. For more information about configuring endpoint resolution using your on-premises name server with Amazon Route 53 Resolver, see [Using your own on-premises name server](CHAP_BestPractices.md#CHAP_BestPractices.Rte53DNSResolver).

**Note**  
If your data migration log shows the message "Initiating connection - Networking model: VPC Peering", see [VPC peering network configurations](vpc-peering.md).

# VPC peering network configurations
<a name="vpc-peering"></a>

With AWS DMS, you can create a serverless environment for homogeneous data migrations in a virtual private cloud (VPC) based on the Amazon VPC service. When you create your instance profile, you specify the VPC to use. You can use your default VPC for your account and AWS Region, or you can create a new VPC.

For each data migration, AWS DMS establishes a VPC peering connection with the VPC that you use for your instance profile. Next, AWS DMS adds the CIDR block in the security group that is associated with your instance profile. Because AWS DMS attaches a public IP address to your instance profile, all your data migrations that use the same instance profile have the same public IP address. When your data migration stops or fails, AWS DMS deletes the VPC peering connection.

To avoid overlapping the CIDR block of your instance profile's VPC, AWS DMS uses a `/24` prefix from one of the following CIDR blocks: `10.0.0.0/8`, `172.16.0.0/12`, and `192.168.0.0/16`. For example, if you run three data migrations in parallel, AWS DMS uses the following CIDR blocks to establish VPC peering connections.
+ `192.168.0.0/24` – for the first data migration
+ `192.168.1.0/24` – for the second data migration
+ `192.168.2.0/24` – for the third data migration
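If you script network checks, you can reproduce this selection scheme to verify that a candidate `/24` block doesn't overlap your instance profile's VPC. The following sketch uses Python's standard `ipaddress` module; the iteration order is an assumption chosen to match the example allocations above, not behavior guaranteed by AWS DMS:

```
import ipaddress

PRIVATE_RANGES = [ipaddress.ip_network(c) for c in
                  ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def first_free_peering_cidrs(vpc_cidr, count):
    """Return /24 blocks from the private ranges that don't overlap the VPC CIDR."""
    vpc = ipaddress.ip_network(vpc_cidr)
    found = []
    # Try 192.168.0.0/16 first to match the example allocations above
    # (an assumption, not documented AWS DMS behavior).
    for parent in reversed(PRIVATE_RANGES):
        for block in parent.subnets(new_prefix=24):
            if not block.overlaps(vpc):
                found.append(block)
                if len(found) == count:
                    return found
    return found

for block in first_free_peering_cidrs("10.0.0.0/16", 3):
    print(block)  # 192.168.0.0/24, then 192.168.1.0/24, then 192.168.2.0/24
```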

You can use different network configurations to set up interaction between your source and target databases with AWS DMS. Also, for ongoing data replication, you must set up interaction between your source and target databases. These configurations depend on the location of your source data provider and your network settings. The following sections provide descriptions of common network configurations.

**Topics**
+ [Configuring a network using a single virtual private cloud (VPC)](#vpc-peering-one-vpc)
+ [Configuring a network using different virtual private clouds (VPCs)](#vpc-peering-different-vpc)
+ [Using an on-premises source data provider](#vpc-peering-on-premesis)
+ [Configuring ongoing data replication](#vpc-peering-ongoing-replication)

## Configuring a network using a single virtual private cloud (VPC)
<a name="vpc-peering-one-vpc"></a>

In this configuration, AWS DMS connects to your source and target data providers within the private network.

**To configure a network when your source and target data providers are in the same VPC**

1. Create the subnet group in the AWS DMS console with the VPC and subnets that your source and target data providers use. For more information, see [Creating a subnet group](subnet-group.md).

1. Create the instance profile in the AWS DMS console with the VPC and the subnet group that you created. Also, choose VPC security groups that your source and target data providers use. For more information, see [Creating instance profiles](instance-profiles.md).

This configuration doesn't require you to use the public IP address for data migrations.

## Configuring a network using different virtual private clouds (VPCs)
<a name="vpc-peering-different-vpc"></a>

In this configuration, AWS DMS uses a private network to connect to your source or target data provider. For another data provider, AWS DMS uses a public network. Depending on which data provider you have in the same VPC as your instance profile, choose one of the following configurations.

**To configure a private network for your source data provider and a public network for your target data provider**

1. Create the subnet group in the AWS DMS console with the VPC and subnets that your source data provider uses. For more information, see [Creating a subnet group](subnet-group.md).

1. Create the instance profile in the AWS DMS console with the VPC and the subnet group that you created. Also, choose VPC security groups that your source data provider uses. For more information, see [Creating instance profiles](instance-profiles.md).

1. Open your migration project. On the **Data migrations** tab, choose your data migration. Take a note of the **public IP address** under **Connectivity and security** on the **Details** tab.

1. Allow access from the public IP address of your data migration in your target database security group. For more information, see [Controlling access with security groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html) in the *Amazon Relational Database Service User Guide*.

**To configure a public network for your source data provider and a private network for your target data provider**

1. Create the subnet group in the AWS DMS console with the VPC and subnets that your target data provider uses. For more information, see [Creating a subnet group](subnet-group.md).

1. Create the instance profile in the AWS DMS console with the VPC and the subnet group that you created. Also, choose VPC security groups that your target data provider uses. For more information, see [Creating instance profiles](instance-profiles.md).

1. Open your migration project. On the **Data migrations** tab, choose your data migration. Take a note of the **public IP address** under **Connectivity and security** on the **Details** tab.

1. Allow access from the public IP address of your data migration in your source database security group. For more information, see [Controlling access with security groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html) in the *Amazon Relational Database Service User Guide*.

## Using an on-premises source data provider
<a name="vpc-peering-on-premesis"></a>

In this configuration, AWS DMS connects to your source data provider within the public network. AWS DMS uses a private network to connect to your target data provider.

**Note**  
For homogeneous data migrations, AWS DMS connects to your source database within the public network. However, connectivity to a source database within a public network is not always possible. For more information, see [Migrate an on-premises MySQL database to Amazon Aurora MySQL over a private network using AWS DMS homogeneous data migration and Network Load Balancer](https://aws.amazon.com/blogs/database/migrate-an-on-premises-mysql-database-to-amazon-aurora-mysql-over-a-private-network-using-aws-dms-homogeneous-data-migration-and-network-load-balancer/).

**To configure a network for your source on-premises data provider**

1. Create the subnet group in the AWS DMS console with the VPC and subnets that your target data provider uses. For more information, see [Creating a subnet group](subnet-group.md).

1. Create the instance profile in the AWS DMS console with the VPC and the subnet group that you created. Also, choose VPC security groups that your target data provider uses. For more information, see [Creating instance profiles](instance-profiles.md).

1. Open your migration project. On the **Data migrations** tab, choose your data migration. Take a note of the **public IP address** under **Connectivity and security** on the **Details** tab.

1. Allow access to your source database from the public IP address of your data migration in AWS DMS.

AWS DMS creates inbound and outbound rules in VPC security groups. Make sure that you don't delete these rules, because doing so can cause your data migration to fail. You can configure your own rules in VPC security groups. We recommend that you add a description to your rules so that you can manage them.

## Configuring ongoing data replication
<a name="vpc-peering-ongoing-replication"></a>

To run data migrations of the **Full load and change data capture (CDC)** or **Change data capture (CDC)** type, you must allow connection between your source and target databases.

**To configure a connection between your publicly accessible source and target databases**

1. Take a note of the public IP addresses of your source and target databases.

1. Allow access to your source database from the public IP address of your target database.

1. Allow access to your target database from the public IP address of your source database.

**To configure a connection between your source and target databases that are privately accessible in a single VPC**

1. Take a note of the private IP addresses of your source and target databases.
**Important**  
If your source and target databases are in different VPCs or in different networks, then you can only use public IP addresses for your source and target databases. You can only use public hostnames or IP addresses in data providers.

1. Allow access to your source database from the security group of your target database.

1. Allow access to your target database from the security group of your source database.

# Creating source data providers for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source"></a>

AWS DMS supports the following databases as source data providers for [Homogeneous data migrations](data-migrations.md) projects: MySQL-compatible databases (MySQL and MariaDB), PostgreSQL, and MongoDB-compatible databases.

For supported database versions, see [Source data providers for DMS homogeneous data migrations](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.HomogeneousDataMigrations).

Your source data provider can be an on-premises, Amazon EC2, or Amazon RDS database.

**Topics**
+ [Using a MySQL compatible database as a source for homogeneous data migrations in AWS DMS](dm-data-providers-source-mysql.md)
+ [Using a PostgreSQL database as a source for homogeneous data migrations in AWS DMS](dm-data-providers-source-postgresql.md)
+ [Using a MongoDB compatible database as a source for homogeneous data migrations in AWS DMS](dm-data-providers-source-mongodb.md)

# Using a MySQL compatible database as a source for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source-mysql"></a>

You can use a MySQL-compatible database (MySQL or MariaDB) as a source for [Homogeneous data migrations](data-migrations.md) in AWS DMS. In this case, your source data provider can be an on-premises, Amazon EC2, or RDS for MySQL or MariaDB database.

To run homogeneous data migrations, you must use a database user with `SELECT` privileges on all source tables and secondary objects for replication. For change data capture (CDC) tasks, this user must also have the `REPLICATION CLIENT` (`BINLOG MONITOR` for MariaDB versions later than 10.5.2) and `REPLICATION SLAVE` privileges. For a full load data migration, you don't need these two privileges.

Use the following script to create a database user with the required permissions in your MySQL database. Run the `GRANT` queries for all databases that you migrate to AWS.

```
CREATE USER 'your_user'@'%' IDENTIFIED BY 'your_password';

GRANT REPLICATION SLAVE, REPLICATION CLIENT  ON *.* TO 'your_user'@'%';
GRANT SELECT, RELOAD, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'your_user'@'%';

GRANT BACKUP_ADMIN ON *.* TO 'your_user'@'%';
```

In the preceding example, replace each *user input placeholder* with your own information. If your source MySQL database version is lower than 8.0, then you can skip the `GRANT BACKUP_ADMIN` command.
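The version-dependent step above can be made mechanical. The following is a minimal sketch, assuming you generate the statements before running them with your client of choice; the user and host values are hypothetical placeholders.

```python
# Sketch: generate the MySQL GRANT statements from the script above,
# skipping BACKUP_ADMIN when the source server is older than MySQL 8.0.

def mysql_migration_grants(user, host, major_version):
    account = f"'{user}'@'{host}'"
    stmts = [
        f"GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO {account};",
        f"GRANT SELECT, RELOAD, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO {account};",
    ]
    if major_version >= 8:
        # BACKUP_ADMIN is a dynamic privilege introduced in MySQL 8.0.
        stmts.append(f"GRANT BACKUP_ADMIN ON *.* TO {account};")
    return stmts

print(len(mysql_migration_grants("your_user", "%", 8)))  # 3 statements
print(len(mysql_migration_grants("your_user", "%", 5)))  # 2 statements, BACKUP_ADMIN skipped
```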

Use the following script to create a database user with the required permissions in your MariaDB database. Run the `GRANT` queries for all databases that you migrate to AWS.

```
CREATE USER 'your_user'@'%' IDENTIFIED BY 'your_password';
GRANT SELECT, RELOAD, LOCK TABLES, REPLICATION SLAVE, BINLOG MONITOR, SHOW VIEW ON  *.* TO 'your_user'@'%';
```

In the preceding example, replace each *user input placeholder* with your own information. 

The following sections describe specific configuration prerequisites for self-managed and AWS-managed MySQL databases.

**Topics**
+ [Using a self-managed MySQL compatible database as a source for homogeneous data migrations](#dm-data-providers-source-mysql-sm)
+ [Using an AWS-managed MySQL compatible database as a source for homogeneous data migrations in AWS DMS](#dm-data-providers-source-mysql-aws)
+ [Limitations for using a MySQL compatible database as a source for homogeneous data migrations](#dm-data-providers-source-mysql-limitations)

## Using a self-managed MySQL compatible database as a source for homogeneous data migrations
<a name="dm-data-providers-source-mysql-sm"></a>

This section describes how to configure your MySQL compatible databases that are hosted on-premises or on Amazon EC2 instances.

Check the version of your source MySQL or MariaDB database. Make sure that AWS DMS supports your source MySQL or MariaDB database version as described in [Sources for DMS homogeneous data migrations](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.HomogeneousDataMigrations).

To use CDC, make sure to enable binary logging. To enable binary logging, configure the following parameters in the `my.ini` (Windows) or `my.cnf` (UNIX) file of your MySQL or MariaDB database.


| Parameter | Value | 
| --- | --- | 
| `server-id` | Set this parameter to a value of 1 or greater. | 
| `log-bin` | Set the path to the binary log file, such as `log-bin=E:\MySql_Logs\BinLog`. Don't include the file extension. | 
| `binlog_format` | Set this parameter to `ROW`. We recommend this setting during replication because in certain cases when `binlog_format` is set to `STATEMENT`, it can cause inconsistency when replicating data to the target. The database engine also writes similar inconsistent data to the target when `binlog_format` is set to `MIXED`, because the database engine automatically switches to `STATEMENT`-based logging. | 
| `expire_logs_days` | Set this parameter to a value of 1 or greater. To prevent overuse of disk space, we recommend that you don't use the default value of 0. | 
| `binlog_checksum` | Set this parameter to `NONE`. | 
| `binlog_row_image` | Set this parameter to `FULL`. | 
| `log_slave_updates` | Set this parameter to `TRUE` if you are using a MySQL or MariaDB replica as a source. | 
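The table above can serve as a pre-flight checklist. The following is a minimal sketch, assuming the `my.cnf`/`my.ini` settings have already been parsed into a mapping (for example, with `configparser`); the sample values are illustrative.

```python
# Sketch: check a parsed my.cnf/my.ini settings mapping against the CDC
# requirements in the table above. Returns a list of problems (empty = OK).

def check_binlog_settings(cfg):
    problems = []
    if int(cfg.get("server-id", 0)) < 1:
        problems.append("server-id must be 1 or greater")
    if not cfg.get("log-bin"):
        problems.append("log-bin must point to a binary log file path")
    if cfg.get("binlog_format", "").upper() != "ROW":
        problems.append("binlog_format must be ROW")
    if int(cfg.get("expire_logs_days", 0)) < 1:
        problems.append("expire_logs_days must be 1 or greater")
    if cfg.get("binlog_checksum", "").upper() != "NONE":
        problems.append("binlog_checksum must be NONE")
    if cfg.get("binlog_row_image", "").upper() != "FULL":
        problems.append("binlog_row_image must be FULL")
    return problems

sample = {
    "server-id": "1",
    "log-bin": "/var/lib/mysql/binlog",
    "binlog_format": "ROW",
    "expire_logs_days": "1",
    "binlog_checksum": "NONE",
    "binlog_row_image": "FULL",
}
print(check_binlog_settings(sample))  # [] when everything is set correctly
```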

## Using an AWS-managed MySQL compatible database as a source for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source-mysql-aws"></a>

This section describes how to configure your Amazon RDS for MySQL and Amazon RDS for MariaDB database instances.

When you use an AWS-managed MySQL or MariaDB database as a source for homogeneous data migrations in AWS DMS, make sure that you have the following prerequisites for CDC:
+ To enable binary logs for RDS for MySQL and MariaDB, enable automatic backups at the instance level. To enable binary logs for an Aurora MySQL cluster, change the variable `binlog_format` in the parameter group. You don't need to enable automatic backups for an Aurora MySQL cluster.

  Next, set the `binlog_format` parameter to `ROW`.

  For more information about setting up automatic backups, see [Enabling automated backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.Enabling) in the *Amazon RDS User Guide*.

  For more information about setting up binary logging for an Amazon RDS for MySQL or MariaDB database, see [ Setting the binary logging format](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQL.BinaryFormat.html) in the *Amazon RDS User Guide*. 

  For more information about setting up binary logging for an Aurora MySQL cluster, see [ How do I turn on binary logging for my Amazon Aurora MySQL cluster?](https://aws.amazon.com/premiumsupport/knowledge-center/enable-binary-logging-aurora/). 
+ Ensure that the binary logs are available to AWS DMS. Because AWS-managed MySQL and MariaDB databases purge the binary logs as soon as possible, you should increase the length of time that the logs remain available. For example, to increase log retention to 24 hours, run the following command. 

  ```
  call mysql.rds_set_configuration('binlog retention hours', 24);
  ```
+ Set the `binlog_row_image` parameter to `FULL`. 
+ Set the `binlog_checksum` parameter to `NONE`.
+ If you are using an Amazon RDS MySQL or MariaDB replica as a source, enable backups on the read replica, and ensure the `log_slave_updates` parameter is set to `TRUE`.

## Limitations for using a MySQL compatible database as a source for homogeneous data migrations
<a name="dm-data-providers-source-mysql-limitations"></a>

The following limitations apply when using a MySQL compatible database as a source for homogeneous data migrations:
+ MariaDB objects such as sequences are not supported in homogeneous migration tasks.
+ Migration from MariaDB to Amazon RDS MySQL/Aurora MySQL might fail due to incompatible object differences.
+ The username you use to connect to your data source has the following limitations:
  + Can be 2 to 64 characters in length.
  + Can't have spaces.
  + Can include the following characters: a-z, A-Z, 0-9, underscore (_).
  + Must start with a-z or A-Z.
+ The password you use to connect to your data source has the following limitations:
  + Can be 1 to 128 characters in length.
  + Can't contain any of the following: single quote ('), double quote ("), semicolon (;) or space.
+ AWS DMS homogeneous data migrations create unencrypted MySQL and MariaDB objects on the target Amazon RDS instance even if the source objects were encrypted. RDS for MySQL doesn't support the MySQL `keyring_aws` AWS Keyring Plugin that is required for encrypted objects. For more information, see [MySQL Keyring Plugin not supported](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.KnownIssuesAndLimitations.html#MySQL.Concepts.Limits.KeyRing) in the *Amazon RDS User Guide*.
+ AWS DMS does not use Global Transaction Identifiers (GTIDs) for data replication even if the source data contains them.
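The username and password rules above (which recur for the other engines in this chapter) are easy to check before you create a data provider. This is a minimal sketch of such a pre-flight check, not an authoritative validator; the sample values are hypothetical.

```python
import re

# Sketch: validate data-provider credentials against the rules listed above,
# so that a connection attempt doesn't fail late in the setup.

# 2-64 characters, starts with a letter, only letters, digits, underscore.
USERNAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_]{1,63}$")

def valid_username(name):
    return bool(USERNAME_RE.fullmatch(name))

def valid_password(pw):
    # 1-128 characters; no single quote, double quote, semicolon, or space.
    return 1 <= len(pw) <= 128 and not any(c in pw for c in "'\"; ")

print(valid_username("dms_user"))      # True
print(valid_password("bad;password"))  # False
```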

# Using a PostgreSQL database as a source for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source-postgresql"></a>

You can use a PostgreSQL database as a source for [Homogeneous data migrations](data-migrations.md) in AWS DMS. In this case, your source data provider can be an on-premises, Amazon EC2, or RDS for PostgreSQL database.

To run homogeneous data migrations, grant superuser permissions for the database user that you specified in AWS DMS for your PostgreSQL source database. The database user needs superuser permissions to access replication-specific functions in the source. For a full load data migration, your database user needs `SELECT` permissions on tables to migrate them.

Use the following script to create a database user with the required permissions in your PostgreSQL source database. Run the `GRANT` query for all databases that you migrate to AWS.

```
CREATE USER your_user WITH LOGIN PASSWORD 'your_password';
ALTER USER your_user WITH SUPERUSER;
GRANT SELECT ON ALL TABLES IN SCHEMA schema_name TO your_user;
```

In the preceding example, replace each *user input placeholder* with your own information.
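Because the `GRANT SELECT` statement above is per schema, it must be repeated for every schema you migrate. The following is a minimal sketch that emits one statement per schema; the user and schema names are hypothetical placeholders.

```python
# Sketch: emit the statements from the script above for every schema
# that you plan to migrate with AWS DMS.

def postgres_source_grants(user, schemas):
    stmts = [f"ALTER USER {user} WITH SUPERUSER;"]
    for schema in schemas:
        stmts.append(f"GRANT SELECT ON ALL TABLES IN SCHEMA {schema} TO {user};")
    return stmts

for stmt in postgres_source_grants("your_user", ["public", "sales"]):
    print(stmt)
```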

AWS DMS supports CDC for PostgreSQL tables with primary keys. If a table doesn't have a primary key, the write-ahead logs (WAL) don't include a before image of the database row. Here, you can use additional configuration settings and use table replica identity as a workaround. However, this approach can generate extra logs. We recommend that you use table replica identity as a workaround only after careful testing. For more information, see [Additional configuration settings when using a PostgreSQL database as a DMS source](CHAP_Source.PostgreSQL.md#CHAP_Source.PostgreSQL.Advanced).

The following sections describe specific configuration prerequisites for self-managed and AWS-managed PostgreSQL databases.

**Topics**
+ [Using a self-managed PostgreSQL database as a source for homogeneous data migrations in AWS DMS](#dm-data-providers-source-postgresql-sm)
+ [Using an AWS-managed PostgreSQL database as a source for homogeneous data migrations in AWS DMS](#dm-data-providers-source-postgresql-aws)
+ [Limitations for using a PostgreSQL compatible database as a source for homogeneous data migrations](#dm-data-providers-source-postgresql-limitations)

## Using a self-managed PostgreSQL database as a source for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source-postgresql-sm"></a>

This section describes how to configure your PostgreSQL databases that are hosted on-premises or on Amazon EC2 instances.

Check the version of your source PostgreSQL database. Make sure that AWS DMS supports your source PostgreSQL database version as described in [Sources for DMS homogeneous data migrations](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.HomogeneousDataMigrations).

Homogeneous data migrations support change data capture (CDC) using logical replication. To turn on logical replication on a self-managed PostgreSQL source database, set the following parameters and values in the `postgresql.conf` configuration file:
+ Set `wal_level` to `logical`.
+ Set `max_replication_slots` to a value greater than 1.

  Set the `max_replication_slots` value according to the number of tasks that you want to run. For example, to run five tasks, set a minimum of five slots. Slots open automatically as soon as a task starts and remain open even when the task is no longer running. Make sure that you delete open slots manually.
+ Set `max_wal_senders` to a value greater than 1.

  The `max_wal_senders` parameter sets the number of concurrent tasks that can run.
+ The `wal_sender_timeout` parameter ends replication connections that are inactive longer than the specified number of milliseconds. The default is 60000 milliseconds (60 seconds). Setting the value to 0 (zero) disables the timeout mechanism, and is a valid setting for DMS.

Some parameters are static, and you can only set them at server start. Any changes to their entries in the configuration file are ignored until the server is restarted. For more information, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/intro-whatis.html).
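The `postgresql.conf` requirements above can be verified before a restart. This is a minimal sketch, assuming the settings have already been parsed into a mapping; `planned_tasks` is an assumed input used to size the replication slots.

```python
# Sketch: validate the logical-replication settings listed above before
# starting a CDC task. Returns a list of problems (empty = OK).

def check_logical_replication(cfg, planned_tasks=1):
    problems = []
    if cfg.get("wal_level") != "logical":
        problems.append("wal_level must be logical")
    # At least one slot per planned task, and greater than 1 overall.
    if int(cfg.get("max_replication_slots", 0)) < max(2, planned_tasks):
        problems.append("max_replication_slots too low for the planned tasks")
    if int(cfg.get("max_wal_senders", 0)) < 2:
        problems.append("max_wal_senders must be greater than 1")
    return problems

cfg = {"wal_level": "logical", "max_replication_slots": "5", "max_wal_senders": "5"}
print(check_logical_replication(cfg, planned_tasks=5))  # []
```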

## Using an AWS-managed PostgreSQL database as a source for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source-postgresql-aws"></a>

This section describes how to configure your Amazon RDS for PostgreSQL database instances.

Use the AWS master user account for the PostgreSQL DB instance as the user account for the PostgreSQL source data provider for homogeneous data migrations in AWS DMS. The master user account has the required roles that allow it to set up CDC. If you use an account other than the master user account, then the account must have the `rds_superuser` role and the `rds_replication` role. The `rds_replication` role grants permissions to manage logical slots and to stream data using logical slots.

Use the following code example to grant the `rds_superuser` and `rds_replication` roles.

```
GRANT rds_superuser to your_user;
GRANT rds_replication to your_user;
```

In the preceding example, replace *your_user* with the name of your database user.

To turn on logical replication, set the `rds.logical_replication` parameter in your DB parameter group to 1. This static parameter requires a reboot of the DB instance to take effect.

## Limitations for using a PostgreSQL compatible database as a source for homogeneous data migrations
<a name="dm-data-providers-source-postgresql-limitations"></a>

The following limitations apply when using a PostgreSQL compatible database as a source for homogeneous data migrations:
+ The username you use to connect to your data source has the following limitations:
  + Can be 2 to 64 characters in length.
  + Can't have spaces.
  + Can include the following characters: a-z, A-Z, 0-9, underscore (_).
  + Must start with a-z or A-Z.
+ The password you use to connect to your data source has the following limitations:
  + Can be 1 to 128 characters in length.
  + Can't contain any of the following: single quote ('), double quote ("), semicolon (;) or space.

# Using a MongoDB compatible database as a source for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source-mongodb"></a>

You can use a MongoDB-compatible database as a source for [Homogeneous data migrations](data-migrations.md) in AWS DMS. In this case, your source data provider can be an on-premises database, a MongoDB database on Amazon EC2, or an Amazon DocumentDB (with MongoDB compatibility) database.

For supported database versions, see [Source data providers for DMS homogeneous data migrations](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.HomogeneousDataMigrations).

The following sections describe specific configuration prerequisites for self-managed MongoDB databases and AWS-managed Amazon DocumentDB databases.

**Topics**
+ [Using a self-managed MongoDB database as a source for homogeneous data migrations in AWS DMS](#dm-data-providers-source-mongodb-sm)
+ [Using an Amazon DocumentDB database as a source for homogeneous data migrations in AWS DMS](#dm-data-providers-source-mongodb-aws)
+ [Features for using a MongoDB-compatible database as a source for homogeneous data migrations](#dm-data-providers-source-mongodb-features)
+ [Limitations for using a MongoDB-compatible database as a source for homogeneous data migrations](#dm-data-providers-source-mongodb-limitations)
+ [Best practices for using a MongoDB-compatible database as a source for homogeneous data migrations](#dm-data-providers-source-mongodb-bestpractices)

## Using a self-managed MongoDB database as a source for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source-mongodb-sm"></a>

This section describes how to configure your MongoDB databases that are hosted on-premises or on Amazon EC2 instances.

Check the version of your source MongoDB database. Make sure that AWS DMS supports your source MongoDB database version as described in [Source data providers for DMS homogeneous data migrations](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.HomogeneousDataMigrations).

To run homogeneous data migrations with a MongoDB source, you can create either a user account with root privileges, or a user with permissions only on the database to migrate. For more information about user creation, see [Permissions needed when using MongoDB as a source for AWS DMS](CHAP_Source.MongoDB.md#CHAP_Source.MongoDB.PrerequisitesCDC).

To use ongoing replication or CDC with MongoDB, AWS DMS requires access to the MongoDB operations log (oplog). For more information, see [Configuring a MongoDB replica set for CDC](CHAP_Source.MongoDB.md#CHAP_Source.MongoDB.PrerequisitesCDC.ReplicaSet). 

For information about MongoDB authentication methods, see [Security requirements when using MongoDB as a source for AWS DMS](CHAP_Source.MongoDB.md#CHAP_Source.MongoDB.Security).

For MongoDB as a source, homogeneous data migrations support all of the data types that Amazon DocumentDB supports.

For MongoDB as a source, to store user credentials in Secrets Manager, you need to provide them in plain text, using the **Other type of secrets** type. For more information, see [Using secrets to access AWS Database Migration Service endpoints](security_iam_secretsmanager.md).

The following code sample demonstrates how to store database secrets using plain text.

```
{
  "username": "dbuser",
  "password": "dbpassword"
}
```

## Using an Amazon DocumentDB database as a source for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-source-mongodb-aws"></a>

This section describes how to configure your Amazon DocumentDB database instances for use as a source for homogeneous data migrations.

Use the master username for the Amazon DocumentDB instance as the user account for the MongoDB-compatible source data provider for homogeneous data migrations in AWS DMS. The master user account has the required roles that allow it to set up CDC. If you use an account other than the master user account, then the account must have the root role. For more information on the user creation as a root account, see [Setting permissions to use Amazon DocumentDB as a source](CHAP_Source.DocumentDB.md#CHAP_Source.DocumentDB.Permissions).

To turn on logical replication, set the `change_stream_log_retention_duration` parameter in your database parameter group to a value appropriate for your transaction workload. This static parameter requires a reboot of the DB instance to take effect. Before starting a data migration of any task type, including Full load only, enable Amazon DocumentDB change streams for all collections within a given database, or only for selected collections. For more information about enabling change streams for Amazon DocumentDB, see [Enabling Change Streams](https://docs.aws.amazon.com/documentdb/latest/developerguide/change_streams.html#change_streams-enabling) in the *Amazon DocumentDB Developer Guide*. 

**Note**  
AWS DMS uses the Amazon DocumentDB change stream to capture changes during ongoing replication. If Amazon DocumentDB flushes out the records from the change stream before DMS reads them, your tasks will fail. We recommend setting the `change_stream_log_retention_duration` parameter to retain changes for at least 24 hours.

To use Amazon DocumentDB for homogeneous data migration, store user credentials in Secrets Manager under **Credentials for Amazon DocumentDB database**.

## Features for using a MongoDB-compatible database as a source for homogeneous data migrations
<a name="dm-data-providers-source-mongodb-features"></a>
+ You can migrate all the secondary indexes that Amazon DocumentDB supports during the Full load phase.
+ AWS DMS migrates collections in parallel. Homogeneous data migrations calculate segments at runtime based on the average size of each document in the collection for maximum performance.
+ DMS can replicate the secondary indexes that you create in the CDC phase. DMS supports this feature in MongoDB version 6.0.
+ DMS supports documents with a nesting level greater than 97.

## Limitations for using a MongoDB-compatible database as a source for homogeneous data migrations
<a name="dm-data-providers-source-mongodb-limitations"></a>
+ Documents can't have field names with a `$` prefix.
+ AWS DMS doesn't support time series collection migration.
+ AWS DMS doesn't support `create`, `drop`, or `rename collection` DDL events during the CDC phase.
+ AWS DMS doesn't support inconsistent data types in the collection for the `_id` field. For example, the following unsupported collection has multiple data types for the `_id` field.

  ```
  rs0 [direct: primary] test> db.collection1.aggregate([
  ...   {
  ...     $group: {
  ...       _id: { $type: "$_id" },
  ...       count: { $sum: 1 }
  ...     }
  ...   }
  ... ])
  [ { _id: 'string', count: 6136 }, { _id: 'objectId', count: 848033 } ]
  ```
+ For CDC-only tasks, AWS DMS only supports the `immediate` start mode.
+ AWS DMS doesn't support documents with invalid UTF8 characters.
+ AWS DMS doesn't support sharded collections.
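The inconsistent `_id` limitation above can be checked up front without running the aggregation on the server. This is a minimal sketch in plain Python, assuming the documents have already been fetched (for example, with pymongo); the sample documents are hypothetical.

```python
from collections import Counter

# Sketch: count the distinct Python types of the _id field across documents,
# mirroring the $type aggregation shown above. More than one type means the
# collection isn't supported as a source.

def id_type_counts(documents):
    return Counter(type(doc["_id"]).__name__ for doc in documents)

docs = [{"_id": "a1"}, {"_id": "a2"}, {"_id": 42}]
counts = id_type_counts(docs)
print(len(counts) > 1)  # True -> mixed _id types, not supported
```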

## Best practices for using a MongoDB-compatible database as a source for homogeneous data migrations
<a name="dm-data-providers-source-mongodb-bestpractices"></a>
+ For multiple large databases and collections hosted on the same MongoDB instance, we recommend that you use selection rules for each database and collection to split the work across multiple data migration tasks and projects. You can tune your database and collection divisions for maximum performance.

# Creating target data providers for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-target"></a>

You can use MySQL-compatible, PostgreSQL, and Amazon DocumentDB databases as target data providers for homogeneous data migrations in AWS DMS.

For supported database versions, see [Target data providers for DMS homogeneous data migrations](CHAP_Introduction.Targets.md#CHAP_Introduction.Targets.HomogeneousDataMigrations).

Your target data provider can be an Amazon RDS DB instance or an Amazon Aurora DB cluster. Note that the database version of your target data provider must be the same as or higher than the database version of your source data provider.

**Topics**
+ [Using a MySQL compatible database as a target for homogeneous data migrations in AWS DMS](dm-data-providers-target-mysql.md)
+ [Using a PostgreSQL database as a target for homogeneous data migrations in AWS DMS](dm-data-providers-target-postgresql.md)
+ [Using an Amazon DocumentDB database as a target for homogeneous data migrations in AWS DMS](dm-data-providers-target-docdb.md)

# Using a MySQL compatible database as a target for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-target-mysql"></a>

You can use a MySQL compatible database as a migration target for homogeneous data migrations in AWS DMS.

AWS DMS requires certain permissions to migrate data into your target Amazon RDS for MySQL, Amazon RDS for MariaDB, or Amazon Aurora MySQL database. Use the following script to create a database user with the required permissions in your MySQL target database.

If your target MariaDB database version is lower than 10.5, you can skip the `GRANT SLAVE MONITOR` command.

```
CREATE USER 'your_user'@'%' IDENTIFIED BY 'your_password';

GRANT ALTER, CREATE, DROP, INDEX, INSERT, UPDATE, DELETE, SELECT, CREATE VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER, EXECUTE, REFERENCES ON *.* TO 'your_user'@'%';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'your_user'@'%';
GRANT SLAVE MONITOR ON *.* TO 'your_user'@'%';
```

In the preceding example, replace each *user input placeholder* with your own information.

Use the following script to create a database user with the required permissions in your MariaDB database. Run the `GRANT` queries for all databases that you migrate to AWS.

```
CREATE USER 'your_user'@'%' IDENTIFIED BY 'your_password';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER, EXECUTE, SLAVE MONITOR, REPLICATION SLAVE ON *.* TO 'your_user'@'%';
```

In the preceding example, replace each *user input placeholder* with your own information.

**Note**  
In Amazon RDS, when you turn on automated backups for a MySQL or MariaDB DB instance, you also turn on binary logging. When binary logging is enabled, your data migration task might fail with the following error while creating secondary objects such as functions, procedures, and triggers on the target database. If your target database has binary logging enabled, set `log_bin_trust_function_creators` to `true` in the database parameter group before starting the task.  

```
ERROR 1419 (HY000): You don't have the SUPER privilege and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)
```
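The parameter change described in the note above can be sketched as a boto3 payload. The parameter group name is a hypothetical placeholder, and the `modify_db_parameter_group` call is shown only in a comment because it requires AWS credentials.

```python
# Sketch: the parameter change that avoids ERROR 1419. With AWS credentials,
# you would pass this payload to boto3's RDS client:
#   rds.modify_db_parameter_group(DBParameterGroupName="my-mysql-params",
#                                 Parameters=params)

params = [{
    "ParameterName": "log_bin_trust_function_creators",
    "ParameterValue": "1",          # "1" represents true in RDS parameter groups
    "ApplyMethod": "immediate",     # dynamic parameter, no reboot required
}]
print(params[0]["ParameterName"])
```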

## Limitations for using a MySQL compatible database as a target for homogeneous data migrations
<a name="dm-data-providers-target-mysql-limitations"></a>

The following limitations apply when using a MySQL compatible database as a target for homogeneous data migrations:
+ The username you use to connect to your data source has the following limitations:
  + Can be 2 to 64 characters in length.
  + Can't have spaces.
  + Can include the following characters: a-z, A-Z, 0-9, underscore (_).
  + Can't include a hyphen (-).
  + Must start with a-z or A-Z.
+ The password you use to connect to your data source has the following limitations:
  + Can be 1 to 128 characters in length.
  + Can't contain any of the following: single quote ('), double quote ("), semicolon (;) or space.

# Using a PostgreSQL database as a target for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-target-postgresql"></a>

You can use a PostgreSQL database as a migration target for homogeneous data migrations in AWS DMS.

AWS DMS requires certain permissions to migrate data into your target Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL database. Use the following script to create a database user with the required permissions in your PostgreSQL target database.

```
CREATE USER your_user WITH LOGIN PASSWORD 'your_password';
GRANT USAGE ON SCHEMA schema_name TO your_user;
GRANT CONNECT ON DATABASE db_name to your_user;
GRANT CREATE ON DATABASE db_name TO your_user;
GRANT CREATE ON SCHEMA schema_name TO your_user;
GRANT UPDATE, INSERT, SELECT, DELETE, TRUNCATE ON ALL TABLES IN SCHEMA schema_name TO your_user;
-- For "Full load and change data capture (CDC)" and "Change data capture (CDC)"
-- data migrations, setting up logical replication requires rds_superuser privileges.
GRANT rds_superuser TO your_user;
```

In the preceding example, replace each *user input placeholder* with your own information.

To turn on logical replication for your RDS for PostgreSQL target, set the `rds.logical_replication` parameter in your DB parameter group to 1. This static parameter requires a reboot of the DB instance or DB cluster to take effect. Static parameters can only be set at server start; the database ignores changes to their entries in the DB parameter group until you restart the server.

PostgreSQL uses triggers to implement foreign key constraints. During the full load phase, AWS DMS loads each table one at a time. We recommend that you turn off foreign key constraints on your target database during the full load. To do so, use one of the following methods.
+ Temporarily turn off all triggers for your instance, and finish the full load.
+ Change the value of the `session_replication_role` parameter in PostgreSQL.

  At any given time, a trigger can be in one of the following states: `origin`, `replica`, `always`, or `disabled`. When you set the `session_replication_role` parameter to `replica`, only triggers in the `replica` state are active. Otherwise, the triggers remain inactive.

## Limitations for using a PostgreSQL compatible database as a target for homogeneous data migrations
<a name="dm-data-providers-target-postgresql-limitations"></a>

The following limitations apply when using a PostgreSQL compatible database as a target for homogeneous data migrations:
+ The username you use to connect to your data source has the following limitations:
  + Can be 2 to 64 characters in length.
  + Can't have spaces.
  + Can include the following characters: a-z, A-Z, 0-9, underscore (_).
  + Must start with a-z or A-Z.
+ The password you use to connect to your data source has the following limitations:
  + Can be 1 to 128 characters in length.
  + Can't contain any of the following: single quote ('), double quote ("), semicolon (;) or space.

# Using an Amazon DocumentDB database as a target for homogeneous data migrations in AWS DMS
<a name="dm-data-providers-target-docdb"></a>

You can use an Amazon DocumentDB (with MongoDB compatibility) database or an Amazon DocumentDB elastic cluster as a migration target for homogeneous data migrations in AWS DMS. 

To run homogeneous data migrations for an Amazon DocumentDB target, you can create either a user account with administrator privileges, or a user with read/write permissions only on the database to migrate.

Homogeneous data migrations supports all of the BSON data types that Amazon DocumentDB supports. For a list of these data types, see [ Data Types ](https://docs.aws.amazon.com/documentdb/latest/developerguide/mongo-apis.html#mongo-apis-data-types) in the *Amazon DocumentDB Developer Guide*.

To use the sharding features of an Amazon DocumentDB elastic cluster when migrating a non-sharded collection from the source, create a sharded collection to migrate to before starting the data migration task. For more information about sharding collections in an Amazon DocumentDB elastic cluster, see [Step 5: Shard your collection](https://docs.aws.amazon.com/documentdb/latest/developerguide/elastic-get-started.html#elastic-get-started-step6) in the *Amazon DocumentDB Developer Guide*.

For an Amazon DocumentDB target, AWS DMS supports the `none` or `require` SSL modes.

# Running homogeneous data migrations in AWS DMS
<a name="dm-migrating-data"></a>

You can use [Homogeneous data migrations](data-migrations.md) in AWS DMS to migrate data from your source database to the equivalent engine on Amazon Relational Database Service (Amazon RDS), Amazon Aurora, or Amazon DocumentDB. AWS DMS automates the data migration process by using native database tools in your source and target databases.

After you create an instance profile and compatible data providers for homogeneous data migrations, create a migration project. For more information, see [ Creating migration projects](migration-projects-create.md).

The following sections describe how to create, configure, and run homogeneous data migrations.

**Topics**
+ [Creating a data migration in AWS DMS](dm-migrating-data-create.md)
+ [Selection rules for homogeneous data migrations](dm-migrating-data-selectionrules.md)
+ [Managing data migrations in AWS DMS](dm-migrating-data-manage.md)
+ [Monitoring data migrations in AWS DMS](dm-migrating-data-monitoring.md)
+ [Statuses of homogeneous data migrations in AWS DMS](dm-migrating-data-statuses.md)
+ [Migrating data from MySQL databases with homogeneous data migrations in AWS DMS](dm-migrating-data-mysql.md)
+ [Migrating data from PostgreSQL databases with homogeneous data migrations in AWS DMS](dm-migrating-data-postgresql.md)
+ [Migrating data from MongoDB databases with homogeneous data migrations in AWS DMS](dm-migrating-data-mongodb.md)
+ [Target table preparation mode](dm-migrating-data-table-prep.md)

# Creating a data migration in AWS DMS
<a name="dm-migrating-data-create"></a>

After you create a migration project with compatible data providers of the same type, you can use this project for homogeneous data migrations. For more information, see [Creating migration projects](migration-projects-create.md).

To start using homogeneous data migrations, create a new data migration. You can create several homogeneous data migrations of different types in a single migration project.

AWS DMS limits the number of homogeneous data migrations that you can create for your AWS account. For information about AWS DMS service quotas, see [Quotas for AWS Database Migration Service](CHAP_Limits.md).

Before you create a data migration, make sure that you set up the required resources such as your source and target databases, an IAM policy and role, an instance profile, and data providers. For more information, see [Creating IAM resources](dm-iam-resources.md), [Creating instance profiles](instance-profiles.md), and [Creating data providers](data-providers-create.md).

Also, we recommend that you don't use homogeneous data migrations to migrate data from a higher database version to a lower database version. Check the versions of databases that you use for source and target data providers, and upgrade your target database version, if needed.

**To create a data migration**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project, and on the **Data migrations** tab, choose **Create data migration**.

1. For **Name**, enter a name for your data migration. Make sure that you use a unique name for your data migration so that you can easily identify it.

1. For **Replication type**, choose the type of data migration that you want to configure. You can choose one of the following options.
   + **Full load** — Migrates your existing source data.
   + **Full load and change data capture (CDC)** — Migrates your existing source data and replicates ongoing changes.
   + **Change data capture (CDC)** — Replicates ongoing changes.

1. Select the check box for **Turn on CloudWatch logs** to store data migration logs in Amazon CloudWatch. If you don't choose this option, then you can't see the log files when your data migration fails.

1. (Optional) Expand **Advanced settings**. For **Number of jobs**, enter the number of parallel threads that AWS DMS can use to migrate your source data to the target.

1. For **IAM service role**, choose the IAM role that you created in prerequisites. For more information, see [Creating an IAM role for homogeneous data migrations in AWS DMS](dm-iam-resources.md#dm-resources-iam-role).

1. Configure the **Start mode** for data migrations of the **Change data capture (CDC)** type. You can choose one of the following options.
   + **Immediately** — Starts the ongoing replication when you start your data migration.
   + **Using a native start point** — Starts the ongoing replication from the specified point.

     For PostgreSQL databases, enter the name of the logical replication slot for **Slot name** and enter the transaction log sequence number for **Native start point**.

     For MySQL databases, enter the transaction log sequence number for **Log sequence number (LSN)**.

1. Configure the **Stop mode** for data migrations of the **Change data capture (CDC)** or **Full load and change data capture (CDC)** type. You can choose one of the following options.
   + **Don’t stop CDC** — AWS DMS continues the ongoing replication until you stop your data migration.
   + **Using a server time point** — AWS DMS stops the ongoing replication at the specified time.

     If you choose this option, then for **Stop date and time**, enter the date and time when you want to automatically stop the ongoing replication.

1. Choose **Create data migration**.

AWS DMS creates your data migration and adds it to the list on the **Data migrations** tab in your migration project. Here you can see the status of your data migration. For more information, see [Migration statuses](dm-migrating-data-statuses.md).

**Important**  
For data migrations of the **Full load** and **Full load and change data capture (CDC)** type, AWS DMS deletes all data, tables, and other database objects on your target database. Make sure you have a backup of your target database.

After AWS DMS creates your data migration, the status of this data migration is set to **Ready**. To migrate your data, you must start the data migration manually. To do so, choose your data migration from the list. Next, for **Actions**, choose **Start**. For more information, see [Managing data migrations](dm-migrating-data-manage.md).

The first launch of a homogeneous data migration requires some setup. AWS DMS creates a serverless environment for your data migration. This process takes up to 15 minutes. After you stop and restart your data migration, AWS DMS doesn't create the environment again, and you can access your data migration faster.

# Selection rules for homogeneous data migrations
<a name="dm-migrating-data-selectionrules"></a>

You can use selection rules to choose the schema, tables, or both that you want to include in your replication.

When creating a data migration task, choose **Add selection rule**.

For the rule settings, provide the following values:
+ **Schema**: Choose **Enter a schema**.
+ **Schema name**: Provide the name of the schema you want to replicate, or use **%** as a wildcard.
+ **Table name**: Provide the name of the table you want to replicate, or use **%** as a wildcard.

By default, the only `rule-action` value that AWS DMS supports is `include`, and the only wildcard character that AWS DMS supports is `%`.

**Note**  
Support for selection rules in AWS DMS homogeneous data migrations varies based on the combination of source database engine and migration type. PostgreSQL and MongoDB-compatible sources support selection rules for all migration types, while MySQL sources support selection rules only for the Full load migration type.

**Example Migrate all tables in a schema**  
The following example migrates all tables from a schema named `dmsst` in your source to your target endpoint.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-action": "include",
            "object-locator": {
                "schema-name": "dmsst",
                "table-name": "%"
            },
            "filters": [],
            "rule-id": "1",
            "rule-name": "1"
        }
    ]
}
```

**Example Migrate some tables in a schema**  
The following example migrates all tables with a name starting with `collectionTest`, from a schema named `dmsst` in your source to your target endpoint.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-action": "include",
            "object-locator": {
                "schema-name": "dmsst",
                "table-name": "collectionTest%"
            },
            "filters": [],
            "rule-id": "1",
            "rule-name": "1"
        }
    ]
}
```

**Example Migrate specific tables from multiple schemas**  
The following example migrates specific tables from two schemas, `dmsst` and `Test`, in your source to your target endpoint.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-action": "include",
            "object-locator": {
                "schema-name": "dmsst",
                "table-name": "collectionTest1"
            },
            "filters": [],
            "rule-id": "1",
            "rule-name": "1"
        },
        {
            "rule-type": "selection",
            "rule-action": "include",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "products"
            },
            "filters": [],
            "rule-id": "2",
            "rule-name": "2"
        }
    ]
}
```

# Managing data migrations in AWS DMS
<a name="dm-migrating-data-manage"></a>

After you create a data migration, AWS DMS doesn't automatically start migrating data. You start a data migration manually when needed.

Before you start a data migration, you can modify all settings of your data migration. After you start your data migration, you can't change the replication type. To use another replication type, create a new data migration.

**To start a data migration**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project. On the **Data migrations** tab, choose your data migration. The **Summary** page for your data migration opens.

1. For **Actions**, choose **Start**.

   After this, AWS DMS creates a serverless environment for your data migration. This process takes up to 15 minutes.

After you start a data migration, AWS DMS sets its status to **Starting**. The next status that AWS DMS uses for your data migration depends on the type of replication that you chose in the data migration settings. For more information, see [Migration statuses](dm-migrating-data-statuses.md).

**To modify a data migration**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project. On the **Data migrations** tab, choose your data migration. The **Summary** page for your data migration opens.

1. Choose **Modify**.

1. Configure the settings for your data migration.
**Important**  
If you have started a data migration, then you can't change the replication type.

1. To view your data migration logs in Amazon CloudWatch, select the check box for **Turn on CloudWatch logs**.

1. Choose **Save changes**.

After AWS DMS starts a data migration, you can stop it. To do so, choose your data migration on the **Data migrations** tab. Next, for **Actions**, choose **Stop**.

After you stop a data migration, AWS DMS sets its status to **Stopping**. Next, AWS DMS sets the status of this data migration to **Stopped**. After AWS DMS stops a data migration, you can modify, resume, restart, or delete your data migration.

To continue the data replication, choose the data migration that you stopped on the **Data migrations** tab. Next, for **Actions**, choose **Resume processing**.

To restart the data load, choose the data migration that you stopped on the **Data migrations** tab. Next, for **Actions**, choose **Restart**. AWS DMS deletes all data from your target database and starts the data migration from scratch.

You can delete a data migration that you have stopped or that you haven't started. To delete a data migration, choose it on the **Data migrations** tab. Next, for **Actions**, choose **Delete**. To delete your migration project, stop and delete all data migrations.

# Monitoring data migrations in AWS DMS
<a name="dm-migrating-data-monitoring"></a>

After you start your homogeneous data migration, you can monitor its status and progress. Data migrations of large data sets, such as hundreds of gigabytes, can take hours to complete. To maintain the reliability, availability, and performance of your data migration, monitor its progress regularly.

**To check the status and progress of your data migration**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project and navigate to the **Data migrations** tab.

1. For your data migration, see the **Status** column. For more information about values in this column, see [Migration statuses](dm-migrating-data-statuses.md).

1. For a running data migration, the **Migration progress** column displays the percentage of migrated data.

**To check the details of your data migration**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project. On the **Data migrations** tab, choose your data migration.

1. On the **Details** tab, you can see the migration progress. In particular, you can see the following metrics.
   + **Public IP address** – The public IP address of your data migration. You need this value to configure a network. For more information, see [Setting up a network](dm-network.md).
   + **Tables loaded** – The number of successfully loaded tables.
   + **Tables loading** – The number of tables currently loading.
   + **Tables queued** – The number of tables currently waiting to be loaded.
   + **Tables errored** – The number of tables that failed to load.
   + **Elapsed time** – The amount of time that passed after the start of your data migration.
   + **CDC latency** – The average time that passes between when a change occurs on a source table and when AWS DMS applies this change to the target table.
   + **Migration started** – The time when you started this data migration.
   + **Migration stopped** – The time when you stopped this data migration.

1. To view the log files for your data migration, choose **View CloudWatch logs** under **Homogeneous data migration settings**. You can **Turn on CloudWatch logs** when you create or modify a data migration. For more information, see [Creating a data migration](dm-migrating-data-create.md) and [Managing data migrations](dm-migrating-data-manage.md).

You can use Amazon CloudWatch alarms or events to closely track your data migration. For more information, see [What are Amazon CloudWatch, Amazon CloudWatch Events, and Amazon CloudWatch Logs?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) in the *Amazon CloudWatch User Guide*. Note that there is a charge for using Amazon CloudWatch.

For homogeneous data migrations, AWS DMS includes the following metrics in Amazon CloudWatch.


|  Metric  |  Description  | 
| --- | --- | 
| OverallCDCLatency |  The overall latency during the CDC phase. For MySQL databases, this metric shows the number of seconds that passes between the change in the source binary log and the replication of this change. For PostgreSQL databases, this metric shows the number of seconds that passes between `last_msg_receipt_time` and `last_msg_send_time` from the `pg_stat_subscription` view. Units: Seconds  | 
| StorageConsumption |  The storage that your data migration consumes. Units: Bytes  | 
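
For example, you might alarm on sustained CDC latency with the AWS CLI, as in the following sketch. The `AWS/DMS` namespace and the threshold values here are assumptions used to illustrate the call; verify the metric's actual namespace and dimensions in your account before relying on it.

```
# Hypothetical alarm: trigger if average OverallCDCLatency stays above
# 60 seconds for three consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
    --alarm-name dms-homogeneous-cdc-latency \
    --namespace "AWS/DMS" \
    --metric-name OverallCDCLatency \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 60 \
    --comparison-operator GreaterThanThreshold
```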

# Statuses of homogeneous data migrations in AWS DMS
<a name="dm-migrating-data-statuses"></a>

For each data migration that you run, AWS DMS displays the **Status** in the AWS DMS console. The following list includes the available statuses.
+ `Creating` – AWS DMS is creating the data migration.
+ `Ready` – The data migration is ready to start.
+ `Starting` – AWS DMS is creating the serverless environment for your data migration. This process takes up to 15 minutes.
+ `Load running` – AWS DMS is performing the full load migration.
+ `Load complete, replication ongoing` – AWS DMS completed the full load and now replicates ongoing changes. AWS DMS uses this status only for data migrations of the full load and change data capture (CDC) type.
+ `Replication ongoing` – AWS DMS is replicating ongoing changes. AWS DMS uses this status only for migrations of the change data capture (CDC) type.
+ `Reloading target` – AWS DMS is restarting the data migration and performing the specified migration type.
+ `Stopping` – AWS DMS is stopping the data migration. AWS DMS sets this status after you choose to stop the data migration on the **Actions** menu.
+ `Stopped` – AWS DMS has stopped the data migration.
+ `Failed` – The data migration has failed. For more information, see the log files.

  To view the log files, choose your data migration on the **Data migrations** tab. Next, choose **View CloudWatch logs** under **Homogeneous data migration settings**.
**Important**  
You can view log files only if you selected the check box for **Turn on CloudWatch logs** when you created your data migration.
+ `Deleting` – AWS DMS is deleting the data migration. AWS DMS sets this status after you choose to delete the data migration on the **Actions** menu.
+ `Maintenance` – AWS DMS puts a data migration in this status when a new image is deployed to the underlying serverless container associated with your data migration task.

# Migrating data from MySQL databases with homogeneous data migrations in AWS DMS
<a name="dm-migrating-data-mysql"></a>

You can use [Homogeneous data migrations](data-migrations.md) to migrate a self-managed MySQL database to RDS for MySQL or Aurora MySQL. AWS DMS creates a serverless environment for your data migration. For different types of data migrations, AWS DMS uses different native MySQL database tools.

For homogeneous data migrations of the **Full load** type, AWS DMS uses `mydumper` to read data from your source database and store it on the disk attached to the serverless environment. After AWS DMS reads all your source data, it uses `myloader` in the target database to restore your data.

For homogeneous data migrations of the **Full load and change data capture (CDC)** type, AWS DMS uses `mydumper` to read data from your source database and store it on the disk attached to the serverless environment. After AWS DMS reads all your source data, it uses `myloader` in the target database to restore your data. After AWS DMS completes the full load, it sets up the binlog replication with the binlog position set to the start of the full load.

For homogeneous data migrations of the **Change data capture (CDC)** type, AWS DMS requires the **Native CDC start point** to start the replication. If you provide the native CDC start point, then AWS DMS captures changes from that point. Alternatively, choose **Immediately** in the data migration settings to automatically capture the start point for the replication when the actual data migration starts.

**Note**  
For a CDC-only migration to work properly, all source database schemas and objects must already be present on the target database. However, the target can have objects that aren't present on the source.
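
One way to satisfy this prerequisite, sketched below with hypothetical host names, credentials, and database name, is to copy the schema definitions with `mysqldump` before starting the CDC-only task.

```
# Dump schema objects only (no table data) from the source...
mysqldump --no-data --routines --triggers \
    -h source-host -u admin -p mydb > mydb-schema.sql

# ...and apply them to the RDS for MySQL or Aurora MySQL target.
mysql -h target-host -u admin -p mydb < mydb-schema.sql
```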

You can use the following code example to get the current log sequence number (LSN) in your MySQL database.

```
SHOW MASTER STATUS;
```

This query returns a binlog file name and the position. For the native start point, use a combination of the binlog file name and the position. For example, `mysql-bin-changelog.000024:373`. In this example, `mysql-bin-changelog.000024` is the binlog file name and `373` is the position where AWS DMS starts capturing changes.

The following diagram shows the process of using homogeneous data migrations in AWS DMS to migrate a MySQL database to RDS for MySQL or Aurora MySQL.

![\[An architecture diagram of the MySQL data migration with DMS Homogeneous Data Migrations.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/data-migrations-mysql.png)


# Migrating data from PostgreSQL databases with homogeneous data migrations in AWS DMS
<a name="dm-migrating-data-postgresql"></a>

You can use [Homogeneous data migrations](data-migrations.md) to migrate a self-managed PostgreSQL database to RDS for PostgreSQL or Aurora PostgreSQL. AWS DMS creates a serverless environment for your data migration. For different types of data migrations, AWS DMS uses different native PostgreSQL database tools.

For homogeneous data migrations of the **Full load** type, AWS DMS uses `pg_dump` to read data from your source database and store it on the disk attached to the serverless environment. After AWS DMS reads all your source data, it uses `pg_restore` in the target database to restore your data.

For homogeneous data migrations of the **Full load and change data capture (CDC)** type, AWS DMS uses `pg_dump` to read schema objects without table data from your source database and store them on the disk attached to the serverless environment. It then uses `pg_restore` in the target database to restore your schema objects. After AWS DMS completes the `pg_restore` process, it automatically switches to a publisher and subscriber model for logical replication with the `Initial Data Synchronization` option to copy initial table data directly from the source database to the target database, and then initiates ongoing replication. In this model, one or more subscribers subscribe to one or more publications on a publisher node.

For homogeneous data migrations of the **Change data capture (CDC)** type, AWS DMS requires the native start point to start the replication. If you provide the native start point, then AWS DMS captures changes from that point. Alternatively, choose **Immediately** in the data migration settings to automatically capture the start point for the replication when the actual data migration starts.

**Note**  
For a CDC-only migration to work properly, all source database schemas and objects must already be present on the target database. However, the target can have objects that aren't present on the source.
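
One way to satisfy this prerequisite, sketched below with hypothetical host names, user, and database name, is to copy the schema definitions with `pg_dump` before starting the CDC-only task.

```
# Dump schema objects only (no table data) from the source...
pg_dump --schema-only -h source-host -U admin -d mydb > mydb-schema.sql

# ...and apply them to the RDS for PostgreSQL or Aurora PostgreSQL target.
psql -h target-host -U admin -d mydb -f mydb-schema.sql
```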

You can use the following code example to get the native start point in your PostgreSQL database.

```
select confirmed_flush_lsn from pg_replication_slots where slot_name='migrate_to_target';
```

This query uses the `pg_replication_slots` view in your PostgreSQL database to capture the log sequence number (LSN) value.

After AWS DMS sets the status of your PostgreSQL homogeneous data migration to **Stopped**, **Failed**, or **Deleted**, the replication slot and the publication aren't removed. If you don't want to resume the migration, delete the replication slot and the publication by using the following commands.

```
SELECT pg_drop_replication_slot('migration_subscriber_{ARN}');
DROP PUBLICATION publication_{ARN};
```

The following diagram shows the process of using homogeneous data migrations in AWS DMS to migrate a PostgreSQL database to RDS for PostgreSQL or Aurora PostgreSQL.

![\[An architecture diagram of the PostgreSQL data migration with DMS Homogeneous Data Migrations.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/data-migrations-postgresql.png)


## Best practices for using a PostgreSQL database as a source for homogeneous data migrations
<a name="dm-migrating-data-postgresql.bp"></a>
+ To speed up initial data synchronization on the subscriber side for a full load and change data capture (CDC) task, adjust `max_logical_replication_workers` and `max_sync_workers_per_subscription`. Increasing these values enhances table synchronization speed.
  + `max_logical_replication_workers` – Specifies the maximum number of logical replication workers. This includes both the apply workers on the subscriber side and the table synchronization workers.
  + `max_sync_workers_per_subscription` – Increasing `max_sync_workers_per_subscription` affects only the number of tables that are synchronized in parallel, not the number of workers per table.
**Note**  
`max_logical_replication_workers` should not exceed `max_worker_processes`, and `max_sync_workers_per_subscription` should be less than or equal to `max_logical_replication_workers`.
+ For migrating large tables, consider dividing them into separate tasks by using selection rules. For example, you can assign each large table to its own task and group the small tables into a single task.
+ Monitor disk and CPU usage on the subscriber side to maintain optimal performance.
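
As a sketch of the tuning advice above, you can inspect and raise these parameters with commands like the following; the example values are assumptions, so size them for your own host.

```
-- Inspect the current values.
SHOW max_worker_processes;
SHOW max_logical_replication_workers;
SHOW max_sync_workers_per_subscription;

-- Illustrative settings for a migration with many tables.
-- max_logical_replication_workers takes effect only after a
-- server restart.
ALTER SYSTEM SET max_logical_replication_workers = 8;
ALTER SYSTEM SET max_sync_workers_per_subscription = 4;
```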

# Migrating data from MongoDB databases with homogeneous data migrations in AWS DMS
<a name="dm-migrating-data-mongodb"></a>

You can use [Homogeneous data migrations](data-migrations.md) to migrate a self-managed MongoDB database to Amazon DocumentDB. AWS DMS creates a serverless environment for your data migration. For different types of data migrations, AWS DMS uses different native MongoDB database tools.

For homogeneous data migrations of the **Full load** type, AWS DMS uses `mongodump` to read data from your source database and store it on the disk attached to the serverless environment. After AWS DMS reads all your source data, it uses `mongorestore` in the target database to restore your data.

For homogeneous data migrations of the **Full load and change data capture (CDC)** type, AWS DMS uses `mongodump` to read data from your source database and store it on the disk attached to the serverless environment. After AWS DMS reads all your source data, it uses `mongorestore` in the target database to restore your data. After AWS DMS completes the full load, it automatically switches to a publisher and subscriber model for logical replication. In this model, we recommend sizing the oplog to retain changes for at least 24 hours.

For homogeneous data migrations of the **Change data capture (CDC)** type, choose **Immediately** in the data migration settings to automatically capture the start point for the replication when the actual data migration starts.

**Note**  
For a MongoDB-compatible source, AWS DMS doesn't support the `create`, `rename`, and `drop` collection operations. To migrate any new or renamed collection, create a new data migration task for that collection.

The following diagram shows the process of using homogeneous data migrations in AWS DMS to migrate a MongoDB database to Amazon DocumentDB.

![\[An architecture diagram of the MongoDB data migration with DMS Homogeneous Data Migrations.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/data-migrations-mongodb.png)


# Target table preparation mode
<a name="dm-migrating-data-table-prep"></a>

You can select a target table preparation mode under **Advanced settings** when you create a data migration task in the AWS DMS console for PostgreSQL, MongoDB, and Amazon DocumentDB migrations.

## Drop tables on target
<a name="dm-migrating-data-table-prep.dtot"></a>

In Drop tables on target mode, AWS DMS homogeneous migration drops the target tables and recreates them before starting the migration. This approach ensures that the target tables are empty at the start of the migration. During homogeneous migrations, AWS DMS creates all secondary objects, including indexes defined in the source table metadata, before loading the data to ensure efficient data migration.

When using the Drop tables on target mode, you might need to configure the target database. For example, with a PostgreSQL target, AWS DMS cannot create a schema user for security reasons. In this case, you must pre-create the schema user to match the source, allowing AWS DMS to create the tables and assign them to a similar role as the source when the migration begins.

## Truncate
<a name="dm-migrating-data-table-prep.truncate"></a>

In Truncate mode, AWS DMS homogeneous migration truncates all existing target tables before the migration begins. This preserves the table structure. This mode is suitable for full load or full load plus CDC migrations where the target schema is pre-created. For an Amazon DocumentDB target, if the collection does not exist, AWS DMS creates the collection without indexes during the migration.

## Do nothing
<a name="dm-migrating-data-table-prep.donothing"></a>

In Do nothing mode, AWS DMS homogeneous migration assumes that the target tables are pre-created. If the target tables are not empty, data conflicts may occur during migration, potentially causing a DMS task error. In this mode, the table structure remains unchanged, and any existing data is preserved. Do nothing mode is suitable for CDC-only tasks when the target tables have been backfilled from the source, and ongoing replication is used to synchronize the source and target. For an Amazon DocumentDB target, if the collection does not exist, AWS DMS creates the collection without secondary indexes. Additionally, Do nothing mode can be used during the Full Load phase when migrating data from a MongoDB sharded collection to Amazon DocumentDB.

# Troubleshooting for homogeneous data migrations in AWS DMS
<a name="dm-troubleshooting"></a>

In the following list, you can find actions to take when you encounter issues with homogeneous data migrations in AWS DMS.

**Topics**
+ [I can't create a homogeneous data migration in AWS DMS](#dm-troubleshooting-create)
+ [I can't start a homogeneous data migration in AWS DMS](#dm-troubleshooting-dm-fails)
+ [I can't connect to the target database when running a data migration in AWS DMS](#dm-troubleshooting-connect-target)
+ [AWS DMS migrates views as tables in PostgreSQL](#dm-troubleshooting-views)

## I can't create a homogeneous data migration in AWS DMS
<a name="dm-troubleshooting-create"></a>

If you get an error message that says that AWS DMS can't connect to your data providers after you choose **Create data migration**, then make sure that you have configured the required IAM role. For more information, see [Creating an IAM role](dm-iam-resources.md#dm-resources-iam-role).

If you have configured the IAM role and still get this error message, then add this IAM role to your key user in the AWS KMS key configuration. For more information, see [Allows key users to use the KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-default.html#key-policy-default-allow-users) in the *AWS Key Management Service Developer Guide*.

## I can't start a homogeneous data migration in AWS DMS
<a name="dm-troubleshooting-dm-fails"></a>

If you get the `Failed` status when you start a data migration in your migration project, check the versions of your source and target data providers. To do so, run the `SELECT VERSION();` query in your MySQL or PostgreSQL database. Make sure that you use the supported database version.

For the list of supported source databases, see [Sources for DMS homogeneous data migrations](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.HomogeneousDataMigrations).

For the list of supported target databases, see [Targets for DMS homogeneous data migrations](CHAP_Introduction.Targets.md#CHAP_Introduction.Targets.HomogeneousDataMigrations).

If you use an unsupported database version, then upgrade your source or target database, and try again.

Check the error message for your data migration in the AWS DMS console. To do so, open your migration project, and choose your data migration. On the **Details** tab, check the **Last failure message** under **General**.

Finally, analyze the CloudWatch log. To do so, open your migration project, and choose your data migration. On the **Details** tab, choose **View CloudWatch logs**.

## I can't connect to the target database when running a data migration in AWS DMS
<a name="dm-troubleshooting-connect-target"></a>

If you get the **Unable to connect to target** error message, then perform the following actions.

1. Make sure that the security groups attached to your source and target databases contain rules that allow the required inbound and outbound traffic. For more information, see [Configuring ongoing data replication](vpc-peering.md#vpc-peering-ongoing-replication).

1. Verify the network access control list (ACL) and route table rules. 

1. Make sure that your database is accessible from the VPC that you created. Add the required public IP addresses to your VPC security groups, and allow inbound connections in your firewall.

1. On the **Data migrations** tab of your migration project, choose your data migration. Note the **Public IP address** under **Connectivity and security** on the **Details** tab. Then, allow access from the public IP address of your data migration in your source and target databases.

1. For ongoing data replication, make sure that your source and target databases can communicate with each other.
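
A quick way to confirm basic TCP reachability from a host on the same network path is a probe such as the following; the endpoint and port are hypothetical examples.

```
# Verify that the target database port answers (example PostgreSQL port).
nc -vz my-target.cluster-example.us-east-1.rds.amazonaws.com 5432
```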

For more information, see [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) in the *Amazon Virtual Private Cloud User Guide*.

## AWS DMS migrates views as tables in PostgreSQL
<a name="dm-troubleshooting-views"></a>

Homogeneous data migrations don't support migrating PostgreSQL views as views. For PostgreSQL, AWS DMS migrates views as tables.
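
As a workaround sketch, you can list the view definitions on the source and recreate them manually on the target after the migration completes; the query below assumes you want all views outside the system schemas.

```
-- List user-defined views and their definitions on the source database.
SELECT schemaname, viewname, definition
FROM pg_views
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
```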