

# Troubleshooting migration tasks in AWS Database Migration Service
<a name="CHAP_Troubleshooting"></a>

Following, you can find topics about troubleshooting issues with AWS Database Migration Service (AWS DMS). These topics can help you to resolve common issues using both AWS DMS and selected endpoint databases. 

If you have opened an AWS Support case, your support engineer might identify a potential issue with one of your endpoint database configurations. Your engineer might also ask you to run a support script to return diagnostic information about your database. For details about downloading, running, and uploading the diagnostic information from this type of support script, see [Working with diagnostic support scripts in AWS DMS](CHAP_SupportScripts.md).

For troubleshooting purposes, AWS DMS collects trace and dump files on the replication instance. You can provide these files to AWS Support if an issue occurs that requires troubleshooting. By default, DMS purges trace and dump files that are older than 30 days. To opt out of trace and dump file collection, open a case with AWS Support. 

**Topics**
+ [Migration tasks run slowly](#CHAP_Troubleshooting.General.SlowTask)
+ [Task status bar doesn't move](#CHAP_Troubleshooting.General.StatusBar)
+ [Task completes but nothing was migrated](#CHAP_Troubleshooting.General.NothingMigrated)
+ [Foreign keys and secondary indexes are missing](#CHAP_Troubleshooting.General.MissingSecondaryObjs)
+ [AWS DMS does not create CloudWatch logs](#CHAP_Troubleshooting.General.CWL)
+ [Issues occur with connecting to Amazon RDS](#CHAP_Troubleshooting.General.RDSConnection)
+ [Networking issues occur](#CHAP_Troubleshooting.General.Network)
+ [CDC is stuck after full load](#CHAP_Troubleshooting.General.CDCStuck)
+ [Primary key violation errors occur when you restart a task](#CHAP_Troubleshooting.General.PKErrors)
+ [Initial load of a schema fails](#CHAP_Troubleshooting.General.SchemaLoadFail)
+ [Tasks fail with an unknown error](#CHAP_Troubleshooting.General.TasksFail)
+ [Task restart loads tables from the beginning](#CHAP_Troubleshooting.General.RestartLoad)
+ [Number of tables per task causes issues](#CHAP_Troubleshooting.General.TableLimit)
+ [Tasks fail when a primary key is created on a LOB column](#CHAP_Troubleshooting.General.PKLOBColumn)
+ [Duplicate records occur on a target table without a primary key](#CHAP_Troubleshooting.General.DuplicateRecords)
+ [Source endpoints fall in the reserved IP range](#CHAP_Troubleshooting.General.ReservedIP)
+ [Timestamps are garbled in Amazon Athena queries](#CHAP_Troubleshooting.General.GarbledTimestamps)
+ [Troubleshooting issues with Oracle](#CHAP_Troubleshooting.Oracle)
+ [Troubleshooting issues with MySQL](#CHAP_Troubleshooting.MySQL)
+ [Troubleshooting issues with PostgreSQL](#CHAP_Troubleshooting.PostgreSQL)
+ [Troubleshooting issues with Microsoft SQL Server](#CHAP_Troubleshooting.SQLServer)
+ [Troubleshooting issues with Amazon Redshift](#CHAP_Troubleshooting.Redshift)
+ [Troubleshooting issues with Amazon Aurora MySQL](#CHAP_Troubleshooting.Aurora)
+ [Troubleshooting issues with SAP ASE](#CHAP_Troubleshooting.SAP)
+ [Troubleshooting issues with IBM Db2](#CHAP_Troubleshooting.Db2)
+ [Table suspended with error "Failed to build 'where' statement"](#CHAP_Troubleshooting.table.suspended)
+ [Troubleshooting latency issues in AWS Database Migration Service](CHAP_Troubleshooting_Latency.md)
+ [Working with diagnostic support scripts in AWS DMS](CHAP_SupportScripts.md)
+ [Working with the AWS DMS diagnostic support AMI](CHAP_SupportAmi.md)

## Migration tasks run slowly
<a name="CHAP_Troubleshooting.General.SlowTask"></a>

Several issues can cause a migration task to run slowly, or cause subsequent tasks to run slower than the initial task. 

The most common reason for a migration task running slowly is that there are inadequate resources allocated to the AWS DMS replication instance. To make sure that your instance has enough resources for the tasks you are running on it, check your replication instance's use of CPU, memory, swap files, and IOPS. For example, multiple tasks with Amazon Redshift as an endpoint are I/O intensive. You can increase IOPS for your replication instance or split your tasks across multiple replication instances for a more efficient migration.
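As a quick check, you can pull the replication instance's CPU metric from CloudWatch with the AWS CLI. The following is a sketch; the instance identifier and time range are placeholders, and you can repeat the command with `FreeableMemory` or `SwapUsage` as the metric name.

```
aws cloudwatch get-metric-statistics \
    --namespace "AWS/DMS" \
    --metric-name CPUUtilization \
    --dimensions Name=ReplicationInstanceIdentifier,Value=my-repl-instance \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-01T06:00:00Z \
    --period 300 \
    --statistics Average
```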

For more information about determining the size of your replication instance, see [Selecting the best size for a replication instance](CHAP_BestPractices.SizingReplicationInstance.md).

You can increase the speed of an initial migration load by doing the following:
+ If your target is an Amazon RDS DB instance, make sure that Multi-AZ isn't enabled for the target DB instance.
+ Turn off any automatic backups or logging on the target database during the load, and turn back on those features after your migration is complete.
+ If the feature is available on your target, use provisioned IOPS.
+ If your migration data contains LOBs, make sure that the task is optimized for LOB migration. For more information on optimizing for LOBs, see [Target metadata task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.md).
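For the first item in the list, you might temporarily disable Multi-AZ on an Amazon RDS target for the duration of the load. The following AWS CLI sketch uses a placeholder instance identifier; re-enable Multi-AZ after the migration completes.

```
aws rds modify-db-instance \
    --db-instance-identifier my-target-db \
    --no-multi-az \
    --apply-immediately
```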

## Task status bar doesn't move
<a name="CHAP_Troubleshooting.General.StatusBar"></a>

The task status bar gives an estimation of the task's progress. The quality of this estimate depends on the quality of the source database's table statistics; the better the table statistics, the more accurate the estimation. 

For a task with only one table that has no estimated rows statistic, AWS DMS can't provide any kind of percentage complete estimate. In this case, use the task state and the indication of rows loaded to confirm that the task is running and making progress.
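To see the per-table state and rows loaded, you can query the task's table statistics. The following AWS CLI sketch uses a placeholder task ARN.

```
aws dms describe-table-statistics \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK \
    --query "TableStatistics[].[SchemaName,TableName,TableState,FullLoadRows]" \
    --output table
```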

## Task completes but nothing was migrated
<a name="CHAP_Troubleshooting.General.NothingMigrated"></a>

Do the following if nothing was migrated after your task has completed.
+ Check if the user that created the endpoint has read access to the table you intend to migrate.
+ Check if the object you want to migrate is a table. If it is a view, update the table mappings and specify the table type as "view" or "all". For more information, see [Specifying table selection and transformations rules from the console](CHAP_Tasks.CustomizingTasks.TableMapping.Console.md). 
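For illustration, a table mapping that includes views might look like the following AWS CLI sketch. This assumes the `table-type` selection-rule property (values `table`, `view`, or `all`); the task ARN and schema name are placeholders, and the task must be stopped before you modify it.

```
aws dms modify-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK \
    --table-mappings '{
      "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-tables-and-views",
        "object-locator": { "schema-name": "MYSCHEMA", "table-name": "%" },
        "rule-action": "include",
        "table-type": "all"
      }]
    }'
```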

## Foreign keys and secondary indexes are missing
<a name="CHAP_Troubleshooting.General.MissingSecondaryObjs"></a>

 AWS DMS creates tables, primary keys, and in some cases unique indexes, but it doesn't create any other objects that aren't required to efficiently migrate the data from the source. For example, it doesn't create secondary indexes, non-primary key constraints, or data defaults. 

To migrate secondary objects from your database, use the database's native tools if you are migrating to the same database engine as your source database. If you are migrating to a different database engine, use the AWS Schema Conversion Tool (AWS SCT) to migrate secondary objects.

## AWS DMS does not create CloudWatch logs
<a name="CHAP_Troubleshooting.General.CWL"></a>

If your replication task doesn't create CloudWatch logs, make sure that your account has the `dms-cloudwatch-logs-role` role. If this role is not present, do the following to create it:

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Choose the **Roles** tab. Choose **Create role**.

1. In the **Select type of trusted entity** section, choose **AWS service**. 

1. In the **Choose a use case** section, choose **DMS**.

1. Choose **Next: Permissions**.

1. Enter **AmazonDMSCloudWatchLogsRole** in the search field, and check the box next to **AmazonDMSCloudWatchLogsRole**. This grants AWS DMS permissions to access CloudWatch.

1. Choose **Next: Tags**.

1. Choose **Next: Review**.

1. Enter **dms-cloudwatch-logs-role** for **Role name**. This name is case sensitive.

1. Choose **Create role**.

**Note**  
If your account is part of AWS Organizations, verify that Service Control Policies (SCPs) are not restricting your IAM role permissions. SCPs can override and limit IAM role permissions even when they are properly configured.
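The same role can be created with the AWS CLI instead of the console. The following is a sketch; it assumes your credentials are allowed to create IAM roles.

```
# Trust policy that lets AWS DMS assume the role.
cat > dms-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "dms.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

# The role name must match exactly; it is case sensitive.
aws iam create-role \
    --role-name dms-cloudwatch-logs-role \
    --assume-role-policy-document file://dms-trust-policy.json

aws iam attach-role-policy \
    --role-name dms-cloudwatch-logs-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole
```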

## Issues occur with connecting to Amazon RDS
<a name="CHAP_Troubleshooting.General.RDSConnection"></a>

There can be several reasons why you can't connect to an Amazon RDS DB instance that you set as a source or target. Some items to check follow:
+ Check that the user name and password combination is correct.
+ Check that the endpoint value shown in the Amazon RDS console for the instance is the same as the endpoint identifier you used to create the AWS DMS endpoint.
+ Check that the port value shown in the Amazon RDS console for the instance is the same as the port assigned to the AWS DMS endpoint.
+ Check that the security group assigned to the Amazon RDS DB instance allows connections from the AWS DMS replication instance.
+ If the AWS DMS replication instance and the Amazon RDS DB instance aren't in the same virtual private cloud (VPC), check that the DB instance is publicly accessible.
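After working through this list, you can retest the endpoint from the replication instance. The following AWS CLI sketch uses placeholder ARNs.

```
aws dms test-connection \
    --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEINSTANCE \
    --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINT

# Poll for the result of the connection test.
aws dms describe-connections \
    --filters Name=endpoint-arn,Values=arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLEENDPOINT
```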

### Error message: Incorrect thread connection string: Incorrect thread value 0
<a name="CHAP_Troubleshooting.General.RDSConnection.ConnectionString"></a>

This error can often occur when you are testing the connection to an endpoint. This error indicates that there is an error in the connection string. An example is a space after the host IP address. Another is a bad character copied into the connection string.

## Networking issues occur
<a name="CHAP_Troubleshooting.General.Network"></a>

The most common networking issue involves the VPC security group used by the AWS DMS replication instance. By default, this security group has rules that allow egress to 0.0.0.0/0 on all ports. In many cases, you modify this security group or use your own security group. If so, at a minimum, make sure to give egress to the source and target endpoints on their respective database ports.

Other configuration-related issues can include the following:
+  **Replication instance and both source and target endpoints in the same VPC** – The security group used by the endpoints must allow ingress on the database port from the replication instance. Make sure that the security group used by the replication instance has ingress to the endpoints. Or you can create a rule in the security group used by the endpoints that allows the private IP address of the replication instance access. 
+  **Source endpoint is outside the VPC used by the replication instance (using an internet gateway)** – The VPC security group must include routing rules that send traffic that isn't for the VPC to the internet gateway. In this configuration, the connection to the endpoint appears to come from the public IP address on the replication instance. 
+  **Source endpoint is outside the VPC used by the replication instance (using a NAT gateway)** – You can configure a network address translation (NAT) gateway using a single elastic IP address bound to a single elastic network interface. This NAT gateway receives a NAT identifier (nat-#####). 

  In some cases, the VPC includes a default route to that NAT gateway instead of the internet gateway. In such cases, the replication instance instead appears to contact the database endpoint using the public IP address of the NAT gateway. Here, the ingress to the database endpoint outside the VPC needs to allow ingress from the NAT address instead of the replication instance's public IP address. 
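For the same-VPC case, the ingress rule on the endpoint's security group can reference the replication instance's security group directly. The following AWS CLI sketch uses placeholder security group IDs and assumes a PostgreSQL endpoint on port 5432.

```
# Allow inbound traffic on the database port from the
# replication instance's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5432 \
    --source-group sg-0fedcba9876543210
```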

For information about using your own on-premises name server, see [Using your own on-premises name server](CHAP_BestPractices.md#CHAP_BestPractices.Rte53DNSResolver). 

## CDC is stuck after full load
<a name="CHAP_Troubleshooting.General.CDCStuck"></a>

Slow or stuck replication changes can occur after a full load migration when several AWS DMS settings conflict with each other. 

For example, suppose that the **Target table preparation mode** parameter is set to **Do nothing** or **Truncate**. In this case, you have instructed AWS DMS to do no setup on the target tables, including creating primary and unique indexes. If you haven't created primary or unique keys on the target tables, AWS DMS does a full table scan for each update. This approach can affect performance significantly.

## Primary key violation errors occur when you restart a task
<a name="CHAP_Troubleshooting.General.PKErrors"></a>

This error can occur when data remains in the target database from a previous migration task. If the **Target table preparation mode** option is set to **Do nothing**, AWS DMS doesn't do any preparation on the target table, including cleaning up data inserted from a previous task. 

To restart your task and avoid these errors, remove rows inserted into the target tables from the previous running of the task.
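After removing the leftover rows, you can restart the task so that the full load runs again. The following AWS CLI sketch uses a placeholder task ARN.

```
aws dms start-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK \
    --start-replication-task-type reload-target
```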

## Initial load of a schema fails
<a name="CHAP_Troubleshooting.General.SchemaLoadFail"></a>

In some cases, the initial load of your schemas might fail with an error of `Operation:getSchemaListDetails:errType=, status=0, errMessage=, errDetails=`. 

In such cases, the user account used by AWS DMS to connect to the source endpoint doesn't have the necessary permissions. 

## Tasks fail with an unknown error
<a name="CHAP_Troubleshooting.General.TasksFail"></a>

Unknown errors can have many causes. However, the issue often involves insufficient resources allocated to the AWS DMS replication instance. 

To make sure that your replication instance has enough resources to perform the migration, check your instance's use of CPU, memory, swap files, and IOPS. For more information on monitoring, see [AWS Database Migration Service metrics](CHAP_Monitoring.md#CHAP_Monitoring.Metrics).

## Task restart loads tables from the beginning
<a name="CHAP_Troubleshooting.General.RestartLoad"></a>

When a task is restarted, AWS DMS reloads a table from the beginning if the initial full load of that table didn't complete.

## Number of tables per task causes issues
<a name="CHAP_Troubleshooting.General.TableLimit"></a>

There is no set limit on the number of tables per replication task. However, we recommend limiting the number of tables in a task to less than 60,000, as a rule of thumb. Resource use can often be a bottleneck when a single task uses more than 60,000 tables. 

## Tasks fail when a primary key is created on a LOB column
<a name="CHAP_Troubleshooting.General.PKLOBColumn"></a>

In FULL LOB or LIMITED LOB mode, AWS DMS doesn't support replication of primary keys that are LOB data types. 

DMS initially migrates a row with a LOB column as null, then later updates the LOB column. So, when the primary key is created on a LOB column, the initial insert fails since the primary key can't be null. As a workaround, add another column as primary key and remove the primary key from the LOB column.

## Duplicate records occur on a target table without a primary key
<a name="CHAP_Troubleshooting.General.DuplicateRecords"></a>

Running a full load and CDC task can create duplicate records on target tables that don't have a primary key or unique index. To avoid duplicating records on target tables during full load and CDC tasks, make sure that target tables have a primary key or unique index.

## Source endpoints fall in the reserved IP range
<a name="CHAP_Troubleshooting.General.ReservedIP"></a>

If an AWS DMS source database uses an IP address within the reserved IP range of 192.168.0.0/24, the source endpoint connection test fails. The following steps provide a possible workaround:

1. Find an Amazon EC2 instance that isn't in the reserved range and that can communicate with the source database in the 192.168.0.0/24 range.

1. Install a socat proxy and run it. The following shows an example.

   ```
   yum install socat

   socat -d -d -lmlocal2 tcp4-listen:database_port,bind=0.0.0.0,reuseaddr,fork tcp4:source_database_ip_address:database_port &
   ```

Use the Amazon EC2 instance IP address and the database port given preceding for the AWS DMS endpoint. Make sure that the endpoint has the security group that allows AWS DMS to access the database port. Note that the proxy needs to be running for the duration of your DMS task execution. Depending on the use case, you may need to automate the proxy setup.

## Timestamps are garbled in Amazon Athena queries
<a name="CHAP_Troubleshooting.General.GarbledTimestamps"></a>

If timestamps are garbled in Athena queries, use the AWS Management Console or the [ModifyEndpoint](https://docs.aws.amazon.com/dms/latest/APIReference/API_ModifyEndpoint.html) action to set the `parquetTimestampInMillisecond` value for your Amazon S3 endpoint to `true`. For more information, see [S3Settings](https://docs.aws.amazon.com/dms/latest/APIReference/API_S3Settings.html).
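The same setting can be applied from the AWS CLI. The following is a sketch; the endpoint ARN is a placeholder.

```
aws dms modify-endpoint \
    --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLES3ENDPOINT \
    --s3-settings '{"ParquetTimestampInMillisecond": true}'
```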

## Troubleshooting issues with Oracle
<a name="CHAP_Troubleshooting.Oracle"></a>

Following, you can learn about troubleshooting issues specific to using AWS DMS with Oracle databases.

**Topics**
+ [Pulling data from views](#CHAP_Troubleshooting.Oracle.Views)
+ [Migrating LOBs from Oracle 12c](#CHAP_Troubleshooting.Oracle.12cLOBs)
+ [Switching between Oracle LogMiner and Binary Reader](#CHAP_Troubleshooting.Oracle.LogMinerBinaryReader)
+ [Error: Oracle CDC stopped 122301 oracle CDC maximum retry counter exceeded.](#CHAP_Troubleshooting.Oracle.CDCStopped)
+ [Automatically add supplemental logging to an Oracle source endpoint](#CHAP_Troubleshooting.Oracle.AutoSupplLogging)
+ [LOB changes aren't being captured](#CHAP_Troubleshooting.Oracle.LOBChanges)
+ [Error: ORA-12899: Value too large for column *column-name*](#CHAP_Troubleshooting.Oracle.ORA12899)
+ [NUMBER data type being misinterpreted](#CHAP_Troubleshooting.Oracle.Numbers)
+ [Records missing during full load](#CHAP_Troubleshooting.Oracle.RecordsMissing)
+ [Table Error](#CHAP_Troubleshooting.Oracle.TableError)
+ [Error: Cannot retrieve Oracle archived Redo log destination ids](#CHAP_Troubleshooting.Oracle.RedoLogError)
+ [Evaluating read performance of Oracle redo or archive logs](#CHAP_Troubleshooting.Oracle.ReadPerformUtil)
+ [Failed to get LOB data](#CHAP_Troubleshooting.Oracle.LOBdata)

### Pulling data from views
<a name="CHAP_Troubleshooting.Oracle.Views"></a>

You can pull data once from a view; you can't use it for ongoing replication. To be able to extract data from views, you must add the following code to the **Endpoint settings** section of the Oracle source endpoint page. When you extract data from a view, the view is shown as a table on the target schema.

```
"ExposeViews": true
```

### Migrating LOBs from Oracle 12c
<a name="CHAP_Troubleshooting.Oracle.12cLOBs"></a>

AWS DMS can use two methods to capture changes to an Oracle database, Binary Reader and Oracle LogMiner. By default, AWS DMS uses Oracle LogMiner to capture changes. However, on Oracle 12c, Oracle LogMiner doesn't support LOB columns. To capture changes to LOB columns on Oracle 12c, use Binary Reader.

### Switching between Oracle LogMiner and Binary Reader
<a name="CHAP_Troubleshooting.Oracle.LogMinerBinaryReader"></a>

AWS DMS can use two methods to capture changes to a source Oracle database, Binary Reader and Oracle LogMiner. Oracle LogMiner is the default. To switch to using Binary Reader for capturing changes, do the following:

**To use Binary Reader for capturing changes**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Endpoints**.

1. Choose the Oracle source endpoint that you want to use Binary Reader with.

1. Choose **Modify**.

1. Choose **Advanced**, and then add the following code for **Extra connection attributes**.

   ```
   useLogminerReader=N
   ```

1. Use an Oracle developer tool such as SQL-Plus to grant the following additional privilege to the AWS DMS user account used to connect to the Oracle endpoint.

   ```
   SELECT ON V_$TRANSPORTABLE_PLATFORM
   ```

### Error: Oracle CDC stopped 122301 oracle CDC maximum retry counter exceeded.
<a name="CHAP_Troubleshooting.Oracle.CDCStopped"></a>

This error occurs when the Oracle archive logs that AWS DMS needs were removed from your server before AWS DMS could use them to capture changes. Increase the log retention setting on your database server. For example, the following procedure increases log retention on an Amazon RDS DB instance to 24 hours.

```
exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24);
```

### Automatically add supplemental logging to an Oracle source endpoint
<a name="CHAP_Troubleshooting.Oracle.AutoSupplLogging"></a>

By default, AWS DMS has supplemental logging turned off. To automatically turn on supplemental logging for a source Oracle endpoint, do the following:

**To add supplemental logging to a source Oracle endpoint**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Endpoints**.

1. Choose the Oracle source endpoint that you want to add supplemental logging to.

1. Choose **Modify**.

1. Choose **Advanced**, and then add the following code to the **Extra connection attributes** text box:

   ```
   addSupplementalLogging=Y
   ```

1. Choose **Modify**.

### LOB changes aren't being captured
<a name="CHAP_Troubleshooting.Oracle.LOBChanges"></a>

Currently, a table must have a primary key for AWS DMS to capture LOB changes. If a table that contains LOBs doesn't have a primary key, there are several actions you can take to capture LOB changes:
+ Add a primary key to the table. This can be as simple as adding an ID column and populating it with a sequence using a trigger.
+ Create a materialized view of the table that includes a system-generated ID as the primary key and migrate the materialized view rather than the table.
+ Create a logical standby, add a primary key to the table, and migrate from the logical standby.

### Error: ORA-12899: Value too large for column *column-name*
<a name="CHAP_Troubleshooting.Oracle.ORA12899"></a>

The error "ORA-12899: value too large for column *column-name*" is typically caused by one of two issues. 

In the first, there's a mismatch in the character sets used by the source and target databases. 

In the second, national language support (NLS) settings differ between the two databases. A common cause of this error is when the source database `NLS_LENGTH_SEMANTICS` parameter is set to CHAR and the target database `NLS_LENGTH_SEMANTICS` parameter is set to BYTE.

### NUMBER data type being misinterpreted
<a name="CHAP_Troubleshooting.Oracle.Numbers"></a>

The Oracle NUMBER data type is converted into various AWS DMS data types, depending on the precision and scale of NUMBER. These conversions are documented in [Source data types for Oracle](CHAP_Source.Oracle.md#CHAP_Source.Oracle.DataTypes). The way the NUMBER type is converted can also be affected by endpoint settings for the source Oracle endpoint. These endpoint settings are documented in [Endpoint settings when using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.ConnectionAttrib).

### Records missing during full load
<a name="CHAP_Troubleshooting.Oracle.RecordsMissing"></a>

When performing a full load, AWS DMS looks for open transactions at the database level and waits for them to be committed. For example, based on the task setting `TransactionConsistencyTimeout=600`, AWS DMS waits up to 10 minutes, even if the open transaction is on a table not included in table mapping. However, if an open transaction is on a table included in table mapping and the transaction isn't committed in time, the result is missing records in the target table.

You can modify the `TransactionConsistencyTimeout` task setting and increase wait time if you know that open transactions will take longer to commit.

Also, note the default value of the `FailOnTransactionConsistencyBreached` task setting is `false`. This means AWS DMS continues to apply other transactions but open transactions are missed. If you want the task to fail when open transactions aren't closed in time, you can set `FailOnTransactionConsistencyBreached` to `true`.
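Both settings can be adjusted together in the task settings JSON. The following AWS CLI sketch uses a placeholder task ARN and assumes `TransactionConsistencyTimeout` lives under `FullLoadSettings` and `FailOnTransactionConsistencyBreached` under `ErrorBehavior`; the task must be stopped before you modify its settings.

```
aws dms modify-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK \
    --replication-task-settings '{
      "FullLoadSettings": { "TransactionConsistencyTimeout": 1200 },
      "ErrorBehavior": { "FailOnTransactionConsistencyBreached": true }
    }'
```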

### Table Error
<a name="CHAP_Troubleshooting.Oracle.TableError"></a>

`Table Error` appears in table statistics during replication if a `WHERE` clause doesn't reference a primary key column, and supplemental logging isn't used for all columns. 

To fix this issue, turn on supplemental logging for all columns of the referenced table. For more information, see [Setting up supplemental logging](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Self-Managed.Configuration.SupplementalLogging).

### Error: Cannot retrieve Oracle archived Redo log destination ids
<a name="CHAP_Troubleshooting.Oracle.RedoLogError"></a>

This error occurs when your Oracle source doesn't have any archive logs generated or `V$ARCHIVED_LOG` is empty. You can resolve the error by switching logs manually.

For an Amazon RDS database, run the following procedure to switch log files. The `switch_logfile` procedure doesn't have parameters.

```
exec rdsadmin.rdsadmin_util.switch_logfile;
```

For a self-managed Oracle source database, use the following command to force a log switch.

```
ALTER SYSTEM SWITCH LOGFILE ;
```

### Evaluating read performance of Oracle redo or archive logs
<a name="CHAP_Troubleshooting.Oracle.ReadPerformUtil"></a>

If you experience performance issues with your Oracle source, you can evaluate the read performance of your Oracle redo or archive logs to find ways to improve performance. To test the redo or archive log read performance, use the [AWS DMS diagnostic Amazon machine image](CHAP_SupportAmi.md) (AMI).

You can use the AWS DMS diagnostic AMI to do the following:
+ Use the bFile method to evaluate redo log file performance.
+ Use the LogMiner method to evaluate redo log file performance.
+ Use the PL/SQL (`dbms_lob.read`) method to evaluate redo log file performance.
+ Use Single-thread to evaluate read performance on ASMFile.
+ Use Multi-threads to evaluate read performance on ASMFile.
+ Use the direct OS Readfile() (Windows) or Pread64 (Linux) function to evaluate the redo log file.

Then you can take remedial steps based upon the results.

**To test read performance on an Oracle redo or archive log file**

1. Create an AWS DMS diagnostic AMI Amazon EC2 instance and connect to it.

   For more information, see [Working with the AWS DMS diagnostic AMI](CHAP_SupportAmi.md).

1. Run the **awsreplperf** command.

   ```
   $ awsreplperf
   ```

   The command displays the AWS DMS Oracle Read Performance Utility options.

   ```
   0.	Quit
   1.	Read using Bfile
   2.	Read using LogMiner
   3.	Read file PL/SQL (dms_lob.read)
   4.	Read ASMFile Single Thread
   5.	Read ASMFile Multi Thread
   6.	Readfile() function
   ```

1. Select an option from the list.

1. Enter the following database connection and archive log information.

   ```
   Oracle user name [system]:
   Oracle password:
   
   Oracle connection name [orcllx]:
   Connection format hostname:port/instance
   
   Oracle event trace? [N]: 
   Default N = No or Y = Yes
   
   Path to redo or archive log file []:
   ```

1. Examine the output displayed for relevant read performance information. For example, the following shows output that can result from selecting option number 2, **Read using LogMiner**.  
![\[read performance utility output\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-oracle-read-perf-util.png)

1. To exit the utility, enter **0** (zero).

**Next steps**
+ When results show that read speed is below an acceptable threshold, run the [Oracle diagnostic support script](CHAP_SupportScripts.Oracle.md) on the endpoint, and review the Wait Time, Load Profile, and IO Profile sections. Then adjust any abnormal configuration that might improve read performance. For example, if your redo log files are up to 2 GB, try increasing `LOG_BUFFER` to 200 MB to help improve performance.
+ Review [AWS DMS Best Practices](CHAP_BestPractices.md) to make sure your DMS replication instance, task, and endpoints are configured optimally. 

### Failed to get LOB data
<a name="CHAP_Troubleshooting.Oracle.LOBdata"></a>

Large object (LOB) lookup failures in AWS DMS occur under specific circumstances during migration. During the full load phase, AWS DMS uses the lookup method to migrate LOB data when the task is configured for FULL LOB mode. During the change data capture (CDC) phase, AWS DMS always uses the lookup method, regardless of LOB settings.

With the lookup method, AWS DMS first replicates each row without the LOB column, retrieves the LOB data with a **SELECT** command, and then runs an **UPDATE** command to populate the LOB column on the target. This sequential INSERT and UPDATE pattern is the lookup behavior. LOB lookup during the CDC phase doesn't apply to all database engines, and depending on the data size, tasks might replicate LOB data inline along with the rest of the row.

A lookup failure during migration produces the error message "Failed to get lob data, going to set to null". When this failure occurs, the affected rows are only partially migrated, and the LOB columns on the target appear as NULL values. Several factors can trigger these failures:
+ Source row deletion occurring before DMS completes the Lookup process
+ Intermittent connectivity issues disrupting lookup threads
+ DMS lookup queries entering a waiting state due to source table locking scenarios

To address these LOB Lookup failures, you can do the following:
+ Implement Limited LOB settings during the full load phase to eliminate lookup behavior and enhance performance.
+ Reload affected tables when encountering lookup failure messages and partial data on the target.
+ For issues occurring due to intermittent network or source database availability problems, restart the task to resolve all table data inconsistencies.

Handling LOB lookup failures in these ways makes data migration more reliable and helps maintain data integrity throughout the process.
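For the reload option, you can target just the affected tables rather than restarting the whole task. The following AWS CLI sketch uses a placeholder task ARN and table names.

```
aws dms reload-tables \
    --replication-task-arn arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK \
    --tables-to-reload SchemaName=MYSCHEMA,TableName=MYTABLE
```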

## Troubleshooting issues with MySQL
<a name="CHAP_Troubleshooting.MySQL"></a>

Following, you can learn about troubleshooting issues specific to using AWS DMS with MySQL databases.

**Topics**
+ [CDC task failing for Amazon RDS DB instance endpoint because binary logging disabled](#CHAP_Troubleshooting.MySQL.CDCTaskFail)
+ [Connections to a target MySQL instance are disconnected during a task](#CHAP_Troubleshooting.MySQL.ConnectionDisconnect)
+ [Adding autocommit to a MySQL-compatible endpoint](#CHAP_Troubleshooting.MySQL.Autocommit)
+ [Disable foreign keys on a target MySQL-compatible endpoint](#CHAP_Troubleshooting.MySQL.DisableForeignKeys)
+ [Characters replaced with question mark](#CHAP_Troubleshooting.MySQL.CharacterReplacement)
+ ["Bad event" log entries](#CHAP_Troubleshooting.MySQL.BadEvent)
+ [Change data capture with MySQL 5.5](#CHAP_Troubleshooting.MySQL.MySQL55CDC)
+ [Increasing binary log retention for Amazon RDS DB instances](#CHAP_Troubleshooting.MySQL.BinLogRetention)
+ [Log message: Some changes from the source database had no impact when applied to the target database.](#CHAP_Troubleshooting.MySQL.NoImpact)
+ [Error: Identifier too long](#CHAP_Troubleshooting.MySQL.IDTooLong)
+ [Error: Unsupported character set causes field data conversion to fail](#CHAP_Troubleshooting.MySQL.UnsupportedCharacterSet)
+ [Error: Codepage 1252 to UTF8 [120112] a field data conversion failed](#CHAP_Troubleshooting.MySQL.DataConversionFailed)
+ [Indexes, Foreign Keys, or Cascade Updates or Deletes Not Migrated](#CHAP_Troubleshooting.MySQL.FKsAndIndexes)

### CDC task failing for Amazon RDS DB instance endpoint because binary logging disabled
<a name="CHAP_Troubleshooting.MySQL.CDCTaskFail"></a>

This issue occurs with Amazon RDS DB instances when automated backups are disabled, because disabling automated backups also disables binary logging. Enable automated backups by setting the backup retention period to a nonzero value.
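
To confirm whether binary logging is enabled on the source instance, you can run the following query on the MySQL instance. A value of `OFF` indicates that binary logging is disabled.

```
SHOW VARIABLES LIKE 'log_bin';
```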

### Connections to a target MySQL instance are disconnected during a task
<a name="CHAP_Troubleshooting.MySQL.ConnectionDisconnect"></a>

If you have a task with LOBs that is getting disconnected from a MySQL target, you might see the following type of errors in the task log. 

```
[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 08S01 NativeError: 
2013 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.7.16-log]Lost connection 
to MySQL server during query [122502] ODBC general error.
```

```
 [TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 
2006 Message: [MySQL][ODBC 5.3(w) Driver]MySQL server has gone away 
[122502] ODBC general error.
```

In this case, you might need to adjust some of your task settings.

To solve the issue where a task is being disconnected from a MySQL target, do the following:
+ Check that you have your database variable `max_allowed_packet` set large enough to hold your largest LOB.
+ Check that you have the following variables set to have a large timeout value. We suggest you use a value of at least 5 minutes for each of these variables.
  + `net_read_timeout` 
  + `net_write_timeout` 
  + `wait_timeout` 

For information about setting MySQL system variables, see [Server System Variables](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html) in the [MySQL documentation](https://dev.mysql.com/).
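
For example, on a self-managed MySQL target you might set these variables as follows. The values shown are illustrative; on Amazon RDS or Aurora targets, set these variables in the DB parameter group instead.

```
SET GLOBAL max_allowed_packet = 1073741824; -- 1 GB, large enough for the largest LOB
SET GLOBAL net_read_timeout = 300;          -- 5 minutes
SET GLOBAL net_write_timeout = 300;         -- 5 minutes
SET GLOBAL wait_timeout = 300;              -- 5 minutes
```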

### Adding autocommit to a MySQL-compatible endpoint
<a name="CHAP_Troubleshooting.MySQL.Autocommit"></a>

You can enable autocommit on a target MySQL-compatible endpoint by adding an extra connection attribute, as described in the following procedure.

**To add autocommit to a target MySQL-compatible endpoint**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Endpoints**.

1. Choose the MySQL-compatible target endpoint that you want to add autocommit to.

1. Choose **Modify**.

1. Choose **Advanced**, and then add the following code to the **Extra connection attributes** text box:

   ```
   Initstmt= SET AUTOCOMMIT=1
   ```

1. Choose **Modify**.

### Disable foreign keys on a target MySQL-compatible endpoint
<a name="CHAP_Troubleshooting.MySQL.DisableForeignKeys"></a>

You can disable foreign key checks on MySQL by adding the following to the **Extra Connection Attributes** in the **Advanced** section of the target MySQL, Amazon Aurora MySQL-Compatible Edition, or MariaDB endpoint.

**To disable foreign keys on a target MySQL-compatible endpoint**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Endpoints**.

1. Choose the MySQL, Aurora MySQL, or MariaDB target endpoint for which you want to disable foreign keys.

1. Choose **Modify**.

1. Choose **Advanced**, and then add the following code to the **Extra connection attributes** text box:

   ```
   Initstmt=SET FOREIGN_KEY_CHECKS=0
   ```

1. Choose **Modify**.

### Characters replaced with question mark
<a name="CHAP_Troubleshooting.MySQL.CharacterReplacement"></a>

The most common situation that causes this issue is when the source endpoint characters have been encoded by a character set that AWS DMS doesn't support.

### "Bad event" log entries
<a name="CHAP_Troubleshooting.MySQL.BadEvent"></a>

"Bad event" entries in the migration logs usually indicate that an unsupported data definition language (DDL) operation was attempted on the source database endpoint. Unsupported DDL operations cause an event that the replication instance can't skip, so a bad event is logged. 

To fix this issue, restart the task from the beginning. Doing this reloads the tables and starts capturing changes at a point after the unsupported DDL operation was issued.

### Change data capture with MySQL 5.5
<a name="CHAP_Troubleshooting.MySQL.MySQL55CDC"></a>

AWS DMS change data capture (CDC) for Amazon RDS MySQL-compatible databases requires full image row-based binary logging, which isn't supported in MySQL version 5.5 or lower. To use AWS DMS CDC, you must upgrade your Amazon RDS DB instance to MySQL version 5.6.

### Increasing binary log retention for Amazon RDS DB instances
<a name="CHAP_Troubleshooting.MySQL.BinLogRetention"></a>

AWS DMS requires the retention of binary log files for change data capture. To increase log retention on an Amazon RDS DB instance, use the following procedure. The following example increases the binary log retention to 24 hours. 

```
call mysql.rds_set_configuration('binlog retention hours', 24);
```
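
To confirm the current retention setting, you can call the following stored procedure on the same instance.

```
call mysql.rds_show_configuration;
```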

### Log message: Some changes from the source database had no impact when applied to the target database.
<a name="CHAP_Troubleshooting.MySQL.NoImpact"></a>

When AWS DMS updates a MySQL database column's value to its existing value, a message of `zero rows affected` is returned from MySQL. This behavior is unlike other database engines such as Oracle and SQL Server. These engines update one row, even when the replacing value is the same as the current one. 

### Error: Identifier too long
<a name="CHAP_Troubleshooting.MySQL.IDTooLong"></a>

The following error occurs when an identifier is too long:

```
TARGET_LOAD E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 
1059 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.6.10]Identifier 
name 'name' is too long 122502 ODBC general error. (ar_odbc_stmt.c:4054)
```

In some cases, you set AWS DMS to create the tables and primary keys in the target database. In these cases, DMS currently doesn't use the same names for the primary keys that were used in the source database. Instead, DMS creates the primary key name based on the table name. When the table name is long, the autogenerated identifier created can be longer than the allowed limits for MySQL. 

To solve this issue, the current approach is to first precreate the tables and primary keys in the target database. Then use a task with the task setting **Target table preparation mode** set to **Do nothing** or **Truncate** to populate the target tables.
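
For example, you might precreate the target table yourself so that no autogenerated identifier exceeds MySQL's 64-character limit. The table and column names here are hypothetical.

```
CREATE TABLE my_long_table_name (
  id INT NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id)
);
```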

### Error: Unsupported character set causes field data conversion to fail
<a name="CHAP_Troubleshooting.MySQL.UnsupportedCharacterSet"></a>

The following error occurs when an unsupported character set causes a field data conversion to fail:

```
"[SOURCE_CAPTURE ]E: Column 'column-name' uses an unsupported character set [120112] 
A field data conversion failed. (mysql_endpoint_capture.c:2154)
```

Check your database's parameters related to connections. The following command displays these character set parameters.

```
SHOW VARIABLES LIKE '%char%';
```

### Error: Codepage 1252 to UTF8 [120112] a field data conversion failed
<a name="CHAP_Troubleshooting.MySQL.DataConversionFailed"></a>

The following error can occur during a migration if you have characters outside codepage 1252 in the source MySQL database.

```
[SOURCE_CAPTURE ]E: Error converting column 'column_xyz' in table
'table_xyz' with codepage 1252 to UTF8 [120112] A field data conversion failed. 
(mysql_endpoint_capture.c:2248)
```

 As a workaround, you can use the `CharsetMapping` extra connection attribute with your source MySQL endpoint to specify character set mapping. You might need to restart the AWS DMS migration task from the beginning if you add this endpoint setting. 

For example, the following endpoint setting could be used for a MySQL source endpoint where the source character set is `Utf8` or `latin1`. 65001 is the UTF8 code page identifier. 

```
CharsetMapping=utf8,65001
CharsetMapping=latin1,65001
```

### Indexes, Foreign Keys, or Cascade Updates or Deletes Not Migrated
<a name="CHAP_Troubleshooting.MySQL.FKsAndIndexes"></a>

AWS DMS does not support migrating secondary objects such as indexes and foreign keys. To replicate changes made to child tables from a cascade update or delete operation, you need to have the triggering foreign key constraint active on the target table. To work around this limitation, create the foreign key manually on the target table. Then, either create a single task for full-load and CDC, or two separate tasks for full load and CDC, as described following:

#### Create a single task supporting full load and CDC
<a name="CHAP_Troubleshooting.MySQL.FKsAndIndexes.FullLoadPlusCDC"></a>

This procedure describes how to migrate foreign keys and indexes using a single task for full load and CDC.

**Create a full load and CDC task**

1. Manually create the tables with foreign keys and indexes on the target to match the source tables.

1. Add the following ECA to the target AWS DMS endpoint:

   ```
   Initstmt=SET FOREIGN_KEY_CHECKS=0;
   ```

1. Create the AWS DMS task with `TargetTablePrepMode` set to `DO_NOTHING`.

1. Set the `Stop task after full load completes` setting to `StopTaskCachedChangesApplied`.

1. Start the task. AWS DMS stops the task automatically after it completes the full load and applies any cached changes.

1. Remove the `SET FOREIGN_KEY_CHECKS` ECA you added previously.

1. Resume the task. The task enters the CDC phase and applies ongoing changes from the source database to the target.
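
In the JSON task settings, steps 3 and 4 of this procedure correspond to a fragment like the following.

```
"FullLoadSettings": {
  "TargetTablePrepMode": "DO_NOTHING",
  "StopTaskCachedChangesApplied": true,
  "StopTaskCachedChangesNotApplied": false
}
```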

#### Create full load and CDC tasks separately
<a name="CHAP_Troubleshooting.MySQL.FKsAndIndexes.Separate"></a>

These procedures describe how to migrate foreign keys and indexes using separate tasks for full load and CDC.

**Create a full load task**

1. Manually create the tables with foreign keys and indexes on the target to match the source tables.

1. Add the following ECA to the target AWS DMS endpoint:

   ```
   Initstmt=SET FOREIGN_KEY_CHECKS=0;
   ```

1. Create the AWS DMS task with the `TargetTablePrepMode` parameter set to `DO_NOTHING` and `EnableValidation` set to `FALSE`.

1. Start the task. AWS DMS stops the task automatically after it completes the full load and applies any cached changes.

1. After the task completes, note the full load start time in UTC, or the binary log file name and position, to use when starting the CDC-only task. Refer to the logs to get the UTC timestamp of the initial full load start time.

**Create a CDC-only task**

1. Remove the `SET FOREIGN_KEY_CHECKS` ECA you set previously.

1. Create the CDC-only task with the start position set to the full load start time noted in the previous step. Alternatively, you can use the binary log position recorded in the previous step. Set the `TargetTablePrepMode` setting to `DO_NOTHING`. Enable data validation by setting the `EnableValidation` setting to `TRUE` if needed.

1. Start the CDC-only task, and monitor the logs for errors.

**Note**  
This workaround only applies to a MySQL to MySQL migration. You can't use this method with the Batch Apply feature, because Batch Apply requires that target tables don't have active foreign keys.

## Troubleshooting issues with PostgreSQL
<a name="CHAP_Troubleshooting.PostgreSQL"></a>

Following, you can learn about troubleshooting issues specific to using AWS DMS with PostgreSQL databases.

**Topics**
+ [JSON data types being truncated](#CHAP_Troubleshooting.PostgreSQL.JSONTruncation)
+ [Columns of a user-defined data type not being migrated correctly](#CHAP_Troubleshooting.PostgreSQL.UserDefinedDataType)
+ [Error: No schema has been selected to create in](#CHAP_Troubleshooting.PostgreSQL.NoSchema)
+ [Deletes and updates to a table aren't being replicated using CDC](#CHAP_Troubleshooting.PostgreSQL.DeletesNotReplicated)
+ [Truncate statements aren't being propagated](#CHAP_Troubleshooting.PostgreSQL.Truncate)
+ [Preventing PostgreSQL from capturing DDL](#CHAP_Troubleshooting.PostgreSQL.NoCaptureDDL)
+ [Selecting the schema where database objects for capturing DDL are created](#CHAP_Troubleshooting.PostgreSQL.SchemaDDL)
+ [Oracle tables missing after migrating to PostgreSQL](#CHAP_Troubleshooting.PostgreSQL.OracleTablesMissing)
+ [ReplicationSlotDiskUsage increases and restart\_lsn stops moving forward during long transactions, such as ETL workloads](#CHAP_Troubleshooting.PostgreSQL.AvoidLongTransactions)
+ [Task using view as a source has no rows copied](#CHAP_Troubleshooting.PostgreSQL.ViewTask)
+ [Invalid byte sequence for encoding "UTF8"](#CHAP_Troubleshooting.PostgreSQL.invalidbyte)

### JSON data types being truncated
<a name="CHAP_Troubleshooting.PostgreSQL.JSONTruncation"></a>

 AWS DMS treats the JSON data type in PostgreSQL as an LOB data type column. This means that the LOB size limitation when you use limited LOB mode applies to JSON data. 

For example, suppose that limited LOB mode is set to 4,096 KB. In this case, any JSON data larger than 4,096 KB is truncated at the 4,096 KB limit and fails the validation test in PostgreSQL.
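
The LOB size cap comes from the task's `LobMaxSize` setting (in KB) when limited LOB mode is enabled, as in the following task settings fragment.

```
"TargetMetadata": {
  "SupportLobs": true,
  "FullLobMode": false,
  "LimitedSizeLobMode": true,
  "LobMaxSize": 4096
}
```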

The following log information shows JSON that was truncated due to the limited LOB mode setting and failed validation.

```
03:00:49
2017-09-19T03:00:49 [TARGET_APPLY ]E: Failed to execute statement: 
  'UPDATE "public"."delivery_options_quotes" SET "id"=? , "enabled"=? , 
  "new_cart_id"=? , "order_id"=? , "user_id"=? , "zone_id"=? , "quotes"=? , 
  "start_at"=? , "end_at"=? , "last_quoted_at"=? , "created_at"=? , 
  "updated_at"=? WHERE "id"=? ' [1022502] (ar_odbc_stmt
2017-09-19T03:00:49 [TARGET_APPLY ]E: Failed to execute statement: 
  'UPDATE "public"."delivery_options_quotes" SET "id"=? , "enabled"=? , 
  "new_cart_id"=? , "order_id"=? , "user_id"=? , "zone_id"=? , "quotes"=? , 
  "start_at"=? , "end_at"=? , "last_quoted_at"=? , "created_at"=? , 
  "updated_at"=? WHERE "id"=? ' [1022502] (ar_odbc_stmt.c:2415)

03:00:49
2017-09-19T03:00:49 [TARGET_APPLY ]E: RetCode: SQL_ERROR SqlState: 
  22P02 NativeError: 1 Message: ERROR: invalid input syntax for type json;, 
  Error while executing the query [1022502] (ar_odbc_stmt.c:2421)
2017-09-19T03:00:49 [TARGET_APPLY ]E: RetCode: SQL_ERROR SqlState: 
  22P02 NativeError: 1 Message: ERROR: invalid input syntax for type json;, 
  Error while executing the query [1022502] (ar_odbc_stmt.c:2421)
```

### Columns of a user-defined data type not being migrated correctly
<a name="CHAP_Troubleshooting.PostgreSQL.UserDefinedDataType"></a>

When replicating from a PostgreSQL source, AWS DMS creates the target table with the same data types for all columns, apart from columns with user-defined data types. In such cases, the data type is created as "character varying" in the target. 

### Error: No schema has been selected to create in
<a name="CHAP_Troubleshooting.PostgreSQL.NoSchema"></a>

In some cases, you might see the error "SQL\_ERROR SqlState: 3F000 NativeError: 7 Message: ERROR: no schema has been selected to create in". 

This error can occur when your JSON table mapping contains a wildcard value for the schema but the source database doesn't support that value.

### Deletes and updates to a table aren't being replicated using CDC
<a name="CHAP_Troubleshooting.PostgreSQL.DeletesNotReplicated"></a>

Delete and update operations during change data capture (CDC) are ignored if the source table doesn't have a primary key. AWS DMS supports change data capture (CDC) for PostgreSQL tables with primary keys. 

If a table doesn't have a primary key, the write-ahead (WAL) logs don't include a before image of the database row. In this case, AWS DMS can't update the table. For delete operations to be replicated, create a primary key on the source table.
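
For example, assuming a hypothetical table `orders` with a unique, non-null `order_id` column, you can add a primary key on the source as follows.

```
ALTER TABLE orders ADD PRIMARY KEY (order_id);
```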

### Truncate statements aren't being propagated
<a name="CHAP_Troubleshooting.PostgreSQL.Truncate"></a>

When using change data capture (CDC), TRUNCATE operations aren't supported by AWS DMS.

### Preventing PostgreSQL from capturing DDL
<a name="CHAP_Troubleshooting.PostgreSQL.NoCaptureDDL"></a>

You can prevent a PostgreSQL target endpoint from capturing DDL statements by adding the following **Endpoint setting** statement.

```
"CaptureDDLs": "N"
```

### Selecting the schema where database objects for capturing DDL are created
<a name="CHAP_Troubleshooting.PostgreSQL.SchemaDDL"></a>

You can control the schema in which the database objects related to capturing DDL are created. Add the following **Endpoint setting** statement. The **Endpoint settings** section is available when you create or modify the source endpoint.

```
"DdlArtifactsSchema": "xyzddlschema"
```

### Oracle tables missing after migrating to PostgreSQL
<a name="CHAP_Troubleshooting.PostgreSQL.OracleTablesMissing"></a>

In this case, your tables and data are generally still accessible. 

Oracle defaults to uppercase table names, and PostgreSQL defaults to lowercase table names. When you perform a migration from Oracle to PostgreSQL, we suggest that you supply certain transformation rules under your task's table-mapping section. These are transformation rules to convert the case of your table names.

If you migrated your tables without using transformation rules to convert the case of your table names, enclose your table names in quotation marks when referencing them.
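
For example, the following table-mapping transformation rules convert all schema and table names to lowercase during the migration.

```
{
  "rules": [
    {
      "rule-type": "transformation",
      "rule-id": "1",
      "rule-name": "1",
      "rule-action": "convert-lowercase",
      "rule-target": "schema",
      "object-locator": { "schema-name": "%" }
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-action": "convert-lowercase",
      "rule-target": "table",
      "object-locator": { "schema-name": "%", "table-name": "%" }
    }
  ]
}
```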

### ReplicationSlotDiskUsage increases and restart\_lsn stops moving forward during long transactions, such as ETL workloads
<a name="CHAP_Troubleshooting.PostgreSQL.AvoidLongTransactions"></a>

When logical replication is enabled, the maximum number of changes kept in memory per transaction is 4 MB. After that, changes are spilled to disk. As a result, `ReplicationSlotDiskUsage` increases, and `restart_lsn` doesn't advance until the transaction completes or aborts and the rollback finishes. Because the transaction is long, the rollback itself can also take a long time.

So, avoid long-running transactions when logical replication is enabled. Instead, break the transaction into several smaller transactions.
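
You can monitor how far `restart_lsn` lags behind the current WAL position with a query like the following on the source database (PostgreSQL 10 or later).

```
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS restart_lag
FROM pg_replication_slots;
```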

### Task using view as a source has no rows copied
<a name="CHAP_Troubleshooting.PostgreSQL.ViewTask"></a>

To migrate a view, set `table-type` to `all` or `view`. For more information, see [Specifying table selection and transformations rules from the console](CHAP_Tasks.CustomizingTasks.TableMapping.Console.md). 

Sources that support views include the following.
+ Oracle
+ Microsoft SQL Server
+ MySQL
+ PostgreSQL
+ IBM Db2 LUW
+ SAP Adaptive Server Enterprise (ASE)

### Invalid byte sequence for encoding "UTF8"
<a name="CHAP_Troubleshooting.PostgreSQL.invalidbyte"></a>

Data migration from Oracle to PostgreSQL using AWS DMS presents unique challenges due to character set encoding differences between the two databases. A significant issue arises from Oracle's AL32UTF8 character set, which fully supports 4-byte characters, while PostgreSQL's UTF8 character set implementation lacks this capability. This disparity often leads to migration failures, particularly when dealing with tables or columns in the Oracle source that contain 4-byte characters.

During migration attempts, you might encounter error messages in both the DMS task logs and the PostgreSQL target database logs indicating invalid UTF8 byte sequences. A typical error message is "ERROR: invalid byte sequence for encoding "UTF8": 0xed 0xb0 0x86". To resolve this issue, AWS DMS provides the `ReplaceChars` settings, which automatically substitute or remove invalid characters during the migration process. This approach prevents encoding-related errors without requiring modifications to the source data.

For more information, see the *Character set validation and replacement* bullet point in the [Character substitution task settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.CharacterSubstitution.html) topic.

## Troubleshooting issues with Microsoft SQL Server
<a name="CHAP_Troubleshooting.SQLServer"></a>

Following, you can learn about troubleshooting issues specific to using AWS DMS with Microsoft SQL Server databases.

**Topics**
+ [Ongoing replication fails after RDS for SQL Server fails over to secondary](#CHAP_Troubleshooting.SQLServer.RepFailover)
+ [Errors capturing changes for a SQL Server database](#CHAP_Troubleshooting.SQLServer.CDCErrors)
+ [Missing identity columns](#CHAP_Troubleshooting.SQLServer.IdentityColumns)
+ [Error: SQL Server doesn't support publications](#CHAP_Troubleshooting.SQLServer.Publications)
+ [Changes do not appear in your target](#CHAP_Troubleshooting.SQLServer.NoChanges)
+ [Non-uniform table mapped across partitions](#CHAP_Troubleshooting.SQLServer.Nonuniform)
+ [Error: CDC task failed with a Bad Envelope, invalid data context/LCX code while processing a transaction](#CHAP_Troubleshooting.SQLServer.badenvelope.cdc)

### Ongoing replication fails after RDS for SQL Server fails over to secondary
<a name="CHAP_Troubleshooting.SQLServer.RepFailover"></a>

If a source SQL Server instance fails over to the secondary, AWS DMS ongoing replication keeps trying to connect, and continues replicating once the source is back online. However, for RDS for SQL Server MAZ instances, under certain circumstances the secondary database owner can be set to `NT AUTHORITY\SYSTEM`. After a failover, this causes the DMS task to fail with the following error:

```
[SOURCE_CAPTURE  ]E:  RetCode: SQL_ERROR  SqlState: 42000 NativeError: 33009 Message: 
                [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The database owner SID recorded in the master 
                database differs from the database owner SID recorded in database 'rdsadmin'. You should correct 
                this situation by resetting the owner of database 'rdsadmin' using the ALTER AUTHORIZATION statement. 
                Line: 1 Column: -1 [1022502]  (ar_odbc_stmt.c:5035)
```

To fix this, follow the steps in [Changing the db\_owner to the rdsa account for your database](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.ChangeDBowner.html), and then resume your DMS task.

### Errors capturing changes for a SQL Server database
<a name="CHAP_Troubleshooting.SQLServer.CDCErrors"></a>

Errors during change data capture (CDC) can often indicate that one of the prerequisites wasn't met. For example, the most common overlooked prerequisite is a full database backup. The task log indicates this omission with the following error:

```
SOURCE_CAPTURE E: No FULL database backup found (under the 'FULL' recovery model). 
To enable all changes to be captured, you must perform a full database backup. 
120438 Changes may be missed. (sqlserver_log_queries.c:2623)
```

Review the prerequisites listed for using SQL Server as a source in [Using a Microsoft SQL Server database as a source for AWS DMS](CHAP_Source.SQLServer.md).
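
To satisfy this prerequisite on a self-managed SQL Server, take a full backup before starting the task. The database and file names here are hypothetical.

```
BACKUP DATABASE mydb
TO DISK = 'D:\backups\mydb_full.bak'
WITH INIT;
```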

### Missing identity columns
<a name="CHAP_Troubleshooting.SQLServer.IdentityColumns"></a>

AWS DMS doesn't support identity columns when you create a target schema. You must add them after the initial load has completed.

### Error: SQL Server doesn't support publications
<a name="CHAP_Troubleshooting.SQLServer.Publications"></a>

The following error is generated when you use SQL Server Express as a source endpoint:

```
RetCode: SQL_ERROR SqlState: HY000 NativeError: 21106 
Message: This edition of SQL Server does not support publications.
```

AWS DMS currently doesn't support SQL Server Express as a source or target.

### Changes do not appear in your target
<a name="CHAP_Troubleshooting.SQLServer.NoChanges"></a>

AWS DMS requires that a source SQL Server database be in either 'FULL' or 'BULK LOGGED' data recovery model in order to consistently capture changes. The 'SIMPLE' model isn't supported. 

The SIMPLE recovery model logs the minimal information needed to allow users to recover their database. All inactive log entries are automatically truncated when a checkpoint occurs. 

All operations are still logged. However, as soon as a checkpoint occurs the log is automatically truncated. This truncation means that the log becomes available for reuse and older log entries can be overwritten. When log entries are overwritten, changes can't be captured. This issue is why AWS DMS does not support the SIMPLE data recovery model. For information on other required prerequisites for using SQL Server as a source, see [Using a Microsoft SQL Server database as a source for AWS DMS](CHAP_Source.SQLServer.md).
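
You can check and, if needed, change the recovery model on a self-managed SQL Server with T-SQL like the following. The database name is hypothetical; on RDS for SQL Server, the recovery model is managed through the automated backup setting.

```
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'mydb';

ALTER DATABASE mydb SET RECOVERY FULL;
```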

### Non-uniform table mapped across partitions
<a name="CHAP_Troubleshooting.SQLServer.Nonuniform"></a>

During change data capture (CDC), migration of a table with a specialized structure is suspended when AWS DMS can't properly perform CDC on the table. Messages like these are issued:

```
[SOURCE_CAPTURE ]W: Table is not uniformly mapped across partitions. Therefore - it is excluded from CDC (sqlserver_log_metadata.c:1415)
[SOURCE_CAPTURE ]I: Table has been mapped and registered for CDC. (sqlserver_log_metadata.c:835)
```

When running CDC on SQL Server tables, AWS DMS parses the SQL Server tlogs. On each tlog record, AWS DMS parses hexadecimal values containing data for columns that were inserted, updated, or deleted during a change. 

To parse the hexadecimal record, AWS DMS reads the table metadata from the SQL Server system tables. Those system tables identify what the specially structured table columns are and reveal some of their internal properties, such as "xoffset" and "null bit position". 

AWS DMS expects that metadata to be the same for all raw partitions of the table. But in some cases, specially structured tables don't have the same metadata on all of their partitions. In these cases, AWS DMS can suspend CDC on that table to avoid parsing changes incorrectly and providing the target with incorrect data. Workarounds include the following:
+ If the table has a clustered index, perform an index rebuild.
+ If the table does not have a clustered index, add a clustered index to the table (you can drop it later if you want).
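
For example, the workarounds above can be performed with T-SQL like the following; the table and index names are hypothetical.

```
-- Rebuild an existing clustered index
ALTER INDEX cix_mytable ON dbo.mytable REBUILD;

-- Or, if the table has no clustered index, add one
CREATE CLUSTERED INDEX cix_mytable ON dbo.mytable (id);
```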

### Error: CDC task failed with a Bad Envelope, invalid data context/LCX code while processing a transaction
<a name="CHAP_Troubleshooting.SQLServer.badenvelope.cdc"></a>

The 'Bad Envelope' error occurs when AWS DMS is unable to validate specific event types in CDC phase replication during the validation process. This error can commonly occur when resuming tasks from a specific timestamp that falls in the middle of a transaction. In such cases, the task might read a commit event without finding its corresponding 'start transaction' event, leading to an invalid transaction context, and triggering the 'Bad Envelope' error. 

To resolve this issue, you must modify the SQL Server source endpoint configuration by setting the `ignoreTxnCtxValidityCheck` parameter to `true` in the Extra Connection Attribute section, before resuming the task. If the error persists after implementing this solution, submit an AWS [support ticket](https://docs.aws.amazon.com/awssupport/latest/user/case-management.html).

## Troubleshooting issues with Amazon Redshift
<a name="CHAP_Troubleshooting.Redshift"></a>

Following, you can learn about troubleshooting issues specific to using AWS DMS with Amazon Redshift databases.

**Topics**
+ [Loading in to an Amazon Redshift cluster in a different AWS Region](#CHAP_Troubleshooting.Redshift.Regions)
+ [Error: Relation "awsdms\_apply\_exceptions" already exists](#CHAP_Troubleshooting.Redshift.AlreadyExists)
+ [Errors with tables whose name begins with "awsdms\_changes"](#CHAP_Troubleshooting.Redshift.Changes)
+ [Seeing tables in clusters with names like dms.awsdms\_changes000000000XXXX](#CHAP_Troubleshooting.Redshift.TempTables)
+ [Permissions required to work with Amazon Redshift](#CHAP_Troubleshooting.Redshift.Permissions)

### Loading in to an Amazon Redshift cluster in a different AWS Region
<a name="CHAP_Troubleshooting.Redshift.Regions"></a>

You can't load into an Amazon Redshift cluster in a different AWS Region than your AWS DMS replication instance. DMS requires that your replication instance and your Amazon Redshift cluster be in the same Region.

### Error: Relation "awsdms\_apply\_exceptions" already exists
<a name="CHAP_Troubleshooting.Redshift.AlreadyExists"></a>

The error "Relation 'awsdms\_apply\_exceptions' already exists" often occurs when a Redshift endpoint is specified as a PostgreSQL endpoint. To fix this issue, modify the endpoint and change the **Target engine** to "redshift."

### Errors with tables whose name begins with "awsdms\_changes"
<a name="CHAP_Troubleshooting.Redshift.Changes"></a>

Error messages for tables with names that begin with "awsdms\_changes" can occur when two tasks that load data into the same Amazon Redshift cluster run concurrently. Due to the way temporary tables are named, concurrent tasks can conflict when updating the same table.

### Seeing tables in clusters with names like dms.awsdms\_changes000000000XXXX
<a name="CHAP_Troubleshooting.Redshift.TempTables"></a>

AWS DMS creates temporary tables when data is being loaded from files stored in Amazon S3. The names of these temporary tables each have the prefix `dms.awsdms_changes`. These tables are required so AWS DMS can store data when it is first loaded and before it is placed in its final target table.

### Permissions required to work with Amazon Redshift
<a name="CHAP_Troubleshooting.Redshift.Permissions"></a>

To use AWS DMS with Amazon Redshift, the user account that you use to access Amazon Redshift must have the following permissions:
+ CRUD (Select, Insert, Update, Delete) 
+ Bulk load
+ Create, alter, drop (if required by the task's definition)

To see the prerequisites required for using Amazon Redshift as a target, see [Using an Amazon Redshift database as a target for AWS Database Migration Service](CHAP_Target.Redshift.md).

## Troubleshooting issues with Amazon Aurora MySQL
<a name="CHAP_Troubleshooting.Aurora"></a>

Following, you can learn about troubleshooting issues specific to using AWS DMS with Amazon Aurora MySQL databases.

**Topics**
+ [Error: CHARACTER SET UTF8 fields terminated by ',' enclosed by '"' lines terminated by '\n'](#CHAP_Troubleshooting.Aurora.ANSIQuotes)

### Error: CHARACTER SET UTF8 fields terminated by ',' enclosed by '"' lines terminated by '\n'
<a name="CHAP_Troubleshooting.Aurora.ANSIQuotes"></a>

If you are using Amazon Aurora MySQL as a target, you might see an error like the following in the logs. This type of error usually indicates that you have ANSI\_QUOTES as part of the SQL\_MODE parameter. Having ANSI\_QUOTES as part of the SQL\_MODE parameter causes double quotation marks to be treated as identifier quotes and can create issues when you run a task. 

To fix this error, remove ANSI\_QUOTES from the SQL\_MODE parameter.

```
2016-11-02T14:23:48 [TARGET_LOAD ]E: Load data sql statement. load data local infile 
"/rdsdbdata/data/tasks/7XO4FJHCVON7TYTLQ6RX3CQHDU/data_files/4/LOAD000001DF.csv" into table 
`VOSPUSER`.`SANDBOX_SRC_FILE` CHARACTER SET UTF8 fields terminated by ',' 
enclosed by '"' lines terminated by '\n'( `SANDBOX_SRC_FILE_ID`,`SANDBOX_ID`,
`FILENAME`,`LOCAL_PATH`,`LINES_OF_CODE`,`INSERT_TS`,`MODIFIED_TS`,`MODIFIED_BY`,
`RECORD_VER`,`REF_GUID`,`PLATFORM_GENERATED`,`ANALYSIS_TYPE`,`SANITIZED`,`DYN_TYPE`,
`CRAWL_STATUS`,`ORIG_EXEC_UNIT_VER_ID` ) ; (provider_syntax_manager.c:2561)
```
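
You can check whether ANSI\_QUOTES is enabled by inspecting the SQL mode; for Aurora MySQL, change the value in the DB cluster parameter group.

```
SELECT @@GLOBAL.sql_mode;
```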

## Troubleshooting issues with SAP ASE
<a name="CHAP_Troubleshooting.SAP"></a>

Following, you can learn about troubleshooting issues specific to using AWS DMS with SAP ASE databases.

### Error: LOB columns have NULL values when source has a composite unique index with NULL values
<a name="CHAP_Troubleshooting.SAP"></a>

When using SAP ASE as a source with tables configured with a composite unique index that allows NULL values, LOB values might not migrate during ongoing replication. This behavior is usually the result of ANSI\_NULL being set to 1 by default on the DMS replication instance client.

To ensure that LOB fields migrate correctly, include the Endpoint setting `'AnsiNull=0'` to the AWS DMS source endpoint for the task.
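As a sketch, you can apply this setting with the AWS CLI by modifying the source endpoint's extra connection attributes (replace the placeholder with your endpoint's ARN):

```
aws dms modify-endpoint \
    --endpoint-arn your-source-endpoint-arn \
    --extra-connection-attributes "AnsiNull=0"
```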

## Troubleshooting issues with IBM Db2
<a name="CHAP_Troubleshooting.Db2"></a>

Following, you can learn about troubleshooting issues specific to using AWS DMS with IBM Db2 databases.

### Error: Resume from timestamp is not supported
<a name="CHAP_Troubleshooting.Db2.timestamp"></a>

For ongoing replication (CDC), if you plan to start replication from a specific timestamp, set the extra connection attribute `StartFromContext` to the required timestamp. For more information, see [Extra Connection Attributes (ECAs) when using Db2 LUW](CHAP_Source.DB2.md#CHAP_Source.DB2.ConnectionAttrib). Setting `StartFromContext` to the required timestamp prevents the following issue:

```
Last Error Resume from timestamp is not supported Task error notification received from 
subtask 0, thread 0 [reptask/replicationtask.c:2822] [1020455] 'Start from timestamp' was blocked to prevent Replicate from 
scanning the log (to find the timestamp). When using IBM DB2 for LUW, 'Start from timestamp' is only supported if an actual 
change was captured by this Replicate task earlier to the specified timestamp.
```

## Table suspended with error "Failed to build 'where' statement"
<a name="CHAP_Troubleshooting.table.suspended"></a>

In AWS DMS, when you try to update a record in a table that doesn't have a primary key, the replication task can't construct a WHERE clause and displays the following error:

```
 [TARGET_APPLY ]E: Failed to build 'where' statement
```

This can occur due to several known issues or limitations:
+ The primary key column is removed by a `remove-column` transformation rule.
+ The table structure differs between your source and target databases; for example, the primary key column exists on the source but is missing, or was removed, on the target.
+ Known limitations or missing prerequisites:
  + Supplemental logging isn't properly enabled on Oracle tables.
  + The Oracle table was created with long object names (over 30 bytes), such as table or column names.
  + Replication from an Oracle application container (PDB).
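Because most of these causes come down to a missing primary key, a useful first check is to list tables without one. The following is a sketch for MySQL-compatible engines; adapt the `information_schema` filter to your engine:

```
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
  AND c.table_name = t.table_name
  AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');
```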

# Troubleshooting latency issues in AWS Database Migration Service
<a name="CHAP_Troubleshooting_Latency"></a>

This section provides an overview of the common causes for AWS DMS task latency during the ongoing replication phase (CDC). AWS DMS replicates data asynchronously. Latency is the delay between when a change was committed on the source and when the change was replicated to the target. Latency can be caused by misconfiguration of replication components, such as the following: 
+ Source endpoint or data source
+ Target endpoint or data source
+ Replication instances
+ The network between these components

We recommend that you use a test migration as a proof of concept to gather information about your replication. You can then use this information for tuning your replication configuration to minimize latency. For information about running a proof of concept migration, see [Running a proof of concept](CHAP_BestPractices.md#CHAP_BestPractices.RunPOC).

**Topics**
+ [Types of CDC latency](#CHAP_Troubleshooting_Latency_Types)
+ [Common causes of CDC latency](#CHAP_Troubleshooting_Latency_Causes)
+ [Troubleshooting latency issues](CHAP_Troubleshooting_Latency_Troubleshooting.md)

## Types of CDC latency
<a name="CHAP_Troubleshooting_Latency_Types"></a>

This section contains types of replication latency that may occur during CDC.

### Source latency
<a name="CHAP_Troubleshooting_Latency_Types_Source"></a>

The delay, in seconds, between the commit time of the last event captured from the source endpoint, and the current system timestamp of the replication instance. You can monitor the latency between the data source and your replication instance using the `CDCLatencySource` CloudWatch metric. A high `CDCLatencySource` metric indicates that the process of capturing changes from the source is delayed. For example, if your application commits an insert to the source at 10:00, and AWS DMS consumes the change at 10:02, the `CDCLatencySource` metric is 120 seconds. 

For information about CloudWatch metrics for AWS DMS, see [Replication task metrics](CHAP_Monitoring.md#CHAP_Monitoring.Metrics.Task).

### Target latency
<a name="CHAP_Troubleshooting_Latency_Types_Target"></a>

The delay, in seconds, between the commit time on the source of the first event waiting to commit to the target, and the current timestamp of the DMS replication instance. You can monitor the latency between commits on the data source and your data target using the `CDCLatencyTarget` CloudWatch metric. This means that `CDCLatencyTarget` includes any delays in reading from the source. As a result, `CDCLatencyTarget` is always greater than or equal to `CDCLatencySource`.

For example, if your application commits an insert to the source at 10:00, and AWS DMS consumes it at 10:02 and writes it to the target at 10:05, the `CDCLatencyTarget` metric is 300 seconds.
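The relationship between the two metrics in this example can be sketched as follows (illustrative only; DMS computes these metrics internally):

```python
# Sketch of how CDCLatencySource and CDCLatencyTarget relate for the
# example above (illustrative only; not part of any DMS API).
from datetime import datetime

commit_on_source  = datetime(2024, 1, 1, 10, 0, 0)   # application commits
consumed_by_dms   = datetime(2024, 1, 1, 10, 2, 0)   # DMS reads the change
applied_to_target = datetime(2024, 1, 1, 10, 5, 0)   # DMS commits on target

cdc_latency_source = (consumed_by_dms - commit_on_source).total_seconds()
cdc_latency_target = (applied_to_target - commit_on_source).total_seconds()

print(cdc_latency_source, cdc_latency_target)  # 120.0 300.0

# Target latency includes the time spent reading from the source, so it is
# always greater than or equal to source latency.
assert cdc_latency_target >= cdc_latency_source
```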

## Common causes of CDC latency
<a name="CHAP_Troubleshooting_Latency_Causes"></a>

This section contains causes of latency that your replication may experience during CDC.

**Topics**
+ [Endpoint resources](#CHAP_Troubleshooting_Latency_Causes_Endpoint)
+ [Replication instance resources](#CHAP_Troubleshooting_Latency_Causes_Replication_Instance)
+ [Network speed and bandwidth](#CHAP_Troubleshooting_Latency_Causes_Replication_Network)
+ [DMS configuration](#CHAP_Troubleshooting_Latency_Causes_Replication_DMS_Config)
+ [Replication scenarios](#CHAP_Troubleshooting_Latency_Causes_Replication_Scenarios)

### Endpoint resources
<a name="CHAP_Troubleshooting_Latency_Causes_Endpoint"></a>

The following factors significantly affect replication performance and latency:
+ Source and target database configurations
+ Instance size
+ Under-provisioned or misconfigured source or target data stores

To identify causes for latency caused by endpoint issues for AWS-hosted sources and targets, monitor the following CloudWatch metrics:
+ `FreeMemory`
+ `CPUUtilization`
+ Throughput and I/O metrics, such as `WriteIOPS`, `WriteThroughput`, or `ReadLatency`
+ Transaction volume metrics, such as `CDCIncomingChanges`

For information about monitoring CloudWatch metrics, see [AWS Database Migration Service metrics](CHAP_Monitoring.md#CHAP_Monitoring.Metrics).

### Replication instance resources
<a name="CHAP_Troubleshooting_Latency_Causes_Replication_Instance"></a>

Replication instance resources are critical for replication, and you should make sure that there are no resource bottlenecks, as they can lead to both source and target latency.

To identify resource bottlenecks for your replication instance, verify the following:
+ Critical CloudWatch metrics such as CPU, Memory, I/O per second, and storage are not experiencing spikes or consistently high values.
+ Your replication instance is sized appropriately for your workload. For information about determining the correct size of a replication instance, see [Selecting the best size for a replication instance](CHAP_BestPractices.SizingReplicationInstance.md).

### Network speed and bandwidth
<a name="CHAP_Troubleshooting_Latency_Causes_Replication_Network"></a>

Network bandwidth is a factor that affects data transmission. To analyze the network performance of your replication, do one of the following:
+ Check the `ReadThroughput` and `WriteThroughput` metrics at the instance level. For information about monitoring CloudWatch metrics, see [AWS Database Migration Service metrics](CHAP_Monitoring.md#CHAP_Monitoring.Metrics).
+ Use the AWS DMS Diagnostic Support AMI. If the Diagnostic Support AMI is not available in your region, you can download it from any supported region and copy it to your region to perform your network analysis. For information about the Diagnostic Support AMI, see [Working with the AWS DMS diagnostic support AMI](CHAP_SupportAmi.md).

CDC in AWS DMS is single-threaded to ensure data consistency. As a result, you can determine the data volume your network can support by calculating your single-threaded data transfer rate. For example, if your task connects to its source using a 100 Mbps (megabits per second) network, your replication has a theoretical maximum bandwidth allocation of 12.5 MBps (megabytes per second). This is equal to 45 gigabytes per hour. If the rate of transaction log generation on the source is greater than 45 gigabytes per hour, the task accumulates CDC latency. These rates are theoretical maximums; other factors, such as network traffic and resource overhead on the source and target, reduce the actual available bandwidth.
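The bandwidth arithmetic above can be sketched as follows (an illustrative helper, not part of any DMS API):

```python
# Sketch of the single-threaded bandwidth calculation above
# (illustrative helper, not a DMS API).
def max_hourly_transfer_gb(network_mbps: float) -> float:
    """Theoretical maximum transfer, in gigabytes per hour, for a given
    network speed in megabits per second (1 byte = 8 bits, 1 GB = 1000 MB)."""
    megabytes_per_second = network_mbps / 8       # 100 Mbps -> 12.5 MBps
    return megabytes_per_second * 3600 / 1000     # per hour, in gigabytes

# A 100 Mbps link supports at most ~45 GB of log data per hour; if the
# source generates redo/binlog faster than this, CDC latency accumulates.
print(max_hourly_transfer_gb(100))  # 45.0
```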

### DMS configuration
<a name="CHAP_Troubleshooting_Latency_Causes_Replication_DMS_Config"></a>

This section contains recommended replication configurations that can help reduce latency.
+ **Endpoint settings**: Your source and target endpoint settings can cause your replication instance to suffer poor performance. Endpoint settings that turn on resource-intensive features will impact performance. For example, for an Oracle endpoint, disabling LogMiner and using Binary Reader improves performance, since LogMiner is resource-intensive. The following endpoing setting improves performance for an Oracle endpoint:

  ```
  useLogminerReader=N;useBfile=Y
  ```

  For more information about endpoint settings, see the documentation for your source and target endpoint engine in the [Working with AWS DMS endpoints](CHAP_Endpoints.md) topic.
+ **Task settings**: Some task settings for your particular replication scenario can cause your replication instance to suffer poor performance. For example, AWS DMS uses transactional apply mode by default (`BatchApplyEnabled=false`) for CDC for all endpoints except for Amazon Redshift. However, for sources with a large number of changes, setting `BatchApplyEnabled` to `true` may improve performance.

  For more information about task settings, see [Specifying task settings for AWS Database Migration Service tasks](CHAP_Tasks.CustomizingTasks.TaskSettings.md).
+ **Start position of a CDC-only task**: Starting a CDC-only task from a position or timestamp in the past starts the task with increased CDC source latency. Depending on the volume of changes on the source, this latency takes time to subside. 
+ **LOB settings**: Large Object data types can hinder replication performance due to the way AWS DMS replicates large binary data. For more information, see the following topics:
  + [Setting LOB support for source databases in an AWS DMS task](CHAP_Tasks.LOBSupport.md)
  + [Migrating large binary objects (LOBs)](CHAP_BestPractices.md#CHAP_BestPractices.LOBS).

### Replication scenarios
<a name="CHAP_Troubleshooting_Latency_Causes_Replication_Scenarios"></a>

This section describes specific replication scenarios and how they may affect latency.

**Topics**
+ [Stopping a task for an extended period of time](#CHAP_Troubleshooting_Latency_Causes_Replication_Scenarios_Stoptask)
+ [Cached changes](#CHAP_Troubleshooting_Latency_Causes_Replication_Scenarios_Cachedchanges)
+ [Cross-region replication](#CHAP_Troubleshooting_Latency_Causes_Replication_Scenarios_Crossregion)

#### Stopping a task for an extended period of time
<a name="CHAP_Troubleshooting_Latency_Causes_Replication_Scenarios_Stoptask"></a>

When you stop a task, AWS DMS saves the position of the last transaction log that was read from the source. When you resume the task, DMS tries to continue reading from the same transaction log position. Resuming a task after several hours or days causes CDC source latency to increase until DMS finishes consuming the transaction backlog.

#### Cached changes
<a name="CHAP_Troubleshooting_Latency_Causes_Replication_Scenarios_Cachedchanges"></a>

**Cached changes** are changes that your application writes to the data source while AWS DMS runs the full-load replication phase. DMS doesn't apply these changes until the full-load phase completes and the CDC phase starts. For a source with a large number of transactions, cached changes take longer to apply, so source latency increases when the CDC phase starts. We recommend that you run the full-load phase when transaction volumes are low to minimize the number of cached changes.

#### Cross-region replication
<a name="CHAP_Troubleshooting_Latency_Causes_Replication_Scenarios_Crossregion"></a>

Locating your DMS endpoints or your replication instance in different AWS Regions increases network latency, which in turn increases replication latency. For best performance, locate your source endpoint, target endpoint, and replication instance in the same AWS Region.

# Troubleshooting latency issues
<a name="CHAP_Troubleshooting_Latency_Troubleshooting"></a>

This section contains troubleshooting steps for replication latency.

To troubleshoot latency, do the following:
+ First, determine the type and amount of latency for the task. Check the task's Table Statistics section from the DMS console or CLI. If the counters are changing, then data transmission is in progress. Check the `CDCLatencySource` and `CDCLatencyTarget` metrics together to determine if there's a bottleneck during CDC.
+ If high `CDCLatencySource` or `CDCLatencyTarget` metrics indicate a bottleneck in your replication, check the following:
  + If `CDCLatencySource` is high and `CDCLatencyTarget` is equal to `CDCLatencySource`, this indicates that there is a bottleneck in your source endpoint, and AWS DMS is writing data to the target smoothly. See [Troubleshooting source latency issues](CHAP_Troubleshooting_Latency_Source.md) following.
  + If `CDCLatencySource` is low and `CDCLatencyTarget` is high, this indicates that there is a bottleneck in your target endpoint, and AWS DMS is reading data from the source smoothly. See [Troubleshooting target latency issues](CHAP_Troubleshooting_Latency_Target.md) following.
  + If `CDCLatencySource` is high and `CDCLatencyTarget` is significantly higher than `CDCLatencySource`, this indicates bottlenecks on both source reads and target writes. Investigate source latency first, and then investigate target latency.

For information about monitoring DMS task metrics, see [Monitoring AWS DMS tasks](CHAP_Monitoring.md). 

# Troubleshooting source latency issues
<a name="CHAP_Troubleshooting_Latency_Source"></a>

The following topics describe replication scenarios specific to source endpoint types.

**Topics**
+ [Oracle Endpoint Troubleshooting](CHAP_Troubleshooting_Latency_Source_Oracle.md)
+ [MySQL Endpoint Troubleshooting](CHAP_Troubleshooting_Latency_Source_MySQL.md)
+ [PostgreSQL Endpoint Troubleshooting](CHAP_Troubleshooting_Latency_Source_PostgreSQL.md)
+ [SQL Server Endpoint Troubleshooting](CHAP_Troubleshooting_Latency_Source_SQLServer.md)

# Oracle Endpoint Troubleshooting
<a name="CHAP_Troubleshooting_Latency_Source_Oracle"></a>

This section contains replication scenarios specific to Oracle.

## Source reading paused
<a name="CHAP_Troubleshooting_Latency_Source_Oracle_Sourcereadingpaused"></a>

AWS DMS pauses reading from an Oracle source in the following scenarios. This behavior is by design. You can investigate the causes for this using the task log. Look for messages similar to the following in the task log. For information about working with the task log, see [Viewing and managing AWS DMS task logs](CHAP_Monitoring.md#CHAP_Monitoring.ManagingLogs).
+ **SORTER message**: This indicates that DMS is caching transactions on the replication instance. For more information, see [SORTER message in task log](CHAP_Troubleshooting_Latency_Target.md#CHAP_Troubleshooting_Latency_Target_Sorter) following.
+ **Debug task logs**: If DMS interrupts the read process, your task repeatedly writes the following message to the debug task logs, without a change to the context field or timestamp:
  + **Binary reader**: 

    ```
    [SOURCE_CAPTURE  ]T:  Produce CTI event: 
    context '00000020.f23ec6e5.00000002.000a.00.0000:190805.3477731.16' 
    xid [00000000001e0018] timestamp '2021-07-19 06:57:55' 
    thread 2  (oradcdc_oralog.c:817)
    ```
  + **LogMiner**: 

    ```
    [SOURCE_CAPTURE  ]T:  Produce INSERT event: 
    object id 1309826 context '000000000F2CECAA010000010005A8F500000275016C0000000000000F2CEC58' 
    xid [000014e06411d996] timestamp '2021-08-12 09:20:32' thread 1  (oracdc_reader.c:2269)
    ```
+ AWS DMS logs the following message for every new redo or archived log operation.

  ```
  00007298: 2021-08-13T22:00:34 [SOURCE_CAPTURE ]I: Start processing archived Redo log sequence 14850 thread 2 name XXXXX/XXXXX/ARCHIVELOG/2021_08_14/thread_2_seq_14850.22977.1080547209 (oradcdc_redo.c:754)
  ```

  If the source has new redo or archived log operations, and AWS DMS is not writing these messages to the log, this means that the task is not processing events.

## High redo generation
<a name="CHAP_Troubleshooting_Latency_Source_Oracle_Highredo"></a>

If your task is processing redo or archived logs, but the source latency remains high, try to identify the redo log generation rate and generation patterns. If you have a high level of redo log generation, this increases source latency, because your task reads all of the redo and archive logs in order to fetch changes related to the replicated tables. 

To determine the redo generation rate, use the following queries.
+ Per-day redo generation rate:

  ```
  select trunc(COMPLETION_TIME,'DD') Day, thread#, 
  round(sum(BLOCKS*BLOCK_SIZE)/1024/1024/1024) GB,
  count(*) Archives_Generated from v$archived_log 
  where completion_time > sysdate- 1
  group by trunc(COMPLETION_TIME,'DD'),thread# order by 1;
  ```
+ Per-hour redo generation rate:

  ```
  Alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
  select trunc(COMPLETION_TIME,'HH') Hour,thread# , 
  round(sum(BLOCKS*BLOCK_SIZE)/1024/1024) "REDO PER HOUR (MB)",
  count(*) Archives from v$archived_log 
  where completion_time > sysdate- 1
  group by trunc(COMPLETION_TIME,'HH'),thread#  order by 1 ;
  ```

To troubleshoot latency in this scenario, check the following:
+ Check the network bandwidth and single-thread performance of your replication to ensure that your underlying network can support the source redo generation rate. For information about how network bandwidth can affect replication performance, see [Network speed and bandwidth](CHAP_Troubleshooting_Latency.md#CHAP_Troubleshooting_Latency_Causes_Replication_Network) prior.
+ Check if you set up supplemental logging correctly. Avoid extra logging on the source, such as enabling logging on all columns of a table. For information about setting up supplemental logging, see [Setting up supplemental logging](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Self-Managed.Configuration.SupplementalLogging). 
+ Verify that you are using the correct API to read the redo or archived logs. You can use either Oracle LogMiner or AWS DMS Binary Reader. While LogMiner reads the online redo logs and archived redo log files, Binary Reader reads and parses the raw redo log files directly. As a result, Binary Reader is more performant. We recommend that you use Binary Reader if your redo log generation is more than 10 GB/hour. For more information, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](CHAP_Source.Oracle.md#CHAP_Source.Oracle.CDC).
+ Check if you set `ArchivedLogsOnly` to `Y`. If this endpoint setting is set, AWS DMS reads from the archived redo logs. This increases source latency, because AWS DMS waits for the online redo log to be archived before reading. For more information, see [ArchivedLogsOnly](https://docs.aws.amazon.com/dms/latest/APIReference/API_OracleSettings.html#DMS-Type-OracleSettings-ArchivedLogsOnly).
+ If your Oracle source uses Automatic Storage Management (ASM), see [Storing REDO on Oracle ASM when using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.REDOonASM) for information about how to properly configure your data store. You may also be able to optimize reading performance further by using the `asmUsePLSQLArray` extra connection attribute (ECA). For information about using `asmUsePLSQLArray`, see [Endpoint settings when using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.ConnectionAttrib).

# MySQL Endpoint Troubleshooting
<a name="CHAP_Troubleshooting_Latency_Source_MySQL"></a>

This section contains replication scenarios specific to MySQL. AWS DMS scans the MySQL binary log periodically to replicate changes. This process can increase latency in the following scenarios:

**Topics**
+ [Long-running transaction on source](#CHAP_Troubleshooting_Latency_Source_MySQL_Longrunning)
+ [High workload on source](#CHAP_Troubleshooting_Latency_Source_MySQL_Highworkload)
+ [Binary log contention](#CHAP_Troubleshooting_Latency_Source_MySQL_Binarylog)

## Long-running transaction on source
<a name="CHAP_Troubleshooting_Latency_Source_MySQL_Longrunning"></a>

Since MySQL only writes committed transactions to the binary log, long-running transactions cause latency spikes proportional to the query run time.

To identify long-running transactions, use the following query, or use the slow query log:

```
SHOW FULL PROCESSLIST;
```

For information about using the slow query log, see [The Slow Query Log](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) in the [MySQL documentation](https://dev.mysql.com/doc/).

To avoid latency spikes from long-running transactions, restructure your source transactions to either reduce the query run time or increase your commit frequency.
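To see open InnoDB transactions and how long they have been running, you can also query `information_schema.innodb_trx` (a sketch for MySQL and Aurora MySQL sources):

```
SELECT trx_id, trx_started,
       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS seconds_running
FROM information_schema.innodb_trx
ORDER BY trx_started;
```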

## High workload on source
<a name="CHAP_Troubleshooting_Latency_Source_MySQL_Highworkload"></a>

Because DMS CDC is single-threaded, a large number of transactions can increase source latency. To identify whether source latency is due to a heavy workload, compare the number and size of the binary logs generated during the latency period to the logs generated before the latency. To check the binary logs and DMS CDC thread status, use the following queries:

```
SHOW BINARY LOGS;
SHOW PROCESSLIST;
```

For more information about CDC binary log dump thread states, see [Replication Source Thread States](https://dev.mysql.com/doc/refman/8.0/en/source-thread-states.html).

You can determine the latency by comparing the latest binary log position generated on the source with the event DMS is currently processing. To identify the latest binary log on the source, do the following:
+ Enable debug logs on the SOURCE_CAPTURE component.
+ Retrieve the binary log file and position that DMS is currently processing from the task debug logs.
+ Use the following query to identify the latest binary log on the source:

  ```
  SHOW MASTER STATUS;
  ```

To further optimize performance, tune the `EventsPollInterval`. By default, DMS polls the binary log every 5 seconds, but you may improve performance by reducing this value. For more information about the `EventsPollInterval` setting, see [Endpoint settings when using MySQL as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.ConnectionAttrib).
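For example, the following endpoint setting polls the binary log every second (a sketch; the value is in seconds, and the default is 5 — confirm the exact setting name against the MySQL endpoint settings documentation):

```
eventsPollInterval=1
```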

## Binary log contention
<a name="CHAP_Troubleshooting_Latency_Source_MySQL_Binarylog"></a>

When migrating multiple tables with a large amount of data, we recommend splitting tables into separate tasks for MySQL 5.7.2 or later. In MySQL versions 5.7.2 and later, the master dump thread has less lock contention and improved throughput, because it no longer locks the binary log whenever it reads an event. This means that multiple dump threads can read the binary log file concurrently, and that dump threads can read the binary log while clients write to it. For more information about dump threads, see [Replication Threads](https://dev.mysql.com/doc/refman/8.0/en/replication-threads.html) and the [MySQL 5.7.2 release notes](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-2.html).

To improve replication performance for MySQL sources versions prior to 5.7.2, try consolidating tasks with CDC components.

# PostgreSQL Endpoint Troubleshooting
<a name="CHAP_Troubleshooting_Latency_Source_PostgreSQL"></a>

This section contains replication scenarios specific to PostgreSQL.

**Topics**
+ [Long-running transaction on source](#CHAP_Troubleshooting_Latency_Source_PostgreSQL_Longrunning)
+ [High workload on source](#CHAP_Troubleshooting_Latency_Source_PostgreSQL_Highworkload)
+ [High network throughput](#CHAP_Troubleshooting_Latency_Source_PostgreSQL_Highnetwork)
+ [Spill files in Aurora PostgreSQL](#CHAP_Troubleshooting_Latency_Source_PostgreSQL_Spill)

## Long-running transaction on source
<a name="CHAP_Troubleshooting_Latency_Source_PostgreSQL_Longrunning"></a>

When there are long-running transactions in the source database, such as a few thousand inserts in a single transaction, the DMS CDC event and transaction counters do not increase until the transaction is complete. This delay can cause latency issues that you can measure using the `CDCLatencySource` metric.

To review long-running transactions, do one of the following:
+ Use the `pg_replication_slots` view. If the `restart_lsn` value isn't updating, it is likely that PostgreSQL is unable to release Write Ahead Logs (WALs) due to long-running active transactions. For information about the `pg_replication_slots` view, see [pg_replication_slots](https://www.postgresql.org/docs/15/view-pg-replication-slots.html) in the [PostgreSQL 15.4 Documentation](https://www.postgresql.org/docs/15/).
+ Use the following query to return a list of all active queries in the database, along with related information: 

  ```
  SELECT pid, age(clock_timestamp(), query_start), usename, query 
  FROM pg_stat_activity WHERE query != '<IDLE>' 
  AND query NOT ILIKE '%pg_stat_activity%'
  ORDER BY query_start desc;
  ```

  In the query results, the `age` field shows the active duration of each query, which you can use to identify long-running queries.
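To quantify how far behind a replication slot is, you can compare its `restart_lsn` with the current WAL position (a sketch for PostgreSQL 10 and higher, run on the primary instance):

```
SELECT slot_name, restart_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS restart_lag_bytes
FROM pg_replication_slots;
```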

## High workload on source
<a name="CHAP_Troubleshooting_Latency_Source_PostgreSQL_Highworkload"></a>

If your source PostgreSQL has a high workload, check the following to reduce latency:
+ You may experience high latency when using the `test_decoding` plugin while migrating a subset of tables from the source database with a high transactions per second (TPS) value. This is because the `test_decoding` plugin sends all database changes to the replication instance which DMS then filters, based on the task’s table mapping. Events for tables that aren’t part of the task’s table mapping can increase source latency.
+ Check TPS throughput using one of the following methods.
  + For Aurora PostgreSQL sources, use the `CommitThroughput` CloudWatch metric.
  + For PostgreSQL running on Amazon RDS or on-premises, run the following query using a psql client version 11 or higher (press **Enter** during the query to advance the results):

    ```
    SELECT SUM(xact_commit)::numeric as temp_num_tx_ini FROM pg_stat_database; \gset
    select pg_sleep(60);
    SELECT SUM(xact_commit)::numeric as temp_num_tx_final FROM pg_stat_database; \gset
    select (:temp_num_tx_final - :temp_num_tx_ini)/ 60.0 as "Transactions Per Second";
    ```
+ To reduce latency when using the `test_decoding` plugin, consider using the `pglogical` plugin instead. Unlike the `test_decoding` plugin, the `pglogical` plugin filters write ahead log (WAL) changes at the source, and only sends relevant changes to the replication instance. For information about using the `pglogical` plugin with AWS DMS, see [Configuring the pglogical plugin](CHAP_Source.PostgreSQL.md#CHAP_Source.PostgreSQL.Security.Pglogical).
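The TPS arithmetic behind the query above can be sketched as follows: sample the committed-transaction counter twice, 60 seconds apart, and divide the delta by the interval (an illustrative helper, not a DMS API):

```python
# Sketch of the transactions-per-second calculation used by the psql
# query above (illustrative helper, not a DMS or PostgreSQL API).
def transactions_per_second(commits_before: int, commits_after: int,
                            interval_seconds: float = 60.0) -> float:
    """Average commit rate between two samples of SUM(xact_commit)."""
    return (commits_after - commits_before) / interval_seconds

# If the counter grows from 1,000,000 to 1,180,000 over 60 seconds,
# the source is committing 3,000 transactions per second.
print(transactions_per_second(1_000_000, 1_180_000))  # 3000.0
```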

## High network throughput
<a name="CHAP_Troubleshooting_Latency_Source_PostgreSQL_Highnetwork"></a>

Your replication may have high network bandwidth use when using the `test_decoding` plugin, especially during high-volume transactions. This is because the `test_decoding` plugin processes changes, and converts them into a human-readable format that is larger than the original binary format.

To improve performance, consider using the `pglogical` plugin instead, which is a binary plugin. Unlike the `test_decoding` plugin, the `pglogical` plugin generates binary format output, resulting in compressed write ahead log (WAL) stream changes.

## Spill files in Aurora PostgreSQL
<a name="CHAP_Troubleshooting_Latency_Source_PostgreSQL_Spill"></a>

In PostgreSQL version 13 and higher, the `logical_decoding_work_mem` parameter determines the memory allocation for decoding and streaming. For more information about the `logical_decoding_work_mem` parameter, see [Resource Consumption](https://www.postgresql.org/docs/13/runtime-config-resource.html#GUC-LOGICAL-DECODING-WORK-MEM) in the [PostgreSQL 13.13 Documentation](https://www.postgresql.org/docs/13/).

Logical replication accumulates changes for all transactions in memory until those transactions commit. If the amount of data stored across all transactions exceeds the amount specified by the database parameter `logical_decoding_work_mem`, then DMS spills the transaction data to disk to release memory for new decoding data.

Long running transactions, or many subtransactions, may result in DMS consuming increased logical decoding memory. This increased memory use results in DMS creating spill files on disk, which causes high source latency during replication.

To reduce the impact of an increase in the source workload, do the following:
+ Reduce long-running transactions.
+ Reduce the number of sub-transactions.
+ Avoid performing operations that generate a large burst of log records, such as deleting or updating an entire table in a single transaction. Perform operations in smaller batches instead.

You can use the following CloudWatch metrics to monitor the workload on the source:
+ `TransactionLogsDiskUsage`: The number of bytes currently occupied by the logical WAL. This value increases monotonically if logical replication slots are unable to keep up with the pace of new writes, or if any long running transactions prevent garbage collection of older files.
+ `ReplicationSlotDiskUsage`: The amount of disk space the logical replication slots currently use.

You can reduce source latency by tuning the `logical_decoding_work_mem` parameter. The default value for this parameter is 64 MB. This parameter limits the amount of memory used by each logical streaming replication connection. We recommend setting the `logical_decoding_work_mem` value significantly higher than the `work_mem` value to reduce the amount of decoded changes that DMS writes to disk.
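For example, on a self-managed PostgreSQL instance you might check and raise the value as follows (a sketch; on Amazon RDS and Aurora PostgreSQL, change `logical_decoding_work_mem` in the DB parameter group instead, because `ALTER SYSTEM` isn't available):

```
SHOW logical_decoding_work_mem;
ALTER SYSTEM SET logical_decoding_work_mem = '256MB';
SELECT pg_reload_conf();
```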

We recommend that you periodically check for spill files, particularly during periods of heavy migration activity or latency. If DMS is creating a significant number of spill files, this means that logical decoding isn't operating efficiently, which can increase latency. To mitigate this, increase the `logical_decoding_work_mem` parameter value. 

You can check the current transaction overflow with the `aurora_stat_file` function. For more information, see [Adjusting working memory for logical decoding](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.Tuning-memory-parameters.html#AuroraPostgreSQL.BestPractices.Tuning-memory-parameters.logical-decoding-work-mem) in the *Amazon Aurora User Guide*.



# SQL Server Endpoint Troubleshooting
<a name="CHAP_Troubleshooting_Latency_Source_SQLServer"></a>

This section contains replication scenarios specific to SQL Server. To determine what changes to replicate from SQL Server, AWS DMS reads the transaction logs and runs periodic scans on the source database. Replication latency usually results from SQL Server throttling these scans because of resource constraints. It can also result from a significant increase in the number of events written to the transaction log in a short time. 

**Topics**
+ [Index rebuilds](#CHAP_Troubleshooting_Latency_Source_SQLServer_Indexrebuilds)
+ [Large transactions](#CHAP_Troubleshooting_Latency_Source_SQLServer_Largetransactions)
+ [Misconfigured MS-CDC polling interval for Amazon RDS SQL Server](#CHAP_Troubleshooting_Latency_Source_SQLServer_MisconfiguredCDC)
+ [Multiple CDC tasks replicating from the same source database](#CHAP_Troubleshooting_Latency_Source_SQLServer_MultipleCDC)
+ [Transaction log backup processing for RDS for SQL Server](#CHAP_Troubleshooting_Latency_Source_SQLServer_backup)

## Index rebuilds
<a name="CHAP_Troubleshooting_Latency_Source_SQLServer_Indexrebuilds"></a>

When SQL Server rebuilds a large index, it uses a single transaction. This generates a lot of events, and can use up a large amount of log space if SQL Server rebuilds several indexes at once. When this happens, you can expect brief replication latency spikes. If your SQL Server source has sustained latency spikes, check the following:
+ First, check the time period of the latency spikes using either the `CDCLatencySource` and `CDCLatencyTarget` CloudWatch metrics, or by checking Throughput Monitoring messages in the task logs. For information about CloudWatch metrics for AWS DMS, see [Replication task metrics](CHAP_Monitoring.md#CHAP_Monitoring.Metrics.Task). 
+ Check if the size of the active transaction logs or log backups increased during the latency spike. Also check if a maintenance job or a rebuild ran during that time. For information about checking transaction log size, see [ Monitor log space use](https://learn.microsoft.com/en-us/sql/relational-databases/logs/manage-the-size-of-the-transaction-log-file?view=sql-server-ver16#MonitorSpaceUse) in the [SQL Server technical documentation](https://learn.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver16).
+ Verify that your maintenance plan follows SQL Server best practices. For information about SQL Server maintenance best practices, see [ Index maintenance strategy](https://learn.microsoft.com/en-us/sql/relational-databases/indexes/reorganize-and-rebuild-indexes?view=sql-server-ver16#index-maintenance-strategy) in the [SQL Server technical documentation](https://learn.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver16).
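To check transaction log use directly on the source, you can run queries like the following sketch. `sys.dm_db_log_space_usage` reports on the current database, and `DBCC SQLPERF(LOGSPACE)` reports on all databases; both are standard SQL Server commands.

```
-- Log space use for the current database (SQL Server 2012 and later).
SELECT total_log_size_in_bytes / 1048576.0 AS total_log_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Log space use for all databases on the instance.
DBCC SQLPERF(LOGSPACE);
```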

To fix latency issues during index rebuilds, try the following:
+ Use the `BULK_LOGGED` recovery model for offline rebuilds to reduce the events a task has to process.
+ If possible, stop the task during index rebuilds. Or, try to schedule index rebuilds during non-peak hours to mitigate the impact of a latency spike.
+ Try to identify resource bottlenecks that are slowing DMS reads, such as disk latency or I/O throughput, and address them.
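For example, switching to the `BULK_LOGGED` recovery model around an offline rebuild might look like the following sketch. The database and table names are placeholders; after switching back to `FULL`, take a log backup to restart the log chain.

```
-- Reduce logged events during an offline index rebuild (names are placeholders).
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;

-- Return to full recovery, then take a log backup to restart the log chain.
ALTER DATABASE MyDatabase SET RECOVERY FULL;
```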

## Large transactions
<a name="CHAP_Troubleshooting_Latency_Source_SQLServer_Largetransactions"></a>

Transactions with a lot of events, or long-running transactions, cause the transaction log to grow. This causes DMS reads to take longer, resulting in latency. This is similar to the effect index rebuilds have on replication performance.

You may have difficulty identifying this issue if you're not familiar with the typical workload on the source database. To troubleshoot this issue, do the following:
+ First, identify the time that latency spiked using either the `ReadThroughput` and `WriteThroughput` CloudWatch metrics, or by checking Throughput Monitoring messages in the task logs.
+ Check if there are any long-running queries on the source database during the latency spike. For information about long-running queries, see [ Troubleshoot slow-running queries in SQL Server ](https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/performance/troubleshoot-slow-running-queries) in the [SQL Server technical documentation](https://learn.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver16).
+ Check if the size of the active transaction logs or the log backups has increased. For more information, see [ Monitor log space use ](https://learn.microsoft.com/en-us/sql/relational-databases/logs/manage-the-size-of-the-transaction-log-file?view=sql-server-ver16#MonitorSpaceUse) in the [SQL Server technical documentation](https://learn.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver16).
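To look for a long-running transaction that is growing the log, you can run checks like the following sketch on the source database, using standard SQL Server commands.

```
-- Oldest active transaction in the current database.
DBCC OPENTRAN;

-- Active requests, longest-running first.
SELECT session_id, start_time, status, command, total_elapsed_time
FROM sys.dm_exec_requests
ORDER BY total_elapsed_time DESC;
```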

To fix this issue, do one of the following:
+ The best fix is to restructure your transactions on the application side so that they complete quickly. 
+ If you can't restructure your transactions, a short-term workaround is to check for resource bottlenecks such as disk waits or CPU contention. If you find bottlenecks on your source database, you can reduce latency by increasing disk, CPU, and memory resources for the source database. This reduces contention for system resources, allowing DMS queries to complete faster.

## Misconfigured MS-CDC polling interval for Amazon RDS SQL Server
<a name="CHAP_Troubleshooting_Latency_Source_SQLServer_MisconfiguredCDC"></a>

A misconfigured polling interval setting on Amazon RDS instances can cause the transaction log to grow. This is because replication prevents log truncation. While tasks that are running might continue replicating with minimal latency, stopping and resuming tasks, or starting CDC-only tasks, can cause task failures. These are due to timeouts while scanning the large transaction log.

To troubleshoot a misconfigured polling interval, do the following:
+ Check if the active transaction log size is increasing, and if log usage is close to 100 percent. For more information, see [ Monitor log space use ](https://learn.microsoft.com/en-us/sql/relational-databases/logs/manage-the-size-of-the-transaction-log-file?view=sql-server-ver16#MonitorSpaceUse) in the [SQL Server technical documentation](https://learn.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver16).
+ Check if log truncation is delayed with a `log_reuse_wait_desc` value of `REPLICATION`. For more information, see [ The Transaction Log (SQL Server) ](https://learn.microsoft.com/en-us/sql/relational-databases/logs/the-transaction-log-sql-server?view=sql-server-ver16#FactorsThatDelayTruncation) in the [SQL Server technical documentation](https://learn.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver16).
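For example, you can check what is delaying log truncation with a query like the following sketch; the database name is a placeholder.

```
-- A log_reuse_wait_desc of REPLICATION means the log is being held
-- for replication, including MS-CDC.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';
```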

If you find issues with any of the items in the previous list, tune the MS-CDC polling interval. For information about tuning the polling interval, see [Recommended settings when using RDS for SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.CDC.md#CHAP_Source.SQLServer.Configuration.Settings). 
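Tuning the interval uses the standard MS-CDC capture job procedures, as in the following sketch. The interval value shown is a placeholder; use the value recommended in the settings topic linked above, and restart the capture job so the change takes effect.

```
-- Change the MS-CDC capture job polling interval, in seconds (value is a placeholder).
EXEC sys.sp_cdc_change_job @job_type = 'capture', @pollinginterval = 86399;

-- Restart the capture job so the new interval takes effect.
EXEC sys.sp_cdc_stop_job @job_type = 'capture';
EXEC sys.sp_cdc_start_job @job_type = 'capture';
```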

## Multiple CDC tasks replicating from the same source database
<a name="CHAP_Troubleshooting_Latency_Source_SQLServer_MultipleCDC"></a>

During the full load phase, we recommend splitting tables across tasks to improve performance, to separate dependent tables logically, and to mitigate the impact of a task failure. However, during the CDC phase, we recommend consolidating tasks to minimize DMS scans. During the CDC phase, each DMS task scans the transaction logs for new events several times a minute. Since each task runs independently, every task scans each transaction log individually. This increases disk and CPU usage on the source SQL Server database. As a result, a large number of tasks running in parallel can cause SQL Server to throttle DMS reads, leading to increased latency.

You may have difficulty identifying this issue if multiple tasks start gradually. The most common symptom of this issue is that most task scans start to take longer, leading to higher latency for those tasks. SQL Server prioritizes a few of the task scans, so a few of the tasks show normal latency. To troubleshoot this issue, check the `CDCLatencySource` metric for all of your tasks. If some of the tasks have an increasing `CDCLatencySource`, while a few tasks have a low `CDCLatencySource`, it is likely that SQL Server is throttling DMS reads for some of your tasks.

If SQL Server is throttling your task reads during CDC, consolidate your tasks to minimize the number of DMS scans. The maximum number of tasks that can connect to your source database without creating contention depends on factors such as the source database capacity, the rate of transaction log growth, or the number of tables. To determine the ideal number of tasks for your replication scenario, test replication in a test environment similar to your production environment.

## Transaction log backup processing for RDS for SQL Server
<a name="CHAP_Troubleshooting_Latency_Source_SQLServer_backup"></a>

AWS DMS 3.5.3 and above support replicating from RDS for SQL Server log backups. Replicating events from the backup logs on RDS instances is slower than replicating events from the active transaction log. This is because DMS requests access to backups serially to ensure that it maintains the transaction sequence, and to minimize the risk of the Amazon RDS instance storage filling up. Moreover, at the Amazon RDS end, the time taken to make the backups available to DMS varies depending on the size of the log backup, and the load on the RDS for SQL Server instance.

Because of these constraints, we recommend that you set the ECA `ActivateSafeguard` to `true`. This ensures that transactions are not backed up while the DMS task is reading from the active transaction log. This setting also prevents Amazon RDS from archiving transactions in the active log while DMS is reading transactions from the backup, thereby eliminating the possibility that DMS cannot catch up to the active log. Note that this may cause the active log size to grow while the task is catching up. Ensure that your instance has enough storage to keep it from running out of space.
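For example, you might set the attribute on the source endpoint with the AWS CLI, as in this sketch. The endpoint ARN is a placeholder, and any extra connection attributes you already use should be preserved in the same semicolon-separated string.

```
aws dms modify-endpoint \
    --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE \
    --extra-connection-attributes "ActivateSafeguard=true"
```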

For a CDC-only task replicating from RDS for SQL Server sources, use a native CDC start position rather than a native CDC start time if possible. This is because DMS relies on system tables to identify the starting point for the native start position, rather than scanning individual log backups when you specify a native start time.

# Troubleshooting target latency issues
<a name="CHAP_Troubleshooting_Latency_Target"></a>

This section contains scenarios that can contribute to target latency.

**Topics**
+ [Indexing issues](#CHAP_Troubleshooting_Latency_Target_Indexing)
+ [SORTER message in task log](#CHAP_Troubleshooting_Latency_Target_Sorter)
+ [Database locking](#CHAP_Troubleshooting_Latency_Target_Locking)
+ [Slow LOB lookups](#CHAP_Troubleshooting_Latency_Target_LOB)
+ [Multi-AZ, audit logging and backups](#CHAP_Troubleshooting_Latency_Target_MultiAZ)

## Indexing issues
<a name="CHAP_Troubleshooting_Latency_Target_Indexing"></a>

During the CDC phase, AWS DMS replicates changes on the source by executing DML statements (insert, update, and delete) on the target. For heterogeneous migrations using DMS, differences in index optimizations on the source and target can cause writes to the target to take longer. This results in target latency and performance issues.

To troubleshoot indexing issues, do the following. The procedures for these steps vary for different database engines. 
+ Monitor the query time for your target database. Comparing the query execution time on the target and source can indicate which indexes need optimization.
+ Enable logging for slow-running queries.
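For example, on a MySQL-compatible target you might enable slow query logging as in this sketch; the threshold is a placeholder, and other engines have their own equivalents.

```
-- Enable the slow query log on a MySQL-compatible target.
SET GLOBAL slow_query_log = 'ON';
-- Log queries that run longer than 2 seconds (placeholder threshold).
SET GLOBAL long_query_time = 2;
```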

To fix indexing issues for long-running replications, do the following:
+ Tune the indexes on your source and target databases so that the query execution time is similar on the source and the target.
+ Compare the secondary indexes used in DML queries for the source and the target. Make sure that DML performance on the target is comparable to or better than the source DML performance.

Note that the procedure for optimizing indexes is specific to your database engine. There is no DMS feature for tuning source and target indexes.

## SORTER message in task log
<a name="CHAP_Troubleshooting_Latency_Target_Sorter"></a>

If a target endpoint can't keep up with the volume of changes that AWS DMS writes to it, the task caches the changes on the replication instance. If the cache grows larger than an internal threshold, the task stops reading further changes from the source. DMS does this to prevent the replication instance from running out of storage, or the task being stuck while reading a large volume of pending events. 

To troubleshoot this issue, check the CloudWatch logs for a message similar to either of the following:

```
[SORTER ]I: Reading from source is paused. Total disk usage exceeded the limit 90% (sorter_transaction.c:110)
[SORTER ]I: Reading from source is paused. Total storage used by swap files exceeded the limit 1048576000 bytes  (sorter_transaction.c:110)
```

If your logs contain a message similar to the first message, disable any trace logging for the task, and increase the replication instance storage. For information about increasing replication instance storage, see [Modifying a replication instance](CHAP_ReplicationInstance.Modifying.md).

If your logs contain a message similar to the second message, do the following:
+ Move tables with numerous transactions or long-running DML operations to a separate task, if they don’t have any dependencies on other tables in the task.
+ Increase the `MemoryLimitTotal` and `MemoryKeepTime` settings to hold the transaction for a longer duration in memory. This won't help if the latency is sustained, but it can help keep latency down during short bursts of transactional volume. For information about these task settings, see [Change processing tuning settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.md).
+ Evaluate if you can use batch apply for your transaction by setting `BatchApplyEnabled` to `true`. For information about the `BatchApplyEnabled` setting, see [Target metadata task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.md).
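In the task settings JSON, the settings from the preceding list might look like the following sketch. The values shown are placeholders to illustrate the shape of the settings, not recommendations.

```
{
  "ChangeProcessingTuning": {
    "MemoryLimitTotal": 2048,
    "MemoryKeepTime": 120
  },
  "TargetMetadata": {
    "BatchApplyEnabled": true
  }
}
```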

## Database locking
<a name="CHAP_Troubleshooting_Latency_Target_Locking"></a>

If an application accesses a database that AWS DMS is using as a replication target, the application may lock a table that DMS is trying to access. This creates lock contention. Because DMS writes changes to the target database in the order they occurred on the source, a delay in writing to one table due to lock contention delays writes to all tables. 

To troubleshoot this issue, query the target database to check if a lock contention is blocking DMS write transactions. If the target database is blocking DMS write transactions, do one or more of the following:
+ Restructure your queries to commit changes more frequently.
+ Modify your lock timeout settings.
+ Partition your tables to minimize lock contentions.
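To check for blocking on a SQL Server target, for example, you can run a query like the following sketch against the standard `sys.dm_exec_requests` view; other engines expose similar views.

```
-- Sessions currently blocked, and the sessions blocking them.
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```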

Note that the procedure for resolving lock contention is specific to your database engine. There is no DMS feature for tuning lock contention.

## Slow LOB lookups
<a name="CHAP_Troubleshooting_Latency_Target_LOB"></a>

When AWS DMS replicates a large object (LOB) column, it performs a lookup on the source just before writing changes to the target. This lookup normally doesn't cause any latency on the target, but if the source database delays the lookup due to locking, target latency may spike. 

This issue is normally difficult to diagnose. To troubleshoot this issue, enable detailed debugging on the task logs, and compare the timestamps of the DMS LOB lookup calls. For information about enabling detailed debugging, see [Viewing and managing AWS DMS task logs](CHAP_Monitoring.md#CHAP_Monitoring.ManagingLogs).

To fix this issue, try the following:
+ Improve the SELECT query performance on the source database.
+ Tune the DMS LOB settings. For information about tuning the LOB settings, see [Migrating large binary objects (LOBs)](CHAP_BestPractices.md#CHAP_BestPractices.LOBS).
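In the task settings JSON, LOB handling is controlled under `TargetMetadata`, as in the following sketch. The values shown are illustrative placeholders; choose a `LobMaxSize` based on the largest LOBs in your workload.

```
{
  "TargetMetadata": {
    "FullLobMode": false,
    "LimitedSizeLobMode": true,
    "LobMaxSize": 32,
    "LobChunkSize": 64
  }
}
```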

## Multi-AZ, audit logging and backups
<a name="CHAP_Troubleshooting_Latency_Target_MultiAZ"></a>

For Amazon RDS targets, target latency can increase in the following cases:
+ During backups
+ After enabling Multi-AZ deployments
+ After enabling database logging, such as audit or slow query logs

These issues are normally difficult to diagnose. To troubleshoot these issues, monitor latency for periodic spikes during Amazon RDS maintenance windows or periods of heavy database loads.

To fix these issues, try the following:
+ For a short-term migration, disable Multi-AZ, backups, or logging if possible.
+ Reschedule your maintenance windows for periods of low activity.

# Working with diagnostic support scripts in AWS DMS
<a name="CHAP_SupportScripts"></a>

If you encounter an issue when working with AWS DMS, your support engineer might need more information about either your source or target database. We want to make sure that AWS Support gets as much of the required information as possible in the shortest possible time. Therefore, we developed scripts to query this information for several of the major relational database engines.

If a support script is available for your database, you can download it using the link in the corresponding script topic described following. After verifying and reviewing the script (described following), you can run it according to the procedure described in the script topic. When the script run is complete, you can upload its output to your AWS Support case (again, described following).

Before running the script, you can detect any errors that might have been introduced when downloading or storing the support script. To do this, compare the checksum for the script file with a value provided by AWS. AWS uses the SHA256 algorithm for the checksum.

**To verify the support script file using a checksum**

1. Open the latest checksum file provided to verify these support scripts at [https://d2pwp9zz55emqw.cloudfront.net/sha256Check.txt](https://d2pwp9zz55emqw.cloudfront.net/sha256Check.txt). For example, the file might have content like the following.

   ```
   MYSQL  dfafd0d511477c699f96c64693ad0b1547d47e74d5c5f2f2025b790b1422e3c8
   ORACLE  6c41ebcfc99518cfa8a10cb2ce8943b153b2cc7049117183d0b5de3d551bc312
   POSTGRES  6ccd274863d14f6f3146fbdbbba43f2d8d4c6a4c25380d7b41c71883aa4f9790
   SQL_SERVER  971a6f2c46aec8d083d2b3b6549b1e9990af3a15fe4b922e319f4fdd358debe7
   ```

1. Run the SHA256 validation command for your operating system in the directory that contains the support file. For example, on the macOS operating system you can run the following command on an Oracle support script described later in this topic.

   ```
   shasum -a 256 awsdms_support_collector_oracle.sql
   ```

1. Compare the results of the command with the value shown in the latest `sha256Check.txt` file that you opened. The two values should match. If they don't, contact your support engineer about the mismatch and how you can obtain a clean support script file.
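To automate the comparison, you can use a small shell helper like the following sketch. The `verify_checksum` function name is hypothetical, and the hash in the usage comment is a placeholder for the current value from `sha256Check.txt`.

```
# verify_checksum <file> <expected-sha256>
# Compares a file's SHA-256 against the value from sha256Check.txt.
verify_checksum() {
  # Use shasum (macOS) if available, otherwise sha256sum (Linux).
  actual=$( (shasum -a 256 "$1" 2>/dev/null || sha256sum "$1") | awk '{print $1}')
  if [ "$actual" = "$2" ]; then
    echo "Checksum OK"
  else
    echo "Checksum MISMATCH" >&2
    return 1
  fi
}

# Example (replace the hash with the current value from sha256Check.txt):
# verify_checksum awsdms_support_collector_oracle.sql 6c41ebcfc99518cf...
```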

If you have a clean support script file, before running the script make sure to read and understand the SQL from both a performance and security perspective. If you aren't comfortable running any of the SQL in this script, you can comment out or remove the problem SQL. You can also consult with your support engineer about any acceptable workarounds.

Upon successful completion and unless otherwise noted, the script returns output in a readable HTML format. The script is designed to exclude from this HTML any data or security details that might compromise your business. It also makes no modifications to your database or its environment. However, if you find any information in the HTML that you are uncomfortable sharing, feel free to remove the problem information before uploading the HTML. When the HTML is acceptable, upload it using the **Attachments** in the **Case details** of your support case.

Each of the following topics describes the scripts available for a supported AWS DMS database and how to run them. Your support engineer will direct you to a specific script documented following.

**Topics**
+ [Oracle diagnostic support scripts](CHAP_SupportScripts.Oracle.md)
+ [SQL Server diagnostic support scripts](CHAP_SupportScripts.SQLServer.md)
+ [Diagnostic support scripts for MySQL-compatible databases](CHAP_SupportScripts.MySQL.md)
+ [PostgreSQL diagnostic support scripts](CHAP_SupportScripts.PostgreSQL.md)

# Oracle diagnostic support scripts
<a name="CHAP_SupportScripts.Oracle"></a>

Following, you can find the diagnostic support scripts available to analyze an on-premises or Amazon RDS for Oracle database in your AWS DMS migration configuration. These scripts work with either a source or target endpoint. The scripts are all written to run in the SQL\*Plus command-line utility. For more information on using this utility, see [A Using SQL Command Line](https://docs.oracle.com/cd/B25329_01/doc/appdev.102/b25108/xedev_sqlplus.htm) in the Oracle documentation.

Before running the script, ensure that the user account that you use has the necessary permissions to access your Oracle database. The permissions settings shown assume a user created as follows.

```
CREATE USER script_user IDENTIFIED BY password;
```

For an on-premises database, set the minimum permissions as shown following for `script_user`.

```
GRANT CREATE SESSION TO script_user;
GRANT SELECT on V$DATABASE to script_user;
GRANT SELECT on V$VERSION to script_user;
GRANT SELECT on GV$SGA to script_user;
GRANT SELECT on GV$INSTANCE to script_user;
GRANT SELECT on GV$DATAGUARD_CONFIG to script_user;
GRANT SELECT on GV$LOG to script_user;
GRANT SELECT on DBA_TABLESPACES to script_user;
GRANT SELECT on DBA_DATA_FILES to script_user;
GRANT SELECT on DBA_SEGMENTS to script_user;
GRANT SELECT on DBA_LOBS to script_user;
GRANT SELECT on V$ARCHIVED_LOG to script_user;
GRANT SELECT on DBA_TAB_MODIFICATIONS to script_user;
GRANT SELECT on DBA_TABLES to script_user;
GRANT SELECT on DBA_TAB_PARTITIONS to script_user;
GRANT SELECT on DBA_MVIEWS to script_user;
GRANT SELECT on DBA_OBJECTS to script_user;
GRANT SELECT on DBA_TAB_COLUMNS to script_user;
GRANT SELECT on DBA_LOG_GROUPS to script_user;
GRANT SELECT on DBA_LOG_GROUP_COLUMNS to script_user;
GRANT SELECT on V$ARCHIVE_DEST to script_user;
GRANT SELECT on DBA_SYS_PRIVS to script_user;
GRANT SELECT on DBA_TAB_PRIVS to script_user;
GRANT SELECT on DBA_TYPES to script_user;
GRANT SELECT on DBA_CONSTRAINTS to script_user;
GRANT SELECT on V$TRANSACTION to script_user;
GRANT SELECT on GV$ASM_DISK_STAT to script_user;
GRANT SELECT on GV$SESSION to script_user;
GRANT SELECT on GV$SQL to script_user;
GRANT SELECT on DBA_ENCRYPTED_COLUMNS to script_user;
GRANT SELECT on DBA_PDBS to script_user;

GRANT EXECUTE on dbms_utility to script_user;
```

For an Amazon RDS database, set the minimum permissions as shown following.

```
GRANT CREATE SESSION TO script_user;
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$VERSION','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('GV_$SGA','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('GV_$INSTANCE','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('GV_$DATAGUARD_CONFIG','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('GV_$LOG','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TABLESPACES','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_DATA_FILES','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_SEGMENTS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_LOBS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TAB_MODIFICATIONS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TABLES','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TAB_PARTITIONS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_MVIEWS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_OBJECTS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TAB_COLUMNS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_LOG_GROUPS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_LOG_GROUP_COLUMNS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVE_DEST','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_SYS_PRIVS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TAB_PRIVS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TYPES','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_CONSTRAINTS','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('V_$TRANSACTION','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('GV_$ASM_DISK_STAT','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('GV_$SESSION','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('GV_$SQL','script_user','SELECT');
exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_ENCRYPTED_COLUMNS','script_user','SELECT');

exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_PDBS','script_user','SELECT');

exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_UTILITY','script_user','EXECUTE');
```

Following, you can find descriptions of how to download, review, and run each SQL\*Plus support script available for Oracle. You can also find how to review and upload the output to your AWS Support case.

**Topics**
+ [awsdms\_support\_collector\_oracle.sql script](#CHAP_SupportScripts.Oracle.Awsdms_Support_Collector_Oracle_Script)

## awsdms\_support\_collector\_oracle.sql script
<a name="CHAP_SupportScripts.Oracle.Awsdms_Support_Collector_Oracle_Script"></a>

Download the [https://d2pwp9zz55emqw.cloudfront.net/scripts/awsdms_support_collector_oracle.sql](https://d2pwp9zz55emqw.cloudfront.net/scripts/awsdms_support_collector_oracle.sql) script.

This script collects information about your Oracle database configuration. Remember to verify the checksum on the script, and if the checksum verifies, review the SQL code in the script to comment out any of the code that you are uncomfortable running. After you are satisfied with the integrity and content of the script, you can run it.

**To run the script and upload the results to your support case**

1. Run the script from your database environment using the following SQL\*Plus command line.

   ```
   SQL> @awsdms_support_collector_oracle.sql
   ```

   The script displays a brief description and a prompt to either continue or abort the run. Press [Enter] to continue.

1. At the following prompt, enter the name of only one of the schemas that you want to migrate.

1. At the following prompt, enter the name of the user (*script\_user*) that you have defined to connect to the database.

1. At the following prompt, enter the number of days of data you want to examine, or accept the default. The script then collects the specified data from your database.

   After the script completes, it displays the name of the output HTML file, for example `dms_support_oracle-2020-06-22-13-20-39-ORCL.html`. The script saves this file in your working directory.

1. Review this HTML file and remove any information that you are uncomfortable sharing. When the HTML is acceptable for you to share, upload the file to your AWS Support case. For more information on uploading this file, see [Working with diagnostic support scripts in AWS DMS](CHAP_SupportScripts.md).

# SQL Server diagnostic support scripts
<a name="CHAP_SupportScripts.SQLServer"></a>

Following, you can find a description of the diagnostic support scripts available to analyze an on-premises or Amazon RDS for SQL Server database in your AWS DMS migration configuration. These scripts work with either a source or target endpoint. For an on-premises database, run these scripts in the sqlcmd command-line utility. For more information on using this utility, see [sqlcmd - Use the utility](https://docs.microsoft.com/en-us/sql/ssms/scripting/sqlcmd-use-the-utility?view=sql-server-ver15) in the Microsoft documentation. 

For an Amazon RDS database, you can't connect using the sqlcmd command-line utility. Instead, run these scripts using any client tool that connects to Amazon RDS SQL Server.

Before running the script, ensure that the user account that you use has the necessary permissions to access your SQL Server database. For both an on-premises and an Amazon RDS database, you can use the same permissions you use to access your SQL Server database without the `SysAdmin` role.

**Topics**
+ [Setting up minimum permissions for an on-premises SQL Server database](#CHAP_SupportScripts.SQLServer.onprem)
+ [Setting up minimum permissions for an Amazon RDS SQL Server database](#CHAP_SupportScripts.SQLServer.rds)
+ [SQL Server Support Scripts](#CHAP_SupportScripts.SQLServer.Scripts)

## Setting up minimum permissions for an on-premises SQL Server database
<a name="CHAP_SupportScripts.SQLServer.onprem"></a>

**To set up the minimum permissions to run for an on-premises SQL Server database**

1. Create a new SQL Server account with password authentication using SQL Server Management Studio (SSMS), for example `on-prem-user`.

1. In the **User Mappings** section of SSMS, choose the **MSDB** and **MASTER** databases (which gives public permission), and assign the `DB_OWNER` role to the database where you want to run the script.

1. Open the context (right-click) menu for the new account, and choose **Security** to explicitly grant the `Connect SQL` privilege. 

1. Run the grant commands following.

   ```
   GRANT VIEW SERVER STATE TO on-prem-user;
   USE MSDB;
   GRANT SELECT ON MSDB.DBO.BACKUPSET TO on-prem-user;
   GRANT SELECT ON MSDB.DBO.BACKUPMEDIAFAMILY TO on-prem-user;
   GRANT SELECT ON MSDB.DBO.BACKUPFILE TO on-prem-user;
   ```

## Setting up minimum permissions for an Amazon RDS SQL Server database
<a name="CHAP_SupportScripts.SQLServer.rds"></a>

**To run with the minimum permissions for an Amazon RDS SQL Server database**

1. Create a new SQL Server account with password authentication using SQL Server Management Studio (SSMS), for example `rds-user`.

1. In the **User Mappings** section of SSMS, choose the **MSDB** database (which gives public permission), and assign the `DB_OWNER` role to the database where you want to run the script.

1. Open the context (right-click) menu for the new account, and choose **Security** to explicitly grant the `Connect SQL` privilege.

1. Run the grant commands following.

   ```
   GRANT VIEW SERVER STATE TO rds-user;
   USE MSDB;
   GRANT SELECT ON MSDB.DBO.BACKUPSET TO rds-user;
   GRANT SELECT ON MSDB.DBO.BACKUPMEDIAFAMILY TO rds-user;
   GRANT SELECT ON MSDB.DBO.BACKUPFILE TO rds-user;
   ```

## SQL Server Support Scripts
<a name="CHAP_SupportScripts.SQLServer.Scripts"></a>

The following topics describe how to download, review, and run each support script available for SQL Server. They also describe how to review and upload the script output to your AWS Support case.

**Topics**
+ [awsdms\_support\_collector\_sql\_server.sql script](#CHAP_SupportScripts.SQLServer.Awsdms_Support_Collector_SQLServer_Script)

### awsdms\_support\_collector\_sql\_server.sql script
<a name="CHAP_SupportScripts.SQLServer.Awsdms_Support_Collector_SQLServer_Script"></a>

Download the [https://d2pwp9zz55emqw.cloudfront.net/scripts/awsdms_support_collector_sql_server.sql](https://d2pwp9zz55emqw.cloudfront.net/scripts/awsdms_support_collector_sql_server.sql) script.

**Note**  
Run this SQL Server diagnostic support script on SQL Server 2014 and higher versions only.

This script collects information about your SQL Server database configuration. Remember to verify the checksum on the script, and if the checksum verifies, review the SQL code in the script to comment out any of the code that you are uncomfortable running. After you are satisfied with the integrity and content of the script, you can run it.

**To run the script for an on-premises SQL Server database**

1. Run the script using the following sqlcmd command line.

   ```
   sqlcmd -Uon-prem-user -Ppassword -SDMS-SQL17AG-N1 -y 0 -iC:\Users\admin\awsdms_support_collector_sql_server.sql -oC:\Users\admin\DMS_Support_Report_SQLServer.html -dsqlserverdb01
   ```

   The specified sqlcmd command parameters include the following:
   + `-U` – Database user name.
   + `-P` – Database user password.
   + `-S` – SQL Server database server name.
   + `-y` – Maximum width of columns output from the sqlcmd utility. A value of 0 specifies columns of unlimited width.
   + `-i` – Path of the support script to run, in this case `awsdms_support_collector_sql_server.sql`.
   + `-o` – Path of the output HTML file, with a file name that you specify, containing the collected database configuration information.
   + `-d` – SQL Server database name.

1. After the script completes, review the output HTML file and remove any information that you are uncomfortable sharing. When the HTML is acceptable for you to share, upload the file to your AWS Support case. For more information on uploading this file, see [Working with diagnostic support scripts in AWS DMS](CHAP_SupportScripts.md).

With Amazon RDS for SQL Server, you can't connect using the sqlcmd command line utility, so use the following procedure.

**To run the script for an RDS SQL Server database**

1. Run the script using any client tool that allows you to connect to RDS SQL Server as the `Master` user and save the output as an HTML file.

1. Review the output HTML file and remove any information that you are uncomfortable sharing. When the HTML is acceptable for you to share, upload the file to your AWS Support case. For more information on uploading this file, see [Working with diagnostic support scripts in AWS DMS](CHAP_SupportScripts.md).

# Diagnostic support scripts for MySQL-compatible databases
<a name="CHAP_SupportScripts.MySQL"></a>

Following, you can find the diagnostic support scripts available to analyze an on-premises or Amazon RDS for MySQL-compatible database in your AWS DMS migration configuration. These scripts work with either a source or target endpoint. The scripts are all written to run in the MySQL command-line client. 

For information about installing the MySQL client, see [ Installing MySQL Shell](https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-install.html) in the MySQL documentation. For information about using the MySQL client, see [ Using MySQL Shell Commands](https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-configuring.html) in the MySQL documentation.

Before running a script, ensure that the user account that you use has the necessary permissions to access your MySQL-compatible database. Use the following procedure to create a user account and provide the minimum permissions needed to run this script.

**To set up a user account with the minimum permissions to run these scripts**

1. Create the user to run the scripts.

   ```
   create user 'username'@'hostname' identified by 'password';
   ```

1. Grant the `select` command on databases to analyze them.

   ```
   grant select on database-name.* to 'username'@'hostname';
   grant replication client on *.* to 'username'@'hostname';
   ```

1. For an Amazon RDS for MySQL database, grant the `execute` command on the `mysql.rds_show_configuration` procedure.

   ```
   grant execute on procedure mysql.rds_show_configuration to 'username'@'hostname';
   ```
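Because the statements above contain placeholders, it can help to generate them with your own values filled in before pasting them into the MySQL client. The following shell sketch does that; `db_user`, `db_host`, and `db_name` are illustrative assumptions.

```shell
# Fill the placeholder user, host, and database names into the
# user-setup statements; all three values below are examples.
db_user="script_user"
db_host="10.0.0.%"
db_name="mydb"

stmts=$(cat <<SQL
create user '${db_user}'@'${db_host}' identified by '...';
grant select on ${db_name}.* to '${db_user}'@'${db_host}';
grant replication client on *.* to '${db_user}'@'${db_host}';
grant execute on procedure mysql.rds_show_configuration to '${db_user}'@'${db_host}';
SQL
)
printf '%s\n' "$stmts"
```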

The following topics describe how to download, review, and run each support script available for a MySQL-compatible database. They also describe how to review and upload the script output to your AWS Support case.

**Topics**
+ [awsdms\_support\_collector\_MySQL.sql script](#CHAP_SupportScripts.MySQL.Awsdms_Support_Collector_MySQL_Script)

## awsdms\_support\_collector\_MySQL.sql script
<a name="CHAP_SupportScripts.MySQL.Awsdms_Support_Collector_MySQL_Script"></a>

Download the [https://d2pwp9zz55emqw.cloudfront.net/scripts/awsdms_support_collector_MySQL.sql](https://d2pwp9zz55emqw.cloudfront.net/scripts/awsdms_support_collector_MySQL.sql) script.

This script collects information about your MySQL-compatible database configuration. Remember to verify the checksum on the script, and if the checksum verifies, review the SQL code in the script to comment out any of the code that you are uncomfortable running. After you are satisfied with the integrity and content of the script, you can run it.

Run the script after connecting to your database environment using the command line.

**To run this script and upload the results to your support case**

1. Connect to your database using the following `mysql` command.

   ```
   mysql -p -h hostname -P port -u username database-name
   ```

1. Run the script using the following mysql `source` command.

   ```
   source awsdms_support_collector_MySQL.sql
   ```

   Review the generated report and remove any information that you are uncomfortable sharing. When the content is acceptable for you to share, upload the file to your AWS Support case. For more information on uploading this file, see [Working with diagnostic support scripts in AWS DMS](CHAP_SupportScripts.md).

**Note**  
If you already have a user account with the required privileges described in [Diagnostic support scripts for MySQL-compatible databases](#CHAP_SupportScripts.MySQL), you can also use that existing account to run the script.
Remember to connect to your database before running the script.
The script generates its output in text format.
As a security best practice, if you created a new user account only to run this MySQL diagnostic support script, we recommend that you delete the account after the script completes successfully.

# PostgreSQL diagnostic support scripts
<a name="CHAP_SupportScripts.PostgreSQL"></a>

Following, you can find the diagnostic support scripts available to analyze any PostgreSQL RDBMS (on-premises, Amazon RDS, or Aurora PostgreSQL) in your AWS DMS migration configuration. These scripts work with either a source or target endpoint. The scripts are all written to run in the psql command-line utility. 

Before running these scripts, ensure that the user account that you use has the following necessary permissions to access any PostgreSQL RDBMS:
+ PostgreSQL 10.x or higher – A user account with execute permission on the `pg_catalog.pg_ls_waldir` function.
+ PostgreSQL 9.x or earlier – A user account with default permissions.

We recommend using an existing account with the appropriate permissions to run these scripts.

If you need to create a new user account or grant permissions to an existing account to run these scripts, you can execute the following SQL commands for any PostgreSQL RDBMS based on the PostgreSQL version.

**To grant account permissions to run these scripts for a PostgreSQL database version 10.x or higher**
+ Do one of the following:
  + For a new user account, run the following.

    ```
    CREATE USER script_user WITH PASSWORD 'password';
    GRANT EXECUTE ON FUNCTION pg_catalog.pg_ls_waldir TO script_user;
    ```
  + For an existing user account, run the following.

    ```
    GRANT EXECUTE ON FUNCTION pg_catalog.pg_ls_waldir TO script_user;
    ```

**To grant account permissions to run these scripts for a PostgreSQL 9.x or earlier database**
+ Do one of the following:
  + For a new user account, run the following with default permissions.

    ```
    CREATE USER script_user WITH PASSWORD 'password';
    ```
  + For an existing user account, use the existing permissions.

**Note**  
These scripts do not support certain functionality related to finding WAL size for PostgreSQL 9.x and earlier databases. For more information, work with AWS Support.

The following topics describe how to download, review, and run each support script available for PostgreSQL. They also describe how to review and upload the script output to your AWS Support case.

**Topics**
+ [awsdms\_support\_collector\_postgres.sql script](#CHAP_SupportScripts.PostgreSQL.Awsdms_Support_Collector_PostgreSQL_Script)

## awsdms\_support\_collector\_postgres.sql script
<a name="CHAP_SupportScripts.PostgreSQL.Awsdms_Support_Collector_PostgreSQL_Script"></a>

Download the [https://d2pwp9zz55emqw.cloudfront.net/scripts/awsdms_support_collector_postgres.sql](https://d2pwp9zz55emqw.cloudfront.net/scripts/awsdms_support_collector_postgres.sql) script.

This script collects information about your PostgreSQL database configuration. Remember to verify the checksum on the script. If the checksum verifies, review the SQL code in the script to comment out any of the code that you are uncomfortable running. After you are satisfied with the integrity and content of the script, you can run it.

**Note**  
You can run this script with psql client version 10 or higher.

You can use the following procedures to run this script either from your database environment or from the command line. In either case, you can then upload your file to AWS Support later.

**To run this script and upload the results to your support case**

1. Do one of the following:
   + Run the script from your database environment using the following psql command line.

     ```
     dbname=# \i awsdms_support_collector_postgres.sql
     ```

     At the following prompt, enter the name of only one of the schemas that you want to migrate.

     At the following prompt, enter the name of the user (`script_user`) that you have defined to connect to the database.
   + Run the following script directly from the command line. This option avoids any prompts prior to script execution.

     ```
     psql -h database-hostname -p port -U script_user -d database-name -f awsdms_support_collector_postgres.sql
     ```

1. Review the output HTML file and remove any information that you are uncomfortable sharing. When the HTML is acceptable for you to share, upload the file to your AWS Support case. For more information on uploading this file, see [Working with diagnostic support scripts in AWS DMS](CHAP_SupportScripts.md).

# Working with the AWS DMS diagnostic support AMI
<a name="CHAP_SupportAmi"></a>

If you encounter a network-related issue when working with AWS DMS, your support engineer might need more information about your network configuration. We want to make sure that AWS Support gets as much of the required information as possible in the shortest possible time. Therefore, we developed a prebuilt Amazon EC2 AMI with diagnostic tools to test your AWS DMS networking environment.

The diagnostic tests installed on the Amazon machine image (AMI) include the following:
+ Virtual Private Cloud (VPC)
+ Network packet loss
+ Network latency
+ Maximum Transmission Unit (MTU) size

**Topics**
+ [Launch a new AWS DMS diagnostic Amazon EC2 instance](#CHAP_SupportAmi_Launch)
+ [Create an IAM role](#CHAP_SupportAmi_Iam)
+ [Run Diagnostic Tests](#CHAP_SupportAmi_Run)
+ [Next Steps](#CHAP_SupportAmi_NextSteps)
+ [AMI IDs by region](#CHAP_SupportAmi_AmiIDs)
+ [AWS Systems Manager Patch Manager](#CHAP_PatchManager)

**Note**  
If you experience performance issues with your Oracle source, you can evaluate the read performance of your Oracle redo or archive logs to find ways to improve performance. For more information, see [Evaluating read performance of Oracle redo or archive logs](CHAP_Troubleshooting.md#CHAP_Troubleshooting.Oracle.ReadPerformUtil).

## Launch a new AWS DMS diagnostic Amazon EC2 instance
<a name="CHAP_SupportAmi_Launch"></a>

In this section, you launch a new Amazon EC2 instance. For information about how to launch an Amazon EC2 instance, see the [Get started with Amazon EC2 Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html) tutorial in the [Amazon EC2 User Guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/). 

Launch an Amazon EC2 instance with the following settings:
+ For **Application and OS Images (Amazon Machine Image)**, search for the **DMS-DIAG-AMI** AMI. If you are logged in to the console, you can search for the AMI with [this query](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#Images:visibility=public-images;search=:dms-diag-ami;v=3;). For the AMI ID of the AWS Diagnostic AMI in your Region, see [AMI IDs by region](#CHAP_SupportAmi_AmiIDs) following. 
+ For **Instance type**, we recommend you choose **t2.micro**.
+ For **Network Settings**, choose the same VPC that your replication instance uses.

After the instance is active, connect to the instance. For information about connecting to an Amazon EC2 Linux instance, see [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html).

## Create an IAM role
<a name="CHAP_SupportAmi_Iam"></a>

If you want to run the diagnostic tests on your replication instance using the minimum required permissions, create an IAM role that uses the following permissions policy:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "dms:DescribeEndpoints",
                "dms:DescribeTableStatistics",
                "dms:DescribeReplicationInstances",
                "dms:DescribeReplicationTasks",
                "secretsmanager:GetSecretValue"
            ],
            "Resource": "*"
        }
    ]
}
```

------

Attach the role to a new IAM user. For information about creating IAM roles, policies, and users, see the following sections in the [IAM User Guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/):
+ [Getting Started with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started.html)
+ [Creating IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html)
+ [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html)

## Run Diagnostic Tests
<a name="CHAP_SupportAmi_Run"></a>

After you have created an Amazon EC2 instance and connected to it, do the following to run diagnostic tests on your replication instance.

1. Configure the AWS CLI:

   ```
   $ aws configure
   ```

   Provide the access credentials for the AWS user account you want to use to run the diagnostic tests. Provide the Region for your VPC and replication instance.

1. Display the available AWS DMS tasks in your Region. Replace the sample Region with your Region.

   ```
   $ dms-report -r us-east-1 -l
   ```

   This command displays the status of your tasks.  
![\[Diagnostic tool showing task list.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-diagami1.png)

1. Display task endpoints and settings. Replace *<DMS-Task-ARN>* with your task Amazon Resource Name (ARN).

   ```
   $ dms-report -t <DMS-Task-ARN>
   ```

   This command displays the endpoints and settings of your task.  
![\[Diagnostic tool showing endpoint list for task.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-diagami2.png)

1. Run diagnostic tests. Replace *<DMS-Task-ARN>* with your task ARN.

   ```
   $ dms-report -t <DMS-Task-ARN> -n y
   ```

   This command displays diagnostic data about your replication instance's VPC, network packet transmission, network latency, and network Maximum Transmission Unit (MTU) size.  
![\[Diagnostic tool showing network data.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-diagami3.png)
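As an alternative to storing credentials with `aws configure` in step 1, you can supply them for the session through environment variables. The values below are the example credentials from the AWS documentation, not real ones.

```shell
# Session-scoped equivalent of `aws configure`; these are AWS's
# documented example values, not working credentials.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"
```

Environment variables take precedence over the stored profile, so this is also a convenient way to switch accounts for a single test run.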

## Next Steps
<a name="CHAP_SupportAmi_NextSteps"></a>

The following sections describe troubleshooting information based on the results of the network diagnostic tests:

### VPC tests
<a name="CHAP_SupportAmi_NextSteps_Vpc"></a>

This test verifies that the diagnostic Amazon EC2 instance is in the same VPC as the replication instance. If the diagnostic Amazon EC2 instance is not in the same VPC as your replication instance, terminate it and create it again in the correct VPC. You can't change the VPC of an Amazon EC2 instance after you create it.

### Network packet loss tests
<a name="CHAP_SupportAmi_NextSteps_Npl"></a>

This test sends 10 packets to the following endpoints and checks for packet loss: 
+ The AWS DMS Amazon EC2 metadata service on port 80
+ The source endpoint
+ The target endpoint

All packets should arrive successfully. If any packets are lost, consult with a network engineer to determine the problem and find a solution.

### Network latency tests
<a name="CHAP_SupportAmi_NextSteps_Nl"></a>

This test sends 10 packets to the same endpoints as the previous test, and checks for packet latency. All packets should have a latency of less than 100 milliseconds. If any packets have a latency greater than 100 milliseconds, consult with a network engineer to determine the problem and find a solution.

### Maximum Transmission Unit (MTU) size tests
<a name="CHAP_SupportAmi_NextSteps_Mtu"></a>

This test detects the MTU size by using the Traceroute tool on the same endpoints as the previous test. All of the packets in the test should have the same MTU size. If any packets have a different MTU size, consult with a system specialist to determine the problem and find a solution.

## AMI IDs by region
<a name="CHAP_SupportAmi_AmiIDs"></a>

To see a list of the DMS diagnostic AMIs available in your AWS Region, run the following AWS CLI command.

```
aws ec2 describe-images --owners 343299325021 --filters "Name=name, Values=DMS-DIAG*" --query "sort_by(Images, &CreationDate)[-1].[Name, ImageId, CreationDate]" --output text
```

If the command returns no results, the DMS diagnostic AMI isn't available in your AWS Region. In that case, use the following steps to copy the diagnostic AMI from another Region. For more information, see [Copy an AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html).
+ Launch an instance in a Region where the AMI is available.
+ Create an image from that instance. The image is owned by you.
+ Copy the AMI to your Region, for example, the Middle East (UAE) Region.
+ Launch an instance from the copied AMI in your Region.
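The copy step above maps to the `aws ec2 copy-image` command. The following sketch only assembles and prints the command so that you can review it before running it; the source AMI ID and target Region are placeholders.

```shell
# Assemble (but don't execute) the copy-image call; the source AMI ID
# and target Region below are placeholders.
src_region="us-east-1"
src_ami="ami-0123456789abcdef0"
dst_region="me-central-1"

cmd="aws ec2 copy-image --source-region ${src_region} \
--source-image-id ${src_ami} --region ${dst_region} --name DMS-DIAG-AMI-copy"
echo "$cmd"
```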

## AWS Systems Manager Patch Manager
<a name="CHAP_PatchManager"></a>

AWS Systems Manager Patch Manager automates patching for operating systems and applications on your Amazon EC2 instances.

**To configure Patch Manager, perform the following steps:**

1. Enable Systems Manager:

   1. Attach the `AmazonSSMManagedInstanceCore` IAM policy to your EC2 instance role.

   1. Ensure that SSM Agent is installed (it's installed by default on Amazon Linux 2 and Ubuntu 20.04 and later AMIs).

1. Create a patch baseline that defines approval rules for critical and security updates, including schedules for applying patches.

1. Schedule updates for applying patches at a defined time by using maintenance windows in SSM.

1. Verify compliance by checking the patching status and compliance reports in the Systems Manager console.
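The baseline step above can also be scripted with the AWS CLI. The following sketch only assembles and prints an example `aws ssm create-patch-baseline` call; the baseline name and approval rule are illustrative assumptions, not required values.

```shell
# Assemble (but don't execute) an example patch-baseline call; the
# name and approval rule below are illustrative assumptions.
rules='PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=CLASSIFICATION,Values=[Security]}]},ApproveAfterDays=7}]'
cmd="aws ssm create-patch-baseline --name dms-diag-baseline \
--operating-system AMAZON_LINUX_2 --approval-rules ${rules}"
echo "$cmd"
```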