

# Working with AWS DMS tasks
<a name="CHAP_Tasks"></a>

An AWS Database Migration Service (AWS DMS) task is where all the work happens. You specify what tables (or views) and schemas to use for your migration and any special processing, such as logging requirements, control table data, and error handling. 

A task can consist of three major phases:
+ Migration of existing data (Full load)
+ The application of cached changes
+ Ongoing replication (Change Data Capture)

For more information and an overview of how AWS DMS migration tasks migrate data, see [High-level view of AWS DMS](CHAP_Introduction.HighLevelView.md).

When creating a migration task, you need to know several things:
+ Before you can create a task, make sure that you create a source endpoint, a target endpoint, and a replication instance. 
+ You can specify many task settings to tailor your migration task. You can set these by using the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS DMS API. These settings include specifying how migration errors are handled, error logging, and control table information. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).
+ After you create a task, you can run it immediately. The target tables with the necessary metadata definitions are automatically created and loaded, and you can specify ongoing replication.
+ By default, AWS DMS starts your task as soon as you create it. However, in some situations, you might want to postpone the start of the task. For example, when using the AWS CLI, you might have a process that creates a task and a different process that starts the task based on some triggering event. As needed, you can postpone your task's start. 
+ You can monitor, stop, or restart tasks using the console, AWS CLI, or AWS DMS API. For information about stopping a task using the AWS DMS API, see [StopReplicationTask](https://docs.aws.amazon.com/dms/latest/APIReference/API_StopReplicationTask.html) in the [AWS DMS API Reference](https://docs.aws.amazon.com/dms/latest/APIReference/).
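For example, the following AWS CLI sketch stops a running task. The task ARN shown is a placeholder; substitute the ARN of your own task:

```
aws dms stop-replication-task \
    --replication-task-arn arn:aws:dms:us-west-2:123456789012:task:EXAMPLETASKARN1234567890
```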

The following table describes actions that you can take when working with an AWS DMS task.


| Task | Relevant documentation | 
| --- | --- | 
|   **Creating a task**  When you create a task, you specify the source, target, and replication instance, along with any migration settings.  |  [Creating a task](CHAP_Tasks.Creating.md)  | 
|   **Creating an ongoing replication task**  You can set up a task to provide continuous replication between the source and target.   |  [Creating tasks for ongoing replication using AWS DMS](CHAP_Task.CDC.md)  | 
|   **Applying task settings**  Each task has settings that you can configure according to the needs of your database migration. You create these settings in a JSON file or, with some settings, you can specify the settings using the AWS DMS console. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).  |  [Specifying task settings for AWS Database Migration Service tasks](CHAP_Tasks.CustomizingTasks.TaskSettings.md)  | 
|   **Using table mapping**  Table mapping specifies additional task settings for tables using several types of rules. These rules allow you to specify the data source, source schema, tables and views, data, any table and data transformations that are to occur during the task, and settings for how these tables and columns are migrated from the source to the target.  |  Selection rules: [Selection rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections.md) Transformation rules: [Transformation rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.md) Table-settings rules: [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md) | 
|   **Running premigration task assessments**  You can enable and run premigration task assessments that show issues with a supported source and target database that can cause problems during a migration. These can include unsupported data types, mismatched indexes and primary keys, and other conflicting task settings. Premigration assessments run before the task itself so that you can identify potential issues before they occur during the migration.  |  [Enabling and working with premigration assessments for a task](CHAP_Tasks.AssessmentReport.md)  | 
|   **Data validation**  Data validation is a task setting you can use to have AWS DMS compare the data on your target data store with the data from your source data store.  |  [AWS DMS data validation](CHAP_Validating.md)  | 
|   **Modifying a task**  When a task is stopped, you can modify the settings for the task.  |  [Modifying a task](CHAP_Tasks.Modifying.md)  | 
|   **Moving a task**  When a task is stopped, you can move the task to a different replication instance.  |  [Moving a task](CHAP_Tasks.Moving.md)  | 
|   **Reloading tables during a task**  You can reload a table during a task if an error occurs during the task.  |  [Reloading tables during a task](CHAP_Tasks.ReloadTables.md)  | 
|   **Applying filters**  You can use source filters to limit the number and type of records transferred from your source to your target. For example, you can specify that only employees with a location of headquarters are moved to the target database. You apply filters on a column of data.  |  [Using source filters](CHAP_Tasks.CustomizingTasks.Filters.md)  | 
|   **Monitoring a task**  There are several ways to get information on the performance of a task and the tables used by the task.  |  [Monitoring AWS DMS tasks](CHAP_Monitoring.md)  | 
|   **Managing task logs**  You can view and delete task logs using the AWS DMS API or AWS CLI.  |  [Viewing and managing AWS DMS task logs](CHAP_Monitoring.md#CHAP_Monitoring.ManagingLogs)  | 

**Topics**
+ [Creating a task](CHAP_Tasks.Creating.md)
+ [Creating tasks for ongoing replication using AWS DMS](CHAP_Task.CDC.md)
+ [Modifying a task](CHAP_Tasks.Modifying.md)
+ [Moving a task](CHAP_Tasks.Moving.md)
+ [Reloading tables during a task](CHAP_Tasks.ReloadTables.md)
+ [Using table mapping to specify task settings](CHAP_Tasks.CustomizingTasks.TableMapping.md)
+ [Using source filters](CHAP_Tasks.CustomizingTasks.Filters.md)
+ [Enabling and working with premigration assessments for a task](CHAP_Tasks.AssessmentReport.md)
+ [Specifying supplemental data for task settings](CHAP_Tasks.TaskData.md)

# Creating a task
<a name="CHAP_Tasks.Creating"></a>

To create an AWS DMS migration task, you do the following:
+ Create a source endpoint, a target endpoint, and a replication instance before you create a migration task. 
+ Choose a migration method:
  + **Migrating data to the target database** – This process creates files or tables in the target database and automatically defines the metadata that is required at the target. It also populates the tables with data from the source. The data from the tables is loaded in parallel for improved efficiency. This process is the **Migrate existing data** option in the AWS Management Console and is called `full-load` in the API.
  + **Capturing changes during migration** – This process captures changes to the source database that occur while the data is being migrated from the source to the target. When the migration of the originally requested data has completed, the change data capture (CDC) process then applies the captured changes to the target database. Changes are captured and applied as units of single committed transactions, and you can update several different target tables as a single source commit. This approach guarantees transactional integrity in the target database. This process is the **Migrate existing data and replicate ongoing changes** option in the console and is called `full-load-and-cdc` in the API.
  + **Replicating only data changes on the source database** – This process reads the recovery log file of the source database management system (DBMS) and groups together the entries for each transaction. In some cases, AWS DMS can't apply changes to the target within a reasonable time (for example, if the target isn't accessible). In these cases, AWS DMS buffers the changes on the replication server for as long as necessary. It doesn't reread the source DBMS logs, which can take a large amount of time. This process is the **Replicate data changes only** option in the AWS DMS console and is called `cdc` in the API.
+ Determine how the task should handle large binary objects (LOBs) on the source. For more information, see [Setting LOB support for source databases in an AWS DMS task](CHAP_Tasks.LOBSupport.md).
+ Specify migration task settings. These include setting up logging, specifying what data is written to the migration control table, how errors are handled, and other settings. For more information about task settings, see [Specifying task settings for AWS Database Migration Service tasks](CHAP_Tasks.CustomizingTasks.TaskSettings.md).
+ Set up table mapping to define rules to select and filter data that you are migrating. For more information about table mapping, see [Using table mapping to specify task settings](CHAP_Tasks.CustomizingTasks.TableMapping.md). Before you specify your mapping, make sure that you review the documentation section on data type mapping for your source and your target database. 
+ Enable and run premigration task assessments before you run the task. For more information about premigration assessments, see [Enabling and working with premigration assessments for a task](CHAP_Tasks.AssessmentReport.md).
+ Specify any required supplemental data for the task to migrate your data. For more information, see [Specifying supplemental data for task settings](CHAP_Tasks.TaskData.md).

You can choose to start a task as soon as you finish specifying information for that task on the **Create task** page. Alternatively, you can start the task later from the Dashboard page.
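If you postpone the start, a later process can start the task from the AWS CLI. The following sketch uses a placeholder task ARN; `start-replication` is the start type used for a task's first run:

```
aws dms start-replication-task \
    --replication-task-arn arn:aws:dms:us-west-2:123456789012:task:EXAMPLETASKARN1234567890 \
    --start-replication-task-type start-replication
```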

The following procedure assumes that you have already specified replication instance information and endpoints. For more information about setting up endpoints, see [Creating source and target endpoints](CHAP_Endpoints.Creating.md).

**To create a migration task**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/). 

   If you are signed in as an AWS Identity and Access Management (IAM) user, make sure that you have the appropriate permissions to access AWS DMS. For more information about the permissions required, see [IAM permissions needed to use AWS DMS](security-iam.md#CHAP_Security.IAMPermissions).

1. On the navigation pane, choose **Database migration tasks**, and then choose **Create task**.

1. On the **Create database migration task** page, in the **Task configuration** section, specify the task options, such as the task identifier, replication instance, source and target endpoints, and migration type. For details about these settings, see [Creating a task](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html) in the AWS DMS documentation.  
![Create task](https://docs.aws.amazon.com/dms/latest/userguide/images/datarep-gs-wizard4.png)

1. In the **Task settings** section, specify values for the editing mode, target table preparation mode, stopping the task after full load completes, LOB settings, validation, and logging. For details about these settings, see [Creating a task](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html) in the AWS DMS documentation.

1. In the **Premigration assessment** section, choose whether to run a premigration assessment. A premigration assessment warns you of potential migration issues before starting your database migration task. For more information, see [Enabling and working with premigration assessments](CHAP_Tasks.AssessmentReport.md). 

1. In the **Migration task startup configuration** section, specify whether to start the task automatically after creation.

1. In the **Tags** section, specify any tags you need to organize your task. You can use tags to manage your IAM roles and policies, and track your DMS costs. For more information, see [Tagging resources](CHAP_Tagging.md).

1. After you have finished with the task settings, choose **Create task**.

# Specifying task settings for AWS Database Migration Service tasks
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings"></a>

Each task has settings that you can configure according to the needs of your database migration. You create these settings in a JSON file or, with some settings, you can specify the settings using the AWS DMS console. For information about how to use a task configuration file to set task settings, see [Task settings example](#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

There are several main types of task settings, as listed following.

**Topics**
+ [Task settings example](#CHAP_Tasks.CustomizingTasks.TaskSettings.Example)
+ [Target metadata task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.md)
+ [Full-load task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.md)
+ [Time Travel task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.md)
+ [Logging task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.md)
+ [Control table task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ControlTable.md)
+ [Stream buffer task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.StreamBuffer.md)
+ [Change processing tuning settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.md)
+ [Data validation task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.DataValidation.md)
+ [Data resync settings](CHAP_Tasks.CustomizingTasks.TaskSettings.DataResyncSettings.md)
+ [Task settings for change processing DDL handling](CHAP_Tasks.CustomizingTasks.TaskSettings.DDLHandling.md)
+ [Character substitution task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.CharacterSubstitution.md)
+ [Before image task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.BeforeImage.md)
+ [Error handling task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ErrorHandling.md)
+ [Saving task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.Saving.md)


| Task settings | Relevant documentation | 
| --- | --- | 
|   **Creating a task assessment report**  You can create a task assessment report that shows any unsupported data types that could cause problems during migration. You can run this report on your task before running the task to find out potential issues.  |  [Enabling and working with premigration assessments for a task](CHAP_Tasks.AssessmentReport.md)  | 
|   **Creating a task**  When you create a task, you specify the source, target, and replication instance, along with any migration settings.  |  [Creating a task](CHAP_Tasks.Creating.md)  | 
|   **Creating an ongoing replication task**  You can set up a task to provide continuous replication between the source and target.   |  [Creating tasks for ongoing replication using AWS DMS](CHAP_Task.CDC.md)  | 
|   **Applying task settings**  Each task has settings that you can configure according to the needs of your database migration. You create these settings in a JSON file or, with some settings, you can specify the settings using the AWS DMS console.  |  [Specifying task settings for AWS Database Migration Service tasks](#CHAP_Tasks.CustomizingTasks.TaskSettings)  | 
|   **Data validation**  Use data validation to have AWS DMS compare the data on your target data store with the data from your source data store.  |  [AWS DMS data validation](CHAP_Validating.md)  | 
|   **Modifying a task**  When a task is stopped, you can modify the settings for the task.  |  [Modifying a task](CHAP_Tasks.Modifying.md)  | 
|   **Reloading tables during a task**  You can reload a table during a task if an error occurs during the task.  |  [Reloading tables during a task](CHAP_Tasks.ReloadTables.md)  | 
|   **Using table mapping**  Table mapping uses several types of rules to specify task settings for the data source, source schema, data, and any transformations that should occur during the task.  |  Selection rules: [Selection rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections.md) Transformation rules: [Transformation rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.md)  | 
|   **Applying filters**  You can use source filters to limit the number and type of records transferred from your source to your target. For example, you can specify that only employees with a location of headquarters are moved to the target database. You apply filters on a column of data.  |  [Using source filters](CHAP_Tasks.CustomizingTasks.Filters.md)  | 
|   **Monitoring a task**  There are several ways to get information on the performance of a task and the tables used by the task.  |  [Monitoring AWS DMS tasks](CHAP_Monitoring.md)  | 
|   **Managing task logs**  You can view and delete task logs using the AWS DMS API or AWS CLI.  |  [Viewing and managing AWS DMS task logs](CHAP_Monitoring.md#CHAP_Monitoring.ManagingLogs)  | 

## Task settings example
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.Example"></a>

You can use either the AWS Management Console or the AWS CLI to create a replication task. If you use the AWS CLI, you set task settings by creating a JSON file and then specifying the `file://` URI of that file as the [ReplicationTaskSettings](https://docs.aws.amazon.com/dms/latest/APIReference/API_CreateReplicationTask.html#DMS-CreateReplicationTask-request-ReplicationTaskSettings) parameter of the [CreateReplicationTask](https://docs.aws.amazon.com/dms/latest/APIReference/API_CreateReplicationTask.html) operation.

The following example shows how to use the AWS CLI to call the `CreateReplicationTask` operation:

```
aws dms create-replication-task \
--replication-task-identifier MyTask \
--source-endpoint-arn arn:aws:dms:us-west-2:123456789012:endpoint:ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABC \
--target-endpoint-arn arn:aws:dms:us-west-2:123456789012:endpoint:ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABC \
--replication-instance-arn arn:aws:dms:us-west-2:123456789012:rep:ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890ABC \
--migration-type cdc \
--table-mappings file://tablemappings.json \
--replication-task-settings file://settings.json
```

The preceding example uses a table mapping file called `tablemappings.json`. For table mapping examples, see [Using table mapping to specify task settings](CHAP_Tasks.CustomizingTasks.TableMapping.md).
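For orientation only, a minimal table mapping file with a single selection rule can look like the following sketch. The schema name `DMS_SAMPLE` is a placeholder, and `%` acts as a wildcard:

```
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-sample-schema",
      "object-locator": {
        "schema-name": "DMS_SAMPLE",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```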

A task settings JSON file can look like the following. 

```
{
  "TargetMetadata": {
    "TargetSchema": "",
    "SupportLobs": true,
    "FullLobMode": false,
    "LobChunkSize": 64,
    "LimitedSizeLobMode": true,
    "LobMaxSize": 32,
    "InlineLobMaxSize": 0,
    "LoadMaxFileSize": 0,
    "ParallelLoadThreads": 0,
    "ParallelLoadBufferSize":0,
    "ParallelLoadQueuesPerThread": 1,
    "ParallelApplyThreads": 0,
    "ParallelApplyBufferSize": 100,
    "ParallelApplyQueuesPerThread": 1,    
    "BatchApplyEnabled": false,
    "TaskRecoveryTableEnabled": false
  },
  "FullLoadSettings": {
    "TargetTablePrepMode": "DO_NOTHING",
    "CreatePkAfterFullLoad": false,
    "StopTaskCachedChangesApplied": false,
    "StopTaskCachedChangesNotApplied": false,
    "MaxFullLoadSubTasks": 8,
    "TransactionConsistencyTimeout": 600,
    "CommitRate": 10000
  },
    "TTSettings" : {
    "EnableTT" : true,
    "TTS3Settings": {
        "EncryptionMode": "SSE_KMS",
        "ServerSideEncryptionKmsKeyId": "arn:aws:kms:us-west-2:112233445566:key/myKMSKey",
        "ServiceAccessRoleArn": "arn:aws:iam::112233445566:role/dms-tt-s3-access-role",
        "BucketName": "myttbucket",
        "BucketFolder": "myttfolder",
        "EnableDeletingFromS3OnTaskDelete": false
      },
    "TTRecordSettings": {
        "EnableRawData" : true,
        "OperationsToLog": "DELETE,UPDATE",
        "MaxRecordSize": 64
      }
  },
  "Logging": {
    "EnableLogging": false
  },
  "ControlTablesSettings": {
    "ControlSchema":"",
    "HistoryTimeslotInMinutes":5,
    "HistoryTableEnabled": false,
    "SuspendedTablesTableEnabled": false,
    "StatusTableEnabled": false
  },
  "StreamBufferSettings": {
    "StreamBufferCount": 3,
    "StreamBufferSizeInMB": 8
  },
  "ChangeProcessingTuning": { 
    "BatchApplyPreserveTransaction": true, 
    "BatchApplyTimeoutMin": 1, 
    "BatchApplyTimeoutMax": 30, 
    "BatchApplyMemoryLimit": 500, 
    "BatchSplitSize": 0, 
    "MinTransactionSize": 1000, 
    "CommitTimeout": 1, 
    "MemoryLimitTotal": 1024, 
    "MemoryKeepTime": 60, 
    "StatementCacheSize": 50 
  },
  "ChangeProcessingDdlHandlingPolicy": {
    "HandleSourceTableDropped": true,
    "HandleSourceTableTruncated": true,
    "HandleSourceTableAltered": true
  },
  "LoopbackPreventionSettings": {
    "EnableLoopbackPrevention": true,
    "SourceSchema": "LOOP-DATA",
    "TargetSchema": "loop-data"
  },

  "CharacterSetSettings": {
    "CharacterReplacements": [ {
        "SourceCharacterCodePoint": 35,
        "TargetCharacterCodePoint": 52
      }, {
        "SourceCharacterCodePoint": 37,
        "TargetCharacterCodePoint": 103
      }
    ],
    "CharacterSetSupport": {
      "CharacterSet": "UTF16_PlatformEndian",
      "ReplaceWithCharacterCodePoint": 0
    }
  },
  "BeforeImageSettings": {
    "EnableBeforeImage": false,
    "FieldName": "",  
    "ColumnFilter": "pk-only"
  },
  "ErrorBehavior": {
    "DataErrorPolicy": "LOG_ERROR",
    "DataTruncationErrorPolicy":"LOG_ERROR",
    "DataMaskingErrorPolicy": "STOP_TASK",
    "DataErrorEscalationPolicy":"SUSPEND_TABLE",
    "DataErrorEscalationCount": 50,
    "TableErrorPolicy":"SUSPEND_TABLE",
    "TableErrorEscalationPolicy":"STOP_TASK",
    "TableErrorEscalationCount": 50,
    "RecoverableErrorCount": 0,
    "RecoverableErrorInterval": 5,
    "RecoverableErrorThrottling": true,
    "RecoverableErrorThrottlingMax": 1800,
    "ApplyErrorDeletePolicy":"IGNORE_RECORD",
    "ApplyErrorInsertPolicy":"LOG_ERROR",
    "ApplyErrorUpdatePolicy":"LOG_ERROR",
    "ApplyErrorEscalationPolicy":"LOG_ERROR",
    "ApplyErrorEscalationCount": 0,
    "FullLoadIgnoreConflicts": true
  },
  "ValidationSettings": {
    "EnableValidation": false,
    "ValidationMode": "ROW_LEVEL",
    "ThreadCount": 5,
    "PartitionSize": 10000,
    "FailureMaxCount": 1000,
    "RecordFailureDelayInMinutes": 5,
    "RecordSuspendDelayInMinutes": 30,
    "MaxKeyColumnSize": 8096,
    "TableFailureMaxCount": 10000,
    "ValidationOnly": false,
    "HandleCollationDiff": false,
    "RecordFailureDelayLimitInMinutes": 1,
    "SkipLobColumns": false,
    "ValidationPartialLobSize": 0,
    "ValidationQueryCdcDelaySeconds": 0
  }
}
```

# Target metadata task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata"></a>

Target metadata settings include the following. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).
+ `TargetSchema` – The target table schema name. If this metadata option is empty, the schema from the source table is used. AWS DMS automatically adds the owner prefix for the target database to all tables if no source schema is defined. This option should be left empty for MySQL-type target endpoints. Renaming a schema in data mapping takes precedence over this setting.
+ LOB settings – Settings that determine how large objects (LOBs) are managed. If you set `SupportLobs=true`, you must set one of the following to `true`: 
  + `FullLobMode` – If you set this option to `true`, then you must enter a value for the `LobChunkSize` option. Enter the size, in kilobytes, of the LOB chunks to use when replicating the data to the target. The `FullLobMode` option works best for very large LOB sizes but tends to cause slower loading. The recommended value for `LobChunkSize` is 64 kilobytes. Increasing the value for `LobChunkSize` above 64 kilobytes can cause task failures.
  + `InlineLobMaxSize` – This value determines which LOBs AWS DMS transfers inline during a full load. Transferring small LOBs is more efficient than looking them up from a source table. During a full load, AWS DMS checks all LOBs and performs an inline transfer for the LOBs that are smaller than `InlineLobMaxSize`. AWS DMS transfers all LOBs larger than the `InlineLobMaxSize` in `FullLobMode`. The default value for `InlineLobMaxSize` is 0, and the range is 1–102,400 kilobytes (100 MB). Set a value for `InlineLobMaxSize` only if you know that most of the LOBs are smaller than the value specified in `InlineLobMaxSize`.
  + `LimitedSizeLobMode` – If you set this option to `true`, then you must enter a value for the `LobMaxSize` option. Enter the maximum size, in kilobytes, for an individual LOB. The maximum value for `LobMaxSize` is 102400 kilobytes (100 MB).

  For more information about the criteria for using these task LOB settings, see [Setting LOB support for source databases in an AWS DMS task](CHAP_Tasks.LOBSupport.md). You can also control the management of LOBs for individual tables. For more information, see [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md).
+ `BatchApplyEnabled` – Determines if each transaction is applied individually or if changes are committed in batches. The default value is `false`.

  When `BatchApplyEnabled` is set to `true`, AWS DMS requires a primary key (PK) or unique key (UK) on the **source** tables. Without a PK or UK on the source tables, only batch inserts are applied; batch updates and deletes are not.

  When `BatchApplyEnabled` is set to `true`, AWS DMS generates an error message if a **target** table has a unique constraint and a primary key. Target tables with both a unique constraint and primary key aren't supported when `BatchApplyEnabled` is set to `true`.

  When `BatchApplyEnabled` is set to true and AWS DMS encounters a data error from a table with the default error-handling policy, the AWS DMS task switches from batch mode to one-by-one mode for the rest of the tables. To alter this behavior, you can set the `"SUSPEND_TABLE"` action on the following policies in the `"ErrorBehavior"` group property of the task settings JSON file:
  + `DataErrorPolicy`
  + `ApplyErrorDeletePolicy`
  + `ApplyErrorInsertPolicy`
  + `ApplyErrorUpdatePolicy`

  For more information on this `"ErrorBehavior"` group property, see the example task settings JSON file in [Specifying task settings for AWS Database Migration Service tasks](CHAP_Tasks.CustomizingTasks.TaskSettings.md). After setting these policies to `"SUSPEND_TABLE"`, the AWS DMS task then suspends data errors on any tables that raise them and continues in batch mode for all tables.

  You can use the `BatchApplyEnabled` parameter with the `BatchApplyPreserveTransaction` parameter. If `BatchApplyEnabled` is set to `true`, then the `BatchApplyPreserveTransaction` parameter determines the transactional integrity. 

  If `BatchApplyPreserveTransaction` is set to `true`, then transactional integrity is preserved and a batch is guaranteed to contain all the changes within a transaction from the source.

  If `BatchApplyPreserveTransaction` is set to `false`, then there can be temporary lapses in transactional integrity to improve performance. 

  The `BatchApplyPreserveTransaction` parameter applies only to Oracle target endpoints, and is only relevant when the `BatchApplyEnabled` parameter is set to `true`.

  When LOB columns are included in the replication, you can use `BatchApplyEnabled` only in limited LOB mode.

  For more information about using these settings for a change data capture (CDC) load, see [Change processing tuning settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.md).
+ `MaxFullLoadSubTasks` – Indicates the maximum number of tables to load in parallel. The default is 8; the maximum value is 49.
+ `ParallelLoadThreads` – Specifies the number of threads that AWS DMS uses to load each table into the target database. This parameter has maximum values for non-RDBMS targets. The maximum value for a DynamoDB target is 200. The maximum value for an Amazon Kinesis Data Streams, Apache Kafka, or Amazon OpenSearch Service target is 32. You can ask to have this maximum limit increased. `ParallelLoadThreads` applies to Full Load tasks. For information on the settings for parallel load of individual tables, see [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md).

  This setting applies to the following endpoint engine types:
  + DynamoDB
  + Amazon Kinesis Data Streams
  + Amazon MSK
  + Amazon OpenSearch Service
  + Amazon Redshift

  AWS DMS supports `ParallelLoadThreads` for MySQL as an extra connection attribute. `ParallelLoadThreads` does not apply to MySQL as a task setting. 
+ `ParallelLoadBufferSize` – Specifies the maximum number of records to store in the buffer that the parallel load threads use to load data to the target. The default value is 50. The maximum value is 1,000. This setting is currently only valid when DynamoDB, Kinesis, Apache Kafka, or OpenSearch is the target. Use this parameter with `ParallelLoadThreads`. `ParallelLoadBufferSize` is valid only when there is more than one thread. For information on the settings for parallel load of individual tables, see [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md).
+ `ParallelLoadQueuesPerThread` – Specifies the number of queues that each concurrent thread accesses to take data records out of queues and generate a batch load for the target. The default is 1. This setting is currently only valid when Kinesis or Apache Kafka is the target.
+ `ParallelApplyThreads` – Specifies the number of concurrent threads that AWS DMS uses during a CDC load to push data records to an Amazon DocumentDB, Kinesis, Amazon MSK, OpenSearch, or Amazon Redshift target endpoint. The default is zero (0).

  This setting applies only to CDC; it doesn't apply to full load.

  This setting applies to the following endpoint engine types:
  + Amazon DocumentDB (with MongoDB compatibility)
  + Amazon Kinesis Data Streams
  + Amazon Managed Streaming for Apache Kafka
  + Amazon OpenSearch Service
  + Amazon Redshift
+ `ParallelApplyBufferSize` – Specifies the maximum number of records to store in each buffer queue for concurrent threads to push to an Amazon DocumentDB, Kinesis, Amazon MSK, OpenSearch, or Amazon Redshift target endpoint during a CDC load. The default value is 100. The maximum value is 1000. Use this option when `ParallelApplyThreads` specifies more than one thread. 
+ `ParallelApplyQueuesPerThread` – Specifies the number of queues that each thread accesses to take data records out of queues and generate a batch load for an Amazon DocumentDB, Kinesis, Amazon MSK, or OpenSearch endpoint during CDC. The default value is 1.
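To illustrate how these options fit together, the following `TargetMetadata` fragment sketches the parallel apply settings for a supported streaming target such as Kinesis. The values are illustrative only, not recommendations:

```
"TargetMetadata": {
  "ParallelApplyThreads": 8,
  "ParallelApplyBufferSize": 500,
  "ParallelApplyQueuesPerThread": 4
}
```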

# Full-load task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad"></a>

Full-load settings include the following. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).
+ To indicate how to handle loading the target at full-load startup, specify one of the following values for the `TargetTablePrepMode` option: 
  +  `DO_NOTHING` – Data and metadata of the existing target table aren't affected. 
  +  `DROP_AND_CREATE` – The existing table is dropped and a new table is created in its place. 
  +  `TRUNCATE_BEFORE_LOAD` – Data is truncated without affecting the table metadata.
+ To delay primary key or unique index creation until after a full load completes, set the `CreatePkAfterFullLoad` option to `true`.
+ For full-load and CDC-enabled tasks, you can set the following options for `Stop task after full load completes`: 
  + `StopTaskCachedChangesApplied` – Set this option to `true` to stop a task after a full load completes and cached changes are applied. 
  + `StopTaskCachedChangesNotApplied` – Set this option to `true` to stop a task before cached changes are applied. 
+ To indicate the maximum number of tables to load in parallel, set the `MaxFullLoadSubTasks` option. The default is 8; the maximum value is 49.
+ Set the `ParallelLoadThreads` option to indicate how many concurrent threads AWS DMS uses during a full-load process to push data records to a target endpoint. The default value is zero (0).
**Important**  
`MaxFullLoadSubTasks` controls the number of tables or table segments to load in parallel. `ParallelLoadThreads` controls the number of threads that a migration task uses to run the loads in parallel. *These settings are multiplicative*. As such, the total number of threads used during a full-load task is approximately the value of `ParallelLoadThreads` multiplied by the value of `MaxFullLoadSubTasks` (`ParallelLoadThreads` \* `MaxFullLoadSubTasks`).  
If you create tasks with a high number of full-load subtasks and a high number of parallel load threads, your task can consume too much memory and fail.
+ You can set the number of seconds that AWS DMS waits for open transactions to close before beginning a full-load operation. To do so, set the `TransactionConsistencyTimeout` option. The default value is 600 (10 minutes). AWS DMS begins the full load after the timeout value is reached, even if there are still open transactions. A full-load-only task doesn't wait 10 minutes but instead starts immediately.
+ To indicate the maximum number of records that can be transferred together, set the `CommitRate` option. The default value is 10000, and the maximum value is 50000.
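
Taken together, the options above map to the `FullLoadSettings` section of the task settings JSON. The following sketch sets `TargetTablePrepMode` to `DROP_AND_CREATE` and uses the other values described above; adjust them for your workload:

```
"FullLoadSettings": {
    "TargetTablePrepMode": "DROP_AND_CREATE",
    "CreatePkAfterFullLoad": false,
    "StopTaskCachedChangesApplied": false,
    "StopTaskCachedChangesNotApplied": false,
    "MaxFullLoadSubTasks": 8,
    "TransactionConsistencyTimeout": 600,
    "CommitRate": 10000
},
```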

# Time Travel task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel"></a>

To log and debug replication tasks, you can use AWS DMS Time Travel. In this approach, you use Amazon S3 to store logs and encrypt them using your encryption keys. Only users with access to your Time Travel S3 bucket can retrieve the S3 logs, using date-time filters, and then view, download, and obfuscate the logs as needed. By doing this, you can securely "travel back in time" to investigate database activities. Time Travel works independently of CloudWatch logging. For more information on CloudWatch logging, see [Logging task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.md). 

You can use Time Travel in all AWS Regions with AWS DMS-supported Oracle, Microsoft SQL Server, and PostgreSQL source endpoints, and AWS DMS-supported PostgreSQL and MySQL target endpoints. You can turn on Time Travel only for full-load and change data capture (CDC) tasks and for CDC-only tasks. To turn on Time Travel or to modify any existing Time Travel settings, ensure that your replication task is stopped.

The Time Travel settings include the `TTSettings` properties following:
+ `EnableTT` – If this option is set to `true`, Time Travel logging is turned on for the task. The default value is `false`.

  Type: Boolean

  Required: No
+ `EncryptionMode` – The type of server-side encryption being used on your S3 bucket to store your data and logs. You can specify either `"SSE_S3"` (the default) or `"SSE_KMS"`.

  You can change `EncryptionMode` from `"SSE_KMS"` to `"SSE_S3"`, but not the reverse.

  Type: String

  Required: No
+ `ServerSideEncryptionKmsKeyId` – If you specify `"SSE_KMS"` for `EncryptionMode`, provide the ID of your customer managed AWS KMS key. Make sure that the key that you use has an attached policy that turns on AWS Identity and Access Management (IAM) user permissions and allows use of the key. 

  Only your own customer managed symmetric KMS key is supported with the `"SSE_KMS"` option.

  Type: String

  Required: Only if you set `EncryptionMode` to `"SSE_KMS"`
+ `ServiceAccessRoleArn` – The Amazon Resource Name (ARN) of the service access IAM role. Set the role name to `dms-tt-s3-access-role`. This is a required setting that allows AWS DMS to write objects to and read objects from an S3 bucket.

  Type: String

  Required: If Time Travel is turned on

  Following is an example policy for this role.

------
#### [ JSON ]


  ```
  {
   "Version": "2012-10-17",
   "Statement": [
          {
              "Sid": "VisualEditor0",
              "Effect": "Allow",
              "Action": [
                  "s3:PutObject",
                  "kms:GenerateDataKey",
                  "kms:Decrypt",
                  "s3:ListBucket",
                  "s3:DeleteObject"
              ],
              "Resource": [
                  "arn:aws:s3:::S3bucketName*",
                  "arn:aws:kms:us-east-1:112233445566:key/1234a1a1-1m2m-1z2z-d1d2-12dmstt1234"
              ]
          }
      ]
  }
  ```

------

  Following is an example trust policy for this role.

------
#### [ JSON ]


  ```
  {
   "Version": "2012-10-17",
   "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": [
                       "dms.amazonaws.com"
                   ]
               },
               "Action": "sts:AssumeRole"
          }
      ]
  }
  ```

------
+ `BucketName` – The name of the S3 bucket to store Time Travel logs. Make sure to create this S3 bucket before turning on Time Travel logs.

  Type: String

  Required: If Time Travel is turned on
+ `BucketFolder` – An optional parameter to set a folder name in the S3 bucket. If you specify this parameter, AWS DMS creates the Time Travel logs in the path `"/BucketName/BucketFolder/taskARN/YYYY/MM/DD/hh"`. If you don't specify this parameter, AWS DMS uses the default path `"/BucketName/dms-time-travel-logs/taskARN/YYYY/MM/DD/hh"`.

  Type: String

  Required: No
+ `EnableDeletingFromS3OnTaskDelete` – When this option is set to `true`, AWS DMS deletes the Time Travel logs from S3 when the task is deleted. The default value is `false`.

  Type: Boolean

  Required: No
+ `EnableRawData` – When this option is set to `true`, the data manipulation language (DML) raw data for Time Travel logs appears under the `raw_data` column of the Time Travel logs. For details, see [Using the Time Travel logs](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.LogSchema.md). When this option is set to `false`, only the type of DML is captured. The default value is `false`.

  Type: Boolean

  Required: No
+ `RawDataFormat` – In AWS DMS versions 3.5.0 and higher, when `EnableRawData` is set to `true`, this property specifies the format for the raw data of the DML in a Time Travel log. It can be one of the following:
  + `"TEXT"` – Parsed, readable column names and values for DML events captured during CDC as `Raw` fields.
  + `"HEX"` – The original hexadecimal for column names and values captured for DML events during CDC.

  This property applies to Oracle and Microsoft SQL Server database sources.

  Type: String

  Required: No
+ `OperationsToLog` – Specifies the types of DML operations to log in Time Travel logs. You can specify one or more of the following, separated by commas:
  + `"INSERT"`
  + `"UPDATE"`
  + `"DELETE"`
  + `"COMMIT"`
  + `"ROLLBACK"`
  + `"ALL"`

  The default is `"ALL"`.

  Type: String

  Required: No
+ `MaxRecordSize` – Specifies the maximum size of Time Travel log records that are logged for each row. Use this property to control the growth of Time Travel logs for especially busy tables. The default is 64 KB.

  Type: Integer

  Required: No

For more information on turning on and using Time Travel logs, see the following topics.

**Topics**
+ [Turning on the Time Travel logs for a task](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.TaskEnabling.md)
+ [Using the Time Travel logs](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.LogSchema.md)
+ [How often AWS DMS uploads Time Travel logs to S3](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.UploadsToS3.md)

# Turning on the Time Travel logs for a task
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.TaskEnabling"></a>

You can turn on Time Travel for an AWS DMS task using the task settings described previously. Make sure that your replication task is stopped before you turn on Time Travel.

**To turn on Time Travel using the AWS CLI**

1. Create a DMS task configuration JSON file and add a `TTSettings` section such as the following. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

   ```
    .
    .
    .
       },
   "TTSettings" : {
     "EnableTT" : true,
     "TTS3Settings": {
         "EncryptionMode": "SSE_KMS",
         "ServerSideEncryptionKmsKeyId": "arn:aws:kms:us-west-2:112233445566:key/myKMSKey",
         "ServiceAccessRoleArn": "arn:aws:iam::112233445566:role/dms-tt-s3-access-role",
         "BucketName": "myttbucket",
         "BucketFolder": "myttfolder",
         "EnableDeletingFromS3OnTaskDelete": false
       },
     "TTRecordSettings": {
         "EnableRawData" : true,
         "OperationsToLog": "DELETE,UPDATE",
         "MaxRecordSize": 64
       },
    .
    .
    .
   ```

1. In an appropriate task action, specify this JSON file using the `--replication-task-settings` option. For example, the CLI code fragment following specifies this Time Travel settings file as part of `create-replication-task`.

   ```
   aws dms create-replication-task \
   --target-endpoint-arn arn:aws:dms:us-east-1:112233445566:endpoint:ELS5O7YTYV452CAZR2EYBNQGILFHQIFVPWFRQAY \
   --source-endpoint-arn arn:aws:dms:us-east-1:112233445566:endpoint:HNX2BWIIN5ZYFF7F6UFFZVWTDFFSMTNOV2FTXZA \
   --replication-instance-arn arn:aws:dms:us-east-1:112233445566:rep:ERLHG2UA52EEJJKFYNYWRPCG6T7EPUAB5AWBUJQ \
   --migration-type full-load-and-cdc --table-mappings 'file:///FilePath/mappings.json' \
   --replication-task-settings 'file:///FilePath/task-settings-tt-enabled.json' \
   --replication-task-identifier test-task
                               .
                               .
                               .
   ```

   Here, the name of this Time Travel settings file is `task-settings-tt-enabled.json`.

Similarly, you can specify this file as part of the `modify-replication-task` action.
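
For example, a `modify-replication-task` call that applies the same settings file to an existing, stopped task might look like the following sketch. The task ARN shown is a placeholder:

```
aws dms modify-replication-task \
--replication-task-arn arn:aws:dms:us-east-1:112233445566:task:EXAMPLETASKARN \
--replication-task-settings 'file:///FilePath/task-settings-tt-enabled.json'
```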

Note the special handling of Time Travel logs for the task actions following:
+ `start-replication-task` – When you run a replication task, if an S3 bucket used for Time Travel isn't accessible, the task is marked as `FAILED`.
+ `stop-replication-task` – When the task stops, AWS DMS immediately pushes all Time Travel logs that are currently available for the replication instance to the S3 bucket used for Time Travel.

While a replication task runs, you can change the `EncryptionMode` value from `"SSE_KMS"` to `"SSE_S3"` but not the reverse.

If the size of Time Travel logs for an ongoing task exceeds 1 GB, DMS pushes the logs to S3 within five minutes of reaching that size. After a task is running, if the S3 bucket or KMS key becomes inaccessible, DMS stops pushing logs to this bucket. If you find your logs aren't being pushed to your S3 bucket, check your S3 and AWS KMS permissions. For more details on how often DMS pushes these logs to S3, see [How often AWS DMS uploads Time Travel logs to S3](CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.UploadsToS3.md).

To turn on Time Travel for an existing task from the console, use the JSON editor option under **Task Settings** to add a `TTSettings` section.

# Using the Time Travel logs
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.LogSchema"></a>

*Time Travel log files* are comma-separated value (CSV) files with the fields following.

```
log_timestamp 
component 
dms_source_code_location 
transaction_id 
event_id 
event_timestamp 
lsn/scn 
primary_key
record_type 
event_type 
schema_name 
table_name 
statement 
action 
result 
raw_data
```

After your Time Travel logs are available in S3, you can directly access and query them with tools such as Amazon Athena. Or you can download the logs as you can any file from S3.

The example following shows a Time Travel log where transactions for a table called `mytable` are logged. The line endings for the following log are added for readability.

```
"log_timestamp ","tt_record_type","dms_source_code_location ","transaction_id",
"event_id","event_timestamp","scn_lsn","primary_key","record_type","event_type",
"schema_name","table_name","statement","action","result","raw_data"
"2021-09-23T01:03:00:778230","SOURCE_CAPTURE","postgres_endpoint_wal_engine.c:00819",
"609284109","565612992","2021-09-23 01:03:00.765321+00","00000E9C/D53AB518","","DML",
"UPDATE (3)","dmstest","mytable","","Migrate","","table dmstest.mytable:
UPDATE: id[bigint]:2244937 phone_number[character varying]:'phone-number-482'
age[integer]:82 gender[character]:'f' isactive[character]:'true ' 
date_of_travel[timestamp without time zone]:'2021-09-23 01:03:00.76593' 
description[text]:'TEST DATA TEST DATA TEST DATA TEST DATA'"
```

# How often AWS DMS uploads Time Travel logs to S3
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.UploadsToS3"></a>

To minimize the storage usage of your replication instance, AWS DMS offloads Time Travel logs from it periodically. 

Time Travel logs are pushed to your Amazon S3 bucket in the cases following:
+ If the current size of logs exceeds 1 GB, AWS DMS uploads the logs to S3 within five minutes. Thus, AWS DMS can make up to 12 calls an hour to S3 and AWS KMS for each running task.
+ AWS DMS uploads the logs to S3 every hour, regardless of the size of the logs.
+ When a task is stopped, AWS DMS immediately uploads the time travel logs to S3.

# Logging task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.Logging"></a>

Logging uses Amazon CloudWatch to log information during the migration process. Using logging task settings, you can specify which component activities are logged and what amount of information is written to the log. Logging task settings are written to a JSON file. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 

You can turn on CloudWatch logging in several ways. You can select the `EnableLogging` option on the AWS Management Console when you create a migration task. Or, you can set the `EnableLogging` option to `true` when creating a task using the AWS DMS API. You can also specify `"EnableLogging": true` in the JSON of the logging section of task settings.

When you set `EnableLogging` to `true`, AWS DMS assigns the CloudWatch group name and stream name as follows. You can't set these values directly.
+ **CloudWatchLogGroup**: `dms-tasks-<REPLICATION_INSTANCE_IDENTIFIER>`
+ **CloudWatchLogStream**: `dms-task-<REPLICATION_TASK_EXTERNAL_RESOURCE_ID>`

`<REPLICATION_INSTANCE_IDENTIFIER>` is the identifier of the replication instance. `<REPLICATION_TASK_EXTERNAL_RESOURCE_ID>` is the value of the `<resourcename>` section of the Task ARN. For information about how AWS DMS generates resource ARNs, see [Constructing an Amazon Resource Name (ARN) for AWS DMS](CHAP_Introduction.AWS.ARN.md).

CloudWatch integrates with AWS Identity and Access Management (IAM), and you can specify which CloudWatch actions a user in your AWS account can perform. For more information about working with IAM in CloudWatch, see [Identity and access management for Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/auth-and-access-control-cw.html) and [Logging Amazon CloudWatch API calls](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/logging_cw_api_calls.html) in the *Amazon CloudWatch User Guide*.

To delete the task logs, you can set `DeleteTaskLogs` to `true` in the JSON of the logging section of the task settings.
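
For example, a minimal logging section that turns on CloudWatch logging and requests deletion of the task logs might look like the following sketch:

```
"Logging": {
    "EnableLogging": true,
    "DeleteTaskLogs": true
},
```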

You can specify logging for the following types of events:
+ `FILE_FACTORY` – The file factory manages files used for batch apply and batch load, and manages Amazon S3 endpoints.
+ `METADATA_MANAGER` – The metadata manager manages source and target metadata, partitioning, and table state during replication.
+ `SORTER` – The `SORTER` receives incoming events from the `SOURCE_CAPTURE` process. The events are batched in transactions, and passed to the `TARGET_APPLY` service component. If the `SOURCE_CAPTURE` process produces events faster than the `TARGET_APPLY` component can consume them, the `SORTER` component caches the backlogged events to disk or to a swap file. Cached events are a common cause for running out of storage in replication instances.

  The `SORTER` service component manages cached events, gathers CDC statistics, and reports task latency.
+ `SOURCE_CAPTURE` – Ongoing replication (CDC) data is captured from the source database or service, and passed to the SORTER service component.
+ `SOURCE_UNLOAD` – Data is unloaded from the source database or service during Full Load.
+ `TABLES_MANAGER` – The table manager tracks captured tables, manages the order of table migration, and collects table statistics.
+ `TARGET_APPLY` – Data and data definition language (DDL) statements are applied to the target database.
+ `TARGET_LOAD` – Data is loaded into the target database.
+ `TASK_MANAGER` – The task manager manages running tasks, and breaks tasks down into sub-tasks for parallel data processing.
+ `TRANSFORMATION` – Table-mapping transformation events. For more information, see [Using table mapping to specify task settings](CHAP_Tasks.CustomizingTasks.TableMapping.md).
+ `VALIDATOR`/`VALIDATOR_EXT` – The `VALIDATOR` service component verifies that data was migrated accurately from the source to the target. For more information, see [Data validation](CHAP_Validating.md). 
+ `DATA_RESYNC` – A common component of the data resync feature that manages the data resync flow. For more information, see [AWS DMS data resync](CHAP_Validating.DataResync.md).
+ `RESYNC_UNLOAD` – Data is unloaded from the source database or service during resync process.
+ `RESYNC_APPLY` – Data manipulation language (DML) statements are applied to the target database during resync.

The following logging components generate a large amount of logs when using the `LOGGER_SEVERITY_DETAILED_DEBUG` log severity level:
+ `COMMON`
+ `ADDONS`
+ `DATA_STRUCTURE`
+ `COMMUNICATION`
+ `FILE_TRANSFER`
+ `FILE_FACTORY`

Logging levels other than `DEFAULT` are rarely needed for these components during troubleshooting. We do not recommend changing the logging level from `DEFAULT` for these components unless specifically requested by AWS Support.

After you specify one of the preceding, you can then specify the amount of information that is logged, as shown in the following list. 

The levels of severity are in order from lowest to highest level of information. The higher levels always include information from the lower levels. 
+ `LOGGER_SEVERITY_ERROR` – Error messages are written to the log.
+ `LOGGER_SEVERITY_WARNING` – Warnings and error messages are written to the log.
+ `LOGGER_SEVERITY_INFO` – Informational messages, warnings, and error messages are written to the log.
+ `LOGGER_SEVERITY_DEFAULT` – Informational messages, warnings, and error messages are written to the log.
+ `LOGGER_SEVERITY_DEBUG` – Debug messages, informational messages, warnings, and error messages are written to the log.
+ `LOGGER_SEVERITY_DETAILED_DEBUG` – All information is written to the log.

The following JSON example shows task settings for logging all actions and levels of severity.

```
…
  "Logging": {
    "EnableLogging": true,
    "LogComponents": [
      {
        "Id": "FILE_FACTORY",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "METADATA_MANAGER",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "SORTER",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "SOURCE_CAPTURE",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "SOURCE_UNLOAD",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "TABLES_MANAGER",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "TARGET_APPLY",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      },{
        "Id": "TARGET_LOAD",
        "Severity": "LOGGER_SEVERITY_INFO"
      },{
        "Id": "TASK_MANAGER",
        "Severity": "LOGGER_SEVERITY_DEBUG"
      },{
        "Id": "TRANSFORMATION",
        "Severity": "LOGGER_SEVERITY_DEBUG"
      },{
        "Id": "VALIDATOR",
        "Severity": "LOGGER_SEVERITY_DEFAULT"
      }
    ],
    "CloudWatchLogGroup": null,
    "CloudWatchLogStream": null
  }, 
…
```

# Control table task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.ControlTable"></a>

Control tables provide information about an AWS DMS task. They also provide useful statistics that you can use to plan and manage both the current migration task and future tasks. You can apply these task settings in a JSON file or by choosing **Advanced Settings** on the **Create task** page in the AWS DMS console. The Apply Exceptions table (`dmslogs.awsdms_apply_exceptions`) is always created on database targets. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 

AWS DMS creates control tables only during full load and CDC or CDC-only tasks, and not during full-load-only tasks. 

For full load and CDC (Migrate existing data and replicate ongoing changes) and CDC only (Replicate data changes only) tasks, you can also create additional tables, including the following:
+ **Replication Status (dmslogs.awsdms\_status)** – This table provides details about the current task. These include task status, amount of memory consumed by the task, and the number of changes not yet applied to the target. This table also gives the position in the source database where AWS DMS is currently reading. Also, it indicates if the task is in the full load phase or change data capture (CDC).
+ **Suspended Tables (dmslogs.awsdms\_suspended\_tables)** – This table provides a list of suspended tables as well as the reason they were suspended.
+ **Replication History (dmslogs.awsdms\_history)** – This table provides information about replication history. This information includes the number and volume of records processed during the task, latency at the end of a CDC task, and other statistics.

The Apply Exceptions table (`dmslogs.awsdms_apply_exceptions`) contains the following parameters.


| Column | Type | Description | 
| --- | --- | --- | 
|  TASK\_NAME  |  nvchar  |  The Resource ID of the AWS DMS task. Resource ID can be found in the task ARN.  | 
|  TABLE\_OWNER  |  nvchar  |  The table owner.  | 
|  TABLE\_NAME  |  nvchar  |  The table name.  | 
|  ERROR\_TIME  |  timestamp  |  The time the exception (error) occurred.  | 
|  STATEMENT  |  nvchar  |  The statement that was being run when the error occurred.  | 
|  ERROR  |  nvchar  |  The error name and description.  | 

The Replication Status table (`dmslogs.awsdms_status`) contains the current status of the task and the target database. It has the following settings.


| Column | Type | Description | 
| --- | --- | --- | 
|  SERVER\_NAME  |  nvchar  |  The name of the machine where the replication task is running.  | 
|  TASK\_NAME  |  nvchar  |  The Resource ID of the AWS DMS task. Resource ID can be found in the task ARN.  | 
|  TASK\_STATUS  |  varchar  |  One of the following values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ControlTable.html) Task status is set to FULL LOAD as long as there is at least one table in full load. After all tables have been loaded, the task status changes to CHANGE PROCESSING if CDC is enabled. The task is set to NOT RUNNING before you start the task, or after the task completes.  | 
| STATUS\_TIME |  timestamp  |  The timestamp of the task status.  | 
|  PENDING\_CHANGES  |  int  |  The number of change records that were committed in the source database and cached in the memory and disk of your replication instance.  | 
|  DISK\_SWAP\_SIZE  |  int  |  The amount of disk space used by old or offloaded transactions.  | 
| TASK\_MEMORY |  int  |  Current memory used, in MB.  | 
|  SOURCE\_CURRENT\_POSITION  |  varchar  |  The position in the source database that AWS DMS is currently reading from.  | 
|  SOURCE\_CURRENT\_TIMESTAMP  |  timestamp  |  The timestamp in the source database that AWS DMS is currently reading from.  | 
|  SOURCE\_TAIL\_POSITION  |  varchar  |  The position of the oldest start transaction that isn't committed. This value is the newest position that you can revert to without losing any changes.  | 
|  SOURCE\_TAIL\_TIMESTAMP  |  timestamp  |  The timestamp of the oldest start transaction that isn't committed. This value is the newest timestamp that you can revert to without losing any changes.  | 
|  SOURCE\_TIMESTAMP\_APPLIED  |  timestamp  |  The timestamp of the last transaction commit. In a bulk apply process, this value is the timestamp for the commit of the last transaction in the batch.  | 

The Suspended table (`dmslogs.awsdms_suspended_tables`) contains the following parameters.


| Column | Type | Description | 
| --- | --- | --- | 
|  SERVER\_NAME  |  nvchar  |  The name of the machine where the replication task is running.  | 
|  TASK\_NAME  |  nvchar  |  The name of the AWS DMS task.  | 
|  TABLE\_OWNER  |  nvchar  |  The table owner.  | 
|  TABLE\_NAME  |  nvchar  |  The table name.  | 
|  SUSPEND\_REASON  |  nvchar  |  The reason for suspension.  | 
|  SUSPEND\_TIMESTAMP  |  timestamp  |  The time the suspension occurred.  | 

The Replication History table (`dmslogs.awsdms_history`) contains the following parameters.


| Column | Type | Description | 
| --- | --- | --- | 
|  SERVER\_NAME  |  nvchar  |  The name of the machine where the replication task is running.  | 
|  TASK\_NAME  |  nvchar  |  The Resource ID of the AWS DMS task. Resource ID can be found in the task ARN.  | 
|  TIMESLOT\_TYPE  |  varchar  |  One of the following values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ControlTable.html) If the task is running both full load and CDC, two history records are written to the time slot.  | 
| TIMESLOT |  timestamp  |  The ending timestamp of the time slot.  | 
|  TIMESLOT\_DURATION  |  int  |  The duration of the time slot, in minutes.  | 
|  TIMESLOT\_LATENCY  |  int  |  The target latency at the end of the time slot, in seconds. This value only applies to CDC time slots.  | 
| RECORDS |  int  |  The number of records processed during the time slot.  | 
|  TIMESLOT\_VOLUME  |  int  |  The volume of data processed, in MB.  | 

The Validation Failure table (`awsdms_validation_failures_v1`) contains all the data validation failures for a task. For more information, see [Data Validation Troubleshooting](CHAP_Validating.md#CHAP_Validating.Troubleshooting).

Additional control table settings include the following:
+ `HistoryTimeslotInMinutes` – Use this option to indicate the length of each time slot in the Replication History table. The default is 5 minutes.
+ `ControlSchema` – Use this option to indicate the database schema name for the control tables for the AWS DMS target. If you don't enter any information for this option, then the tables are copied to the default location in the database as listed following: 
  + PostgreSQL, Public
  + Oracle, the target schema
  + Microsoft SQL Server, dbo in the target database
  + MySQL, awsdms\_control
  + MariaDB, awsdms\_control
  + Amazon Redshift, Public
  + DynamoDB, created as individual tables in the database
  + IBM Db2 LUW, awsdms\_control
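
As a sketch, a `ControlTablesSettings` section that places the control tables in the `dmslogs` schema and turns on the optional tables might look like the following. The `*TableEnabled` flags shown correspond to the optional tables described earlier; confirm the exact names against your AWS DMS version:

```
"ControlTablesSettings": {
    "ControlSchema": "dmslogs",
    "HistoryTimeslotInMinutes": 5,
    "StatusTableEnabled": true,
    "SuspendedTablesTableEnabled": true,
    "HistoryTableEnabled": true
},
```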

# Stream buffer task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.StreamBuffer"></a>

You can set stream buffer settings using the AWS CLI, including the following. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 
+ `StreamBufferCount` – Use this option to specify the number of data stream buffers for the migration task. The default stream buffer number is 3. Increasing the value of this setting might increase the speed of data extraction. However, this performance increase is highly dependent on the migration environment, including the source system and instance class of the replication server. The default is sufficient for most situations.
+ `StreamBufferSizeInMB` – Use this option to indicate the maximum size of each data stream buffer. The default size is 8 MB. You might need to increase the value for this option when you work with very large LOBs. You also might need to increase the value if you receive a message in the log files that the stream buffer size is insufficient. When calculating the size of this option, you can use the following equation: `[Max LOB size (or LOB chunk size)]*[number of LOB columns]*[number of stream buffers]*[number of tables loading in parallel per task (MaxFullLoadSubTasks)]*3`
+ `CtrlStreamBufferSizeInMB` – Use this option to set the size of the control stream buffer. The value is in megabytes, and can be 1–8. The default value is 5. You might need to increase this when working with a very large number of tables, such as tens of thousands of tables.
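
As a worked example of the sizing equation above, suppose a 4 MB LOB chunk size, 2 LOB columns, 3 stream buffers, and 8 tables loading in parallel: 4 × 2 × 3 × 8 × 3 = 576 MB. A `StreamBufferSettings` sketch using the default values described above follows:

```
"StreamBufferSettings": {
    "StreamBufferCount": 3,
    "StreamBufferSizeInMB": 8,
    "CtrlStreamBufferSizeInMB": 5
},
```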

# Change processing tuning settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning"></a>

The following settings determine how AWS DMS handles changes for target tables during change data capture (CDC). Several of these settings depend on the value of the target metadata parameter `BatchApplyEnabled`. For more information on the `BatchApplyEnabled` parameter, see [Target metadata task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.md). For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

Change processing tuning settings include the following:

The following settings apply only when the target metadata parameter `BatchApplyEnabled` is set to `true`.
+ `BatchApplyPreserveTransaction` – If set to `true`, transactional integrity is preserved and a batch is guaranteed to contain all the changes within a transaction from the source. The default value is `true`. This setting applies only to Oracle target endpoints.

  If set to `false`, there can be temporary lapses in transactional integrity to improve performance. There is no guarantee that all the changes within a transaction from the source are applied to the target in a single batch. 

  By default, AWS DMS processes changes in a transactional mode, which preserves transactional integrity. If you can afford temporary lapses in transactional integrity, you can use the batch optimized apply option instead. This option efficiently groups transactions and applies them in batches. Using the batch optimized apply option almost always violates referential integrity constraints. So we recommend that you turn these constraints off during the migration process and turn them on again as part of the cutover process. 
+ `BatchApplyTimeoutMin` – Sets the minimum amount of time in seconds that AWS DMS waits between each application of batch changes. The default value is 1.
+ `BatchApplyTimeoutMax` – Sets the maximum amount of time in seconds that AWS DMS waits between each application of batch changes before timing out. The default value is 30.
+ `BatchApplyMemoryLimit` – Sets the maximum amount of memory (in MB) to use for pre-processing in **Batch optimized apply mode**. The default value is 500.
+ `BatchSplitSize` – Sets the maximum number of changes applied in a single batch. The default value is 0, which means that no limit is applied.

The following settings apply only when the target metadata parameter `BatchApplyEnabled` is set to `false`.
+ `MinTransactionSize` – Sets the minimum number of changes to include in each transaction. The default value is 1000.
+ `CommitTimeout` – Sets the maximum time in seconds for AWS DMS to collect transactions in batches before declaring a timeout. The default value is 1.

For bidirectional replication, the following setting applies only when the target metadata parameter `BatchApplyEnabled` is set to `false`.
+ `LoopbackPreventionSettings` – These settings provide loopback prevention for each ongoing replication task in any pair of tasks involved in bidirectional replication. *Loopback prevention* prevents identical changes from being applied in both directions of the bidirectional replication, which can corrupt data. For more information about bidirectional replication, see [Performing bidirectional replication](CHAP_Task.CDC.md#CHAP_Task.CDC.Bidirectional).

AWS DMS attempts to keep transaction data in memory until the transaction is fully committed to the source, the target, or both. However, transactions that are larger than the allocated memory or that aren't committed within the specified time limit are written to disk.

The following settings apply to change processing tuning regardless of the change processing mode.
+ `MemoryLimitTotal` – Sets the maximum size (in MB) that all transactions can occupy in memory before being written to disk. The default value is 1024.
+ `MemoryKeepTime` – Sets the maximum time in seconds that each transaction can stay in memory before being written to disk. The duration is calculated from the time that AWS DMS started capturing the transaction. The default value is 60. 
+ `StatementCacheSize` – Sets the maximum number of prepared statements to store on the server for later execution when applying changes to the target. The default value is 50, and the maximum value is 200.
+ `RecoveryTimeout` – When resuming a task in CDC mode, `RecoveryTimeout` controls how long (in minutes) the task waits to encounter the resume checkpoint. If the checkpoint isn't encountered within the configured timeframe, the task fails. The default value is -1, which means the task waits indefinitely for the checkpoint event.

Following is an example of how task settings that handle change processing tuning appear in a task settings JSON file:

```
"ChangeProcessingTuning": {
        "BatchApplyPreserveTransaction": true,
        "BatchApplyTimeoutMin": 1,
        "BatchApplyTimeoutMax": 30,
        "BatchApplyMemoryLimit": 500,
        "BatchSplitSize": 0,
        "MinTransactionSize": 1000,
        "CommitTimeout": 1,
        "MemoryLimitTotal": 1024,
        "MemoryKeepTime": 60,
        "StatementCacheSize": 50,
        "RecoveryTimeout": -1
}
```
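When you generate or edit such a file programmatically, a lightweight sanity check against the documented ranges can catch typos before you create the task. The following sketch is illustrative only — the bounds come from this section, and `check_tuning` is a hypothetical helper, not part of any AWS SDK:

```python
import json

# Documented bounds for selected ChangeProcessingTuning settings
# (taken from this section); check_tuning is a hypothetical helper.
BOUNDS = {
    "StatementCacheSize": (1, 200),    # default 50, maximum 200
    "BatchApplyTimeoutMin": (1, None),
    "BatchApplyTimeoutMax": (1, None),
}

def check_tuning(settings: dict) -> list:
    problems = []
    for key, (lo, hi) in BOUNDS.items():
        if key in settings:
            v = settings[key]
            if v < lo or (hi is not None and v > hi):
                problems.append(f"{key}={v} outside {lo}..{hi}")
    # The minimum batch wait must not exceed the maximum.
    if settings.get("BatchApplyTimeoutMin", 1) > settings.get("BatchApplyTimeoutMax", 30):
        problems.append("BatchApplyTimeoutMin exceeds BatchApplyTimeoutMax")
    return problems

tuning = json.loads("""
{
    "BatchApplyTimeoutMin": 1,
    "BatchApplyTimeoutMax": 30,
    "StatementCacheSize": 50
}
""")
print(check_tuning(tuning))  # [] — all values within documented ranges
```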

To control the frequency of writes to an Amazon S3 target during a data replication task, you can configure the `cdcMaxBatchInterval` and `cdcMinFileSize` extra connection attributes. This can result in better performance when analyzing the data without any additional overhead operations. For more information, see [Endpoint settings when using Amazon S3 as a target for AWS DMS](CHAP_Target.S3.md#CHAP_Target.S3.Configuring).
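For example, setting the following extra connection attributes on the S3 target endpoint (the values shown are placeholders; see the linked topic for the defaults and units) makes DMS roll over output files on a time or size threshold, whichever is reached first:

```
cdcMaxBatchInterval=60;cdcMinFileSize=32000
```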

# Data validation task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.DataValidation"></a>

You can ensure that your data was migrated accurately from the source to the target. If you enable validation for a task, AWS DMS begins comparing the source and target data immediately after a full load is performed for a table. For more information about task data validation, its requirements, the scope of its database support, and the metrics it reports, see [AWS DMS data validation](CHAP_Validating.md). For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

The data validation settings and their values include the following:
+ `EnableValidation` – Enables data validation when set to `true`. Otherwise, validation is disabled for the task. The default value is `false`.
+ `ValidationMode` – Controls how AWS DMS validates data in the target table against the source table. Starting with replication engine version 3.5.4, DMS automatically sets this to `GROUP_LEVEL` for supported migration paths, delivering enhanced validation performance and significantly faster processing for large datasets. This enhancement applies to migrations for the migration paths listed in [AWS DMS data resync](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.DataResync.html#CHAP_DataResync.limitations). For all other migration paths, the validation mode defaults to `ROW_LEVEL`. 
**Note**  
Irrespective of the setting, AWS DMS validates all rows configured via table validation.
+ `FailureMaxCount` – Specifies the maximum number of records that can fail validation before validation is suspended for the task. The default value is 10,000. If you want the validation to continue regardless of the number of records that fail validation, set this value higher than the number of records in the source.
+ `HandleCollationDiff` – When this option is set to `true`, the validation accounts for column collation differences between source and target databases. Otherwise, any such differences in column collation are ignored for validation. Column collations can dictate the order of rows, which is important for data validation. Setting `HandleCollationDiff` to true resolves those collation differences automatically and prevents false positives in data validation. The default value is `false`.
+ `RecordFailureDelayInMinutes` – Specifies the delay time in minutes before reporting any validation failure details.

  If the overall DMS task CDC latency is greater than the value of `RecordFailureDelayInMinutes`, the latency value takes precedence. For example, if `RecordFailureDelayInMinutes` is 5 and the CDC latency is 7 minutes, DMS waits 7 minutes to report the validation failure details.
+ `RecordFailureDelayLimitInMinutes` – Specifies the delay before reporting any validation failure details. AWS DMS uses the task latency to recognize the actual delay for changes to reach the target, which prevents false positives. This setting overrides the delay derived from the DMS task's CDC latency and enables you to set a greater delay before reporting any validation metrics. The default value is 0.
+ `RecordSuspendDelayInMinutes` – Specifies the delay time in minutes before tables are suspended from validation due to the error threshold set in `FailureMaxCount`.
+ `SkipLobColumns` – When this option is set to `true`, AWS DMS skips data validation for all the LOB columns in the tables that are part of the task validation. The default value is `false`.
+ `TableFailureMaxCount` – Specifies the maximum number of rows in one table that can fail validation before validation is suspended for the table. The default value is 1,000. 
+ `ThreadCount` – Specifies the number of execution threads that AWS DMS uses during validation. Each thread selects not-yet-validated data from the source and target to compare and validate. The default value is 5. If you set `ThreadCount` to a higher number, AWS DMS can complete the validation faster. However, AWS DMS then runs more simultaneous queries, consuming more resources on the source and the target.
+ `ValidationOnly` – When this option is set to `true`, the task performs data validation without performing any migration or replication of data. The default value is `false`. You can't modify the `ValidationOnly` setting after the task is created.

  You must set **TargetTablePrepMode** to `DO_NOTHING` (the default for a validation only task) and set **Migration Type** to one of the following:
  + Full load – Set the task **Migration type** to **Migrate existing data** in the AWS DMS console. Or, in the AWS DMS API, set the migration type to `full-load`.
  + CDC – Set the task **Migration type** to **Replicate data changes only** in the AWS DMS console. Or, in the AWS DMS API, set the migration type to `cdc`.

  Regardless of the migration type chosen, data isn't actually migrated or replicated during a validation only task.

  For more information, see [Validation only tasks](CHAP_Validating.md#CHAP_Validating.ValidationOnly).
**Important**  
The `ValidationOnly` setting is immutable. It can't be modified for a task after that task is created.
+ `ValidationPartialLobSize` – Specifies whether to perform partial validation of LOB columns instead of validating all of the data stored in the column. You might find this useful when you are migrating only part of the LOB data rather than the whole LOB data set. The value is in KB units. The default value is 0, which means AWS DMS validates all the LOB column data. For example, `"ValidationPartialLobSize": 32` means that AWS DMS only validates the first 32 KB of the column data in both the source and target.
+ `PartitionSize` – Specifies the batch size of records to read for comparison from both source and target. The default is 10,000.
+ `ValidationQueryCdcDelaySeconds` – The amount of time the first validation query is delayed on both source and target for each CDC update. This might help reduce resource contention when migration latency is high. A validation only task automatically sets this option to 180 seconds. The default is 0.
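The precedence between `RecordFailureDelayInMinutes` and the task's CDC latency described above amounts to taking the larger of the two values — a minimal illustrative sketch:

```python
def failure_report_delay(record_failure_delay_min: int, cdc_latency_min: int) -> int:
    """DMS waits for whichever is larger: the configured delay or the
    observed CDC latency (illustrative restatement of the rule above)."""
    return max(record_failure_delay_min, cdc_latency_min)

print(failure_report_delay(5, 7))  # 7 — the CDC latency takes precedence
```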

For example, the following JSON enables data validation with twice the default number of threads. It also accounts for differences in record order caused by column collation differences in PostgreSQL endpoints. Also, it provides a validation reporting delay to account for additional time to process any validation failures.

```
"ValidationSettings": {
     "EnableValidation": true,
     "ThreadCount": 10,
     "HandleCollationDiff": true,
     "RecordFailureDelayLimitInMinutes": 30
  }
```
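To illustrate the `ValidationPartialLobSize` setting described above, the following sketch compares only the first N KB of each LOB value, the way a partial validation would. This is a model of the documented semantics, not how AWS DMS implements validation internally:

```python
def lobs_match(source: bytes, target: bytes, partial_lob_size_kb: int) -> bool:
    """Compare LOB values; 0 means compare everything, otherwise compare
    only the first partial_lob_size_kb KB (mirrors ValidationPartialLobSize)."""
    if partial_lob_size_kb == 0:
        return source == target
    limit = partial_lob_size_kb * 1024
    return source[:limit] == target[:limit]

src = b"A" * 40_000
tgt = b"A" * 32_768 + b"B" * 7_232   # differs only after the first 32 KB
print(lobs_match(src, tgt, 32))  # True  — the first 32 KB match
print(lobs_match(src, tgt, 0))   # False — a full comparison sees the difference
```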

**Note**  
For an Oracle endpoint, AWS DMS uses `DBMS_CRYPTO` to validate BLOBs. If your Oracle endpoint uses BLOBs, grant the `execute` permission for `DBMS_CRYPTO` to the user account that accesses the Oracle endpoint. To do this, run the following statement.  

```
grant execute on sys.dbms_crypto to dms_endpoint_user;
```

# Data resync settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.DataResyncSettings"></a>

The Data resync feature allows you to resynchronize the target database with your source database based on the data validation report. For more information, see [AWS DMS data validation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html).

You can add parameters for `ResyncSettings` in `ReplicationTaskSettings` to configure the resync process. For more information, see [Task settings example](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html#CHAP_Tasks.CustomizingTasks.TaskSettings.Example) in [Specifying task settings for AWS Database Migration Service tasks](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html).

**Note**  
The `ResyncSchedule` and `MaxResyncTime` parameters are required if the resync process is enabled and the task has a CDC component. They aren't valid for full-load only tasks.

The Data resync parameter settings and values are as follows:

`EnableResync`  
Enables Data resync feature when set to `true`. By default, Data resync is disabled.  
**Datatype**: Boolean  
**Required**: No  
**Default**: `false`  
**Validation**: Should not be null if `ResyncSettings` parameter is present in `TaskSettings`.

`ResyncSchedule`  
Time window for the Data resync feature to be in effect. Must be present in Cron format. For more information, see [Cron expression rules](CHAP_Validating.DataResync.md#CHAP_DataResync.cron).  
**Datatype**: String  
**Required**: No  
**Validation**:   
+ Must be present in Cron expression format.
+ Should not be null for tasks with CDC that have `EnableResync` set to `true`.
+ Cannot be set for tasks without a CDC component.

`MaxResyncTime`  
Maximum time limit in minutes for the Data resync feature to be in effect.  
**Datatype**: Integer  
**Required**: No  
**Validation**:   
+ Should not be null for tasks with CDC.
+ Not required for tasks without CDC.
+ Minimum value: `5 minutes`, Maximum value: `14400 minutes` (10 days).

`ValidationOnlyTaskId`  
Unique ID of the validation task. The validation only task ID is appended at the end of an ARN. For example:  
+ Validation only task ARN: `arn:aws:dms:us-west-2:123456789012:task:6DG4CLGJ5JSJR67CFD7UDXFY7KV6CYGRICL6KWI`
+ Validation only task ID: `6DG4CLGJ5JSJR67CFD7UDXFY7KV6CYGRICL6KWI`
**Datatype**: String  
**Required**: No  
**Validation**: Should not be null for tasks with Data resync feature enabled and validation disabled.  
Example:  

```
"ResyncSettings": {
    "EnableResync": true,
    "ResyncSchedule": "30 9 ? * MON-FRI", 
    "MaxResyncTime": 400,  
    "ValidationTaskId": "JXPP94804DJOEWIJD9348R3049"
},
```
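A client-side sketch of the validation rules listed above might look like the following. The rules are taken from this section; `validate_resync` and `task_id_from_arn` are hypothetical helpers, not part of the AWS SDK:

```python
def task_id_from_arn(arn: str) -> str:
    """The validation only task ID is the final segment of the task ARN."""
    return arn.rsplit(":", 1)[-1]

def validate_resync(settings: dict, task_has_cdc: bool) -> list:
    """Apply the documented ResyncSettings rules; returns a list of problems."""
    problems = []
    if settings.get("EnableResync") is None:
        problems.append("EnableResync must not be null when ResyncSettings is present")
    if task_has_cdc and settings.get("EnableResync"):
        if not settings.get("ResyncSchedule"):
            problems.append("ResyncSchedule is required for CDC tasks")
        t = settings.get("MaxResyncTime")
        if t is None or not (5 <= t <= 14400):
            problems.append("MaxResyncTime must be 5..14400 minutes")
    return problems

arn = "arn:aws:dms:us-west-2:123456789012:task:6DG4CLGJ5JSJR67CFD7UDXFY7KV6CYGRICL6KWI"
print(task_id_from_arn(arn))
print(validate_resync(
    {"EnableResync": True, "ResyncSchedule": "30 9 ? * MON-FRI", "MaxResyncTime": 400},
    task_has_cdc=True))  # [] — settings satisfy the documented rules
```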

# Task settings for change processing DDL handling
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.DDLHandling"></a>



The following settings determine how AWS DMS handles data definition language (DDL) changes for target tables during change data capture (CDC). For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 

Task settings to handle change processing DDL include the following:
+ `HandleSourceTableDropped` – Set this option to `true` to drop the target table when the source table is dropped.
+ `HandleSourceTableTruncated` – Set this option to `true` to truncate the target table when the source table is truncated.
+ `HandleSourceTableAltered` – Set this option to `true` to alter the target table when the source table is altered.

Following is an example of how task settings that handle change processing DDL appear in a task setting JSON file:

```
"ChangeProcessingDdlHandlingPolicy": {
    "HandleSourceTableDropped": true,
    "HandleSourceTableTruncated": true,
    "HandleSourceTableAltered": true
},
```

**Note**  
For information about which DDL statements are supported for a specific endpoint, see the topic describing that endpoint.

# Character substitution task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.CharacterSubstitution"></a>

You can specify that your replication task perform character substitutions on the target database for all source database columns with the AWS DMS `STRING` or `WSTRING` data type. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example). 

You can configure character substitution for any task with endpoints from the following source and target databases:
+ Source databases:
  + Oracle
  + Microsoft SQL Server
  + MySQL
  + MariaDB
  + PostgreSQL
  + SAP Adaptive Server Enterprise (ASE)
  + IBM Db2 LUW
+ Target databases:
  + Oracle
  + Microsoft SQL Server
  + MySQL
  + MariaDB
  + PostgreSQL
  + SAP Adaptive Server Enterprise (ASE)
  + Amazon Redshift

You can specify character substitutions using the `CharacterSetSettings` parameter in your task settings. These character substitutions occur for characters specified using the Unicode code point value in hexadecimal notation. You can implement the substitutions in two phases, in the following order if both are specified:

1. **Individual character replacement** – AWS DMS can replace the values of selected characters on the source with specified replacement values of corresponding characters on the target. Use the `CharacterReplacements` array in `CharacterSetSettings` to select all source characters having the Unicode code points you specify. Use this array also to specify the replacement code points for the corresponding characters on the target. 

   To select all characters on the source that have a given code point, set an instance of `SourceCharacterCodePoint` in the `CharacterReplacements` array to that code point. Then specify the replacement code point for all equivalent target characters by setting the corresponding instance of `TargetCharacterCodePoint` in this array. To delete target characters instead of replacing them, set the appropriate instances of `TargetCharacterCodePoint` to zero (0). You can replace or delete as many different values of target characters as you want by specifying additional pairs of `SourceCharacterCodePoint` and `TargetCharacterCodePoint` settings in the `CharacterReplacements` array. If you specify the same value for multiple instances of `SourceCharacterCodePoint`, the value of the last corresponding setting of `TargetCharacterCodePoint` applies on the target.

   For example, suppose that you specify the following values for `CharacterReplacements`.

   ```
   "CharacterSetSettings": {
       "CharacterReplacements": [ {
           "SourceCharacterCodePoint": 62,
           "TargetCharacterCodePoint": 61
           }, {
           "SourceCharacterCodePoint": 42,
           "TargetCharacterCodePoint": 41
           }
       ]
   }
   ```

   In this example, AWS DMS replaces all characters with the source code point hex value 62 on the target by characters with the code point value 61. Also, AWS DMS replaces all characters with the source code point 42 on the target by characters with the code point value 41. In other words, AWS DMS replaces all instances of the letter `'b'` on the target with the letter `'a'`. Similarly, AWS DMS replaces all instances of the letter `'B'` on the target with the letter `'A'`.

1. **Character set validation and replacement** – After any individual character replacements complete, AWS DMS can make sure that all target characters have valid Unicode code points in the single character set that you specify. You use `CharacterSetSupport` in `CharacterSetSettings` to configure this target character verification and modification. To specify the verification character set, set `CharacterSet` in `CharacterSetSupport` to the character set's string value. (The possible values for `CharacterSet` follow.) You can have AWS DMS modify the invalid target characters in one of the following ways:
   + Specify a single replacement Unicode code point for all invalid target characters, regardless of their current code point. To configure this replacement code point, set `ReplaceWithCharacterCodePoint` in `CharacterSetSupport` to the specified value.
   + Configure the deletion of all invalid target characters by setting `ReplaceWithCharacterCodePoint` to zero (0).

   For example, suppose that you specify the following values for `CharacterSetSupport`.

   ```
   "CharacterSetSettings": {
       "CharacterSetSupport": {
           "CharacterSet": "UTF16_PlatformEndian",
           "ReplaceWithCharacterCodePoint": 0
       }
   }
   ```

   In this example, AWS DMS deletes any characters found on the target that are invalid in the `"UTF16_PlatformEndian"` character set. So, any characters specified with the hex value `2FB6` are deleted. This value is invalid because this is a 4-byte Unicode code point and UTF16 character sets accept only characters with 2-byte code points.
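The individual character replacement semantics from the first phase can be modeled with Python's `str.translate`, where mapping a code point to `None` deletes it, analogous to a `TargetCharacterCodePoint` of 0. This models the documented behavior; it is not how AWS DMS implements substitution:

```python
# Build a translation table from CharacterReplacements-style entries
# (for a repeated SourceCharacterCodePoint the last entry wins,
# and a TargetCharacterCodePoint of 0 deletes the character).
replacements = [
    {"SourceCharacterCodePoint": 0x62, "TargetCharacterCodePoint": 0x61},  # 'b' -> 'a'
    {"SourceCharacterCodePoint": 0x42, "TargetCharacterCodePoint": 0x41},  # 'B' -> 'A'
    {"SourceCharacterCodePoint": 0x5F, "TargetCharacterCodePoint": 0},     # '_' -> deleted
]
table = {
    r["SourceCharacterCodePoint"]:
        (None if r["TargetCharacterCodePoint"] == 0 else r["TargetCharacterCodePoint"])
    for r in replacements
}
print("Bob_B".translate(table))  # AoaA
```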

**Note**  
The replication task completes all of the specified character substitutions before starting any global or table-level transformations that you specify through table mapping. For more information about table mapping, see [Using table mapping to specify task settings](CHAP_Tasks.CustomizingTasks.TableMapping.md).  
Character substitution doesn't support LOB data types. This includes any datatype that DMS considers to be a LOB data type. For example, the `Extended` datatype in Oracle is considered to be a LOB. For more information about source datatypes, see [Source data types for Oracle](CHAP_Source.Oracle.md#CHAP_Source.Oracle.DataTypes) following. 

The values that AWS DMS supports for `CharacterSet` appear in the table following.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.CharacterSubstitution.html)

# Before image task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.BeforeImage"></a>

When writing CDC updates to a data-streaming target like Kinesis or Apache Kafka, you can view a source database row's original values before they were changed by an update. To make this possible, AWS DMS populates a *before image* of update events based on data supplied by the source database engine. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).

To do so, you use the `BeforeImageSettings` parameter, which adds a new JSON attribute to every update operation with values collected from the source database system. 

Make sure to apply `BeforeImageSettings` only to full load plus CDC tasks or CDC only tasks. Full load plus CDC tasks migrate existing data and replicate ongoing changes. CDC only tasks replicate data changes only. 

Don't apply `BeforeImageSettings` to tasks that are full load only.

Possible options for `BeforeImageSettings` are the following:
+ `EnableBeforeImage` – Turns on before imaging when set to `true`. The default is `false`. 
+ `FieldName` – Assigns a name to the new JSON attribute. When `EnableBeforeImage` is `true`, `FieldName` is required and can't be empty.
+ `ColumnFilter` – Specifies a column to add by using before imaging. To add only columns that are part of the table's primary keys, use the default value, `pk-only`. To add any column that has a before image value, use `all`. Note that the before image doesn't support large binary object (LOB) data types such as CLOB and BLOB.

The following shows an example of the use of `BeforeImageSettings`. 

```
"BeforeImageSettings": {
    "EnableBeforeImage": true,
    "FieldName": "before-image",
    "ColumnFilter": "pk-only"
  }
```
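With these settings, each CDC update record that DMS delivers to the stream carries the original values under the configured field name. The following fragment is purely illustrative — the column names are invented, the `metadata` object that DMS also includes is omitted, and the exact record envelope is described in the Kinesis and Kafka topics linked following:

```
{
    "data": {
        "id": 42,
        "name": "new-name"
    },
    "before-image": {
        "id": 42
    }
}
```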

For information on before image settings for Kinesis, including additional table mapping settings, see [Using a before image to view original values of CDC rows for a Kinesis data stream as a target](CHAP_Target.Kinesis.md#CHAP_Target.Kinesis.BeforeImage).

For information on before image settings for Kafka, including additional table mapping settings, see [Using a before image to view original values of CDC rows for Apache Kafka as a target](CHAP_Target.Kafka.md#CHAP_Target.Kafka.BeforeImage).

# Error handling task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.ErrorHandling"></a>

You can set the error handling behavior of your replication task using the following settings. For information about how to use a task configuration file to set task settings, see [Task settings example](CHAP_Tasks.CustomizingTasks.TaskSettings.md#CHAP_Tasks.CustomizingTasks.TaskSettings.Example).
+ `DataErrorPolicy` – Determines the action AWS DMS takes when there is an error related to data processing at the record level. Some examples of data processing errors include conversion errors, errors in transformation, and bad data. The default is `LOG_ERROR`.
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `DataErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `DataTruncationErrorPolicy` – Determines the action AWS DMS takes when data is truncated. The default is `LOG_ERROR`.
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `DataErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `DataErrorEscalationPolicy` – Determines the action AWS DMS takes when the maximum number of errors (set in the `DataErrorEscalationCount` parameter) is reached. The default is `SUSPEND_TABLE`.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `DataErrorEscalationCount` – Sets the maximum number of errors that can occur to the data for a specific record. When this number is reached, the data for the table that contains the error record is handled according to the policy set in the `DataErrorEscalationPolicy`. The default is 0. 
+ `EventErrorPolicy` – Determines the action AWS DMS takes when an error occurs while sending a task-related event. Its possible values are the following:
  + `IGNORE` – The task continues and any data associated with that event is ignored.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `TableErrorPolicy` – Determines the action AWS DMS takes when an error occurs when processing data or metadata for a specific table. This error only applies to general table data and isn't an error that relates to a specific record. The default is `SUSPEND_TABLE`.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `TableErrorEscalationPolicy` – Determines the action AWS DMS takes when the maximum number of errors (set using the `TableErrorEscalationCount` parameter) is reached. The default and only user setting is `STOP_TASK`, where the task is stopped and manual intervention is required.
+ `TableErrorEscalationCount` – The maximum number of errors that can occur to the general data or metadata for a specific table. When this number is reached, the data for the table is handled according to the policy set in the `TableErrorEscalationPolicy`. The default is 0. 
+ `RecoverableErrorCount` – The maximum number of attempts made to restart a task when an environmental error occurs. After the system attempts to restart the task the designated number of times, the task is stopped and manual intervention is required. The default value is -1.

  When you set this value to -1, the number of retries that DMS attempts varies based on the returned error type as follows:
  + **Running state, recoverable error**: If a recoverable error such as a lost connection or a target apply failure occurs, DMS retries the task nine times.
  + **Starting state, recoverable error**: DMS retries the task six times.
  + **Running state, fatal error handled by DMS**: DMS retries the task six times.
  + **Running state, fatal error not handled by DMS**: DMS does not retry the task.
  + **Other than above**: AWS DMS retries the task indefinitely.

  Set this value to 0 to never attempt to restart a task. 

  We recommend that you set `RecoverableErrorCount` and `RecoverableErrorInterval` to values such that there are sufficient retries at sufficient intervals for your DMS task to recover properly. If a fatal error occurs, DMS stops making restart attempts in most scenarios.
+ `RecoverableErrorInterval` – The number of seconds that AWS DMS waits between attempts to restart a task. The default is 5. 
+ `RecoverableErrorThrottling` – When enabled, the interval between attempts to restart a task is increased in a series based on the value of `RecoverableErrorInterval`. For example, if `RecoverableErrorInterval` is set to 5 seconds, then the next retry will happen after 10 seconds, then 20, then 40 seconds and so on. The default is `true`. 
+ `RecoverableErrorThrottlingMax` – The maximum number of seconds that AWS DMS waits between attempts to restart a task if `RecoverableErrorThrottling` is enabled. The default is 1800. 
+ `RecoverableErrorStopRetryAfterThrottlingMax` – The default value is `true`, and DMS stops resuming the task after the maximum number of seconds that AWS DMS waits between recovery attempts, set by `RecoverableErrorThrottlingMax`, is reached. When set to `false`, DMS keeps resuming the task after that maximum is reached, until `RecoverableErrorCount` is reached.
+ `ApplyErrorDeletePolicy` – Determines what action AWS DMS takes when there is a conflict with a DELETE operation. The default is `IGNORE_RECORD`. Possible values are the following:
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `ApplyErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `ApplyErrorInsertPolicy` – Determines what action AWS DMS takes when there is a conflict with an INSERT operation. The default is `LOG_ERROR`. Possible values are the following:
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `ApplyErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
  + `INSERT_RECORD` – If there is an existing target record with the same primary key as the inserted source record, the target record is updated.
**Note**  
**In Transactional Apply mode**: In this process, the system first attempts to insert the record. If the insert fails due to a primary key conflict, it deletes the existing record and then inserts the new one. 
**In Batch Apply mode**: The process removes all existing records in the target batch before inserting the complete set of new records, ensuring a clean replacement of data.
This process prevents data duplication, but incurs some performance cost compared to the default policy. The exact performance impact depends on your specific workload characteristics.
+ `ApplyErrorUpdatePolicy` – Determines what action AWS DMS takes when there is a missing data conflict with an UPDATE operation. The default is `LOG_ERROR`. Possible values are the following:
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored. The error counter for the `ApplyErrorEscalationCount` property is incremented. Thus, if you set a limit on errors for a table, this error counts toward that limit. 
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
  + `UPDATE_RECORD` – If the target record is missing, the missing target record is inserted into the target table. AWS DMS completely disables LOB column support for the task. Selecting this option requires full supplemental logging to be enabled for all the source table columns when Oracle is the source database.
**Note**  
**In Transactional Apply mode**: The system first attempts to update the record. If the update fails because the record is missing on the target, the system deletes the failed record and then inserts the new one. This process requires full supplemental logging for Oracle source databases, and DMS disables LOB column support for the task.
**In Batch Apply mode**: The process removes all existing records in the target batch before inserting the complete set of new records, ensuring a clean replacement of data.
+ `ApplyErrorEscalationPolicy` – Determines what action AWS DMS takes when the maximum number of errors (set using the `ApplyErrorEscalationCount` parameter) is reached. The default is `LOG_ERROR`:
  + `LOG_ERROR` – The task continues and the error is written to the task log.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.
  + `STOP_TASK` – The task stops and manual intervention is required.
+ `ApplyErrorEscalationCount` – This option sets the maximum number of APPLY conflicts that can occur for a specific table during a change process operation. When this number is reached, the table data is handled according to the policy set in the `ApplyErrorEscalationPolicy` parameter. The default is 0. 
+ `ApplyErrorFailOnTruncationDdl` – Set this option to `true` to cause the task to fail when a truncation is performed on any of the tracked tables during CDC. The default is `false`. 

  This approach doesn't work with PostgreSQL version 11.x or lower, or any other source endpoint that doesn't replicate DDL table truncation.
+ `FailOnNoTablesCaptured` – Set this option to `true` to cause a task to fail when the table mappings defined for a task find no tables when the task starts. The default is `true`.
+ `FailOnTransactionConsistencyBreached` – This option applies to tasks using Oracle as a source with CDC. The default is `false`. Set it to `true` to cause a task to fail when a transaction has been open longer than the specified timeout and could be dropped. 

  When a CDC task starts with Oracle, AWS DMS waits for a limited time for the oldest open transaction to close before starting CDC. If the oldest open transaction doesn't close until the timeout is reached, then in most cases AWS DMS starts CDC, ignoring that transaction. If this option is set to `true`, the task fails.
+ `FullLoadIgnoreConflicts` – Set this option to `true` to have AWS DMS ignore "zero rows affected" and "duplicates" errors when applying cached events. If set to `false`, AWS DMS reports all errors instead of ignoring them. The default is `true`. 
+ `DataMaskingErrorPolicy` – Determines the action AWS DMS takes when data masking fails because of an incompatible data type or any other reason. The following options are available:
  + `STOP_TASK` (Default) – The task stops and manual intervention is required.
  + `IGNORE_RECORD` – The task continues and the data for that record is ignored.
  + `LOG_ERROR` – The task continues and the error is written to the task log. Unmasked data is loaded into the target table.
  + `SUSPEND_TABLE` – The task continues but data from the table with the error record is moved into an error state and the data isn't replicated.

**Note**  
 Table load errors in Redshift as a target are reported in `STL_LOAD_ERRORS`. For more information, see [STL_LOAD_ERRORS](https://docs.aws.amazon.com/redshift/latest/dg/r_STL_LOAD_ERRORS.html) in the *Amazon Redshift Database Developer Guide*.

**Note**  
Parameter changes related to recoverable errors take effect only after you stop and resume the DMS task. The changes don't apply to the current run.
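To illustrate, you can set the apply-error policies above programmatically by passing a settings JSON document to the DMS `ModifyReplicationTask` API (for example, through boto3). The following is a minimal sketch; the escalation count of 50 is an arbitrary example value, not a recommendation.

```python
import json

# Sketch: an ErrorBehavior fragment that suspends a table after 50 apply
# conflicts. The count of 50 is an example value only.
error_behavior = {
    "ErrorBehavior": {
        "ApplyErrorDeletePolicy": "IGNORE_RECORD",
        "ApplyErrorInsertPolicy": "LOG_ERROR",
        "ApplyErrorUpdatePolicy": "LOG_ERROR",
        "ApplyErrorEscalationPolicy": "SUSPEND_TABLE",
        "ApplyErrorEscalationCount": 50,
    }
}

# Serialize for the ReplicationTaskSettings parameter of
# ModifyReplicationTask; settings you don't specify keep their current values.
settings_json = json.dumps(error_behavior)
```

Remember that, per the note above, changes to recoverable-error settings take effect only after you stop and resume the task.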

# Saving task settings
<a name="CHAP_Tasks.CustomizingTasks.TaskSettings.Saving"></a>

You can save task settings as a JSON file in case you want to reuse the settings for another task. You can find the task settings to copy to a JSON file under the **Overview details** section of a task.

**Note**  
When reusing task settings for other tasks, remove any `CloudWatchLogGroup` and `CloudWatchLogStream` attributes. Otherwise, the following error is returned: `SYSTEM ERROR MESSAGE: Task Settings CloudWatchLogGroup or CloudWatchLogStream cannot be set on create.`

For example, the following JSON file contains settings saved for a task.

```
{
    "TargetMetadata": {
        "TargetSchema": "",
        "SupportLobs": true,
        "FullLobMode": false,
        "LobChunkSize": 0,
        "LimitedSizeLobMode": true,
        "LobMaxSize": 32,
        "InlineLobMaxSize": 0,
        "LoadMaxFileSize": 0,
        "ParallelLoadThreads": 0,
        "ParallelLoadBufferSize": 0,
        "BatchApplyEnabled": false,
        "TaskRecoveryTableEnabled": false,
        "ParallelLoadQueuesPerThread": 0,
        "ParallelApplyThreads": 0,
        "ParallelApplyBufferSize": 0,
        "ParallelApplyQueuesPerThread": 0
    },
    "FullLoadSettings": {
        "TargetTablePrepMode": "DO_NOTHING",
        "CreatePkAfterFullLoad": false,
        "StopTaskCachedChangesApplied": false,
        "StopTaskCachedChangesNotApplied": false,
        "MaxFullLoadSubTasks": 8,
        "TransactionConsistencyTimeout": 600,
        "CommitRate": 10000
    },
    "Logging": {
        "EnableLogging": true,
        "LogComponents": [
            {
                "Id": "TRANSFORMATION",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "SOURCE_UNLOAD",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "IO",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "TARGET_LOAD",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "PERFORMANCE",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "SOURCE_CAPTURE",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "SORTER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "REST_SERVER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "VALIDATOR_EXT",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "TARGET_APPLY",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "TASK_MANAGER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "TABLES_MANAGER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "METADATA_MANAGER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "FILE_FACTORY",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "COMMON",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "ADDONS",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "DATA_STRUCTURE",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "COMMUNICATION",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            },
            {
                "Id": "FILE_TRANSFER",
                "Severity": "LOGGER_SEVERITY_DEFAULT"
            }
        ]
    },
    "ControlTablesSettings": {
        "ControlSchema": "",
        "HistoryTimeslotInMinutes": 5,
        "HistoryTableEnabled": false,
        "SuspendedTablesTableEnabled": false,
        "StatusTableEnabled": false,
        "FullLoadExceptionTableEnabled": false
    },
    "StreamBufferSettings": {
        "StreamBufferCount": 3,
        "StreamBufferSizeInMB": 8,
        "CtrlStreamBufferSizeInMB": 5
    },
    "ChangeProcessingDdlHandlingPolicy": {
        "HandleSourceTableDropped": true,
        "HandleSourceTableTruncated": true,
        "HandleSourceTableAltered": true
    },
    "ErrorBehavior": {
        "DataErrorPolicy": "LOG_ERROR",
        "DataTruncationErrorPolicy": "LOG_ERROR",
        "DataErrorEscalationPolicy": "SUSPEND_TABLE",
        "DataErrorEscalationCount": 0,
        "TableErrorPolicy": "SUSPEND_TABLE",
        "TableErrorEscalationPolicy": "STOP_TASK",
        "TableErrorEscalationCount": 0,
        "RecoverableErrorCount": -1,
        "RecoverableErrorInterval": 5,
        "RecoverableErrorThrottling": true,
        "RecoverableErrorThrottlingMax": 1800,
        "RecoverableErrorStopRetryAfterThrottlingMax": true,
        "ApplyErrorDeletePolicy": "IGNORE_RECORD",
        "ApplyErrorInsertPolicy": "LOG_ERROR",
        "ApplyErrorUpdatePolicy": "LOG_ERROR",
        "ApplyErrorEscalationPolicy": "LOG_ERROR",
        "ApplyErrorEscalationCount": 0,
        "ApplyErrorFailOnTruncationDdl": false,
        "FullLoadIgnoreConflicts": true,
        "FailOnTransactionConsistencyBreached": false,
        "FailOnNoTablesCaptured": true
    },
    "ChangeProcessingTuning": {
        "BatchApplyPreserveTransaction": true,
        "BatchApplyTimeoutMin": 1,
        "BatchApplyTimeoutMax": 30,
        "BatchApplyMemoryLimit": 500,
        "BatchSplitSize": 0,
        "MinTransactionSize": 1000,
        "CommitTimeout": 1,
        "MemoryLimitTotal": 1024,
        "MemoryKeepTime": 60,
        "StatementCacheSize": 50
    },
    "PostProcessingRules": null,
    "CharacterSetSettings": null,
    "LoopbackPreventionSettings": null,
    "BeforeImageSettings": null,
    "FailTaskWhenCleanTaskResourceFailed": false
}
```
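Following the note earlier in this section, one way to prepare saved settings for reuse is to strip the CloudWatch log attributes before creating the new task. This is a minimal sketch; the attribute values shown are examples, and the attributes appear in the `Logging` section of the settings document.

```python
# Sketch: remove CloudWatch log attributes from saved task settings so
# the settings can be reused when creating another task.
def strip_cloudwatch_attrs(settings: dict) -> dict:
    logging_section = settings.get("Logging", {})
    for key in ("CloudWatchLogGroup", "CloudWatchLogStream"):
        logging_section.pop(key, None)  # absent keys are ignored
    return settings

# Example settings as they might appear in a saved JSON file.
saved = {
    "Logging": {
        "EnableLogging": True,
        "CloudWatchLogGroup": "dms-tasks-old-task",       # example value
        "CloudWatchLogStream": "dms-task-ABCDEFGH12345",  # example value
    }
}
reusable = strip_cloudwatch_attrs(saved)
```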

# Setting LOB support for source databases in an AWS DMS task
<a name="CHAP_Tasks.LOBSupport"></a>

Large binary objects (LOBs) can sometimes be difficult to migrate between systems. AWS DMS offers a number of options to help with the tuning of LOB columns. To see which data types AWS DMS treats as LOBs, and under what conditions, see the AWS DMS documentation.

When you migrate data from one database to another, you might take the opportunity to rethink how your LOBs are stored, especially for heterogeneous migrations. If you decide to restructure them, there's no need to migrate the LOB data.

If you decide to include LOBs, you can then decide the other LOB settings:
+ The LOB mode determines how LOBs are handled:
  + **Full LOB mode** – In full LOB mode AWS DMS migrates all LOBs from source to target regardless of size. In this configuration, AWS DMS has no information about the maximum size of LOBs to expect. Thus, LOBs are migrated one at a time, piece by piece. Full LOB mode can be quite slow.
  + **Limited LOB mode** – In limited LOB mode, you set a maximum LOB size for DMS to accept. That enables DMS to pre-allocate memory and load the LOB data in bulk. LOBs that exceed the maximum LOB size are truncated, and a warning is issued to the log file. In limited LOB mode, you can gain significant performance over full LOB mode. We recommend that you use limited LOB mode whenever possible. The maximum value for this parameter is 102400 KB (100 MB).
**Note**  
Using the **Max LOB size (K)** option with a value greater than 63 KB affects the performance of a full load configured to run in limited LOB mode. During a full load, DMS allocates memory by multiplying the **Max LOB size (K)** value by the **Commit rate**, and then multiplying the product by the number of LOB columns. When DMS can't pre-allocate that memory, it consumes swap memory, which degrades the performance of full-load tasks. If you experience performance issues when using limited LOB mode, consider decreasing the commit rate until you achieve an acceptable level of performance. During CDC, DMS allocates memory by multiplying the number of LOB columns by the **Max LOB size (K)** value, and then by the record size. The DMS CDC process is single-threaded per task. For more information, see [Change processing tuning settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ChangeProcessingTuning.html).  
To validate limited LOB size, you must set `ValidationPartialLobSize` to the same value as `LobMaxSize` (K).
  + **Inline LOB mode** – In inline LOB mode, you set the maximum LOB size that DMS transfers inline. LOBs smaller than the specified size are transferred inline. LOBs larger than the specified size are replicated using full LOB mode. You can select this option to replicate both small and large LOBs when most of the LOBs are small. DMS doesn’t support inline LOB mode for endpoints that don’t support Full LOB mode, like S3 and Redshift.
**Note**  
With Oracle, LOBs are treated as VARCHAR data types whenever possible. This approach means that AWS DMS fetches them from the database in bulk, which is significantly faster than other methods. The maximum size of a VARCHAR in Oracle is 32 K. Therefore, a limited LOB size of less than 32 K is optimal when Oracle is your source database.
+ When a task is configured to run in limited LOB mode, the **Max LOB size (K)** option sets the maximum size LOB that AWS DMS accepts. Any LOBs that are larger than this value are truncated to this value.
+ When a task is configured to use full LOB mode, AWS DMS retrieves LOBs in pieces. The **LOB chunk size (K)** option determines the size of each piece. When setting this option, pay particular attention to the maximum packet size allowed by your network configuration. If the LOB chunk size exceeds your maximum allowed packet size, you might see disconnect errors. The recommended value for `LobChunkSize` is 64 kilobytes. Increasing the value for `LobChunkSize` above 64 kilobytes can cause task failures.
+ When a task is configured to run in inline LOB mode, the `InlineLobMaxSize` setting determines which LOBs DMS transfers inline.
**Note**  
A primary key is mandatory for tables containing LOB columns during Change Data Capture (CDC) operations. DMS uses this key to look up LOB values in the source table. This requirement only applies to CDC tasks - full-load tasks can read and copy entire LOB columns directly from source to target without restrictions.

For information on the task settings to specify these options, see [Target metadata task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.md).
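As a rough illustration of the full-load memory calculation described in the limited LOB mode note, the following sketch multiplies the maximum LOB size by the commit rate and the number of LOB columns. The function is illustrative only; actual memory use depends on your workload.

```python
# Sketch of the full-load memory estimate for limited LOB mode:
# Max LOB size (KB) x Commit rate x number of LOB columns.
def full_load_lob_memory_mb(max_lob_size_kb: int,
                            commit_rate: int,
                            lob_columns: int) -> float:
    """Approximate memory (in MB) pre-allocated for LOBs during full load."""
    return max_lob_size_kb * commit_rate * lob_columns / 1024

# With the default Max LOB size (32 KB), the default commit rate (10000),
# and a table with 2 LOB columns:
estimate = full_load_lob_memory_mb(32, 10000, 2)  # 625.0 MB
```

Estimates like this make it clear why lowering the commit rate is the suggested lever when pre-allocation exceeds available memory.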

## SQL Commands to check max LOB column length on source table
<a name="CHAP_Tasks.Creating.SQLCommandsLOB"></a>

Use the following SQL commands to check the Max LOB column length and accordingly configure the DMS limited LOB settings to avoid any data truncation in the migration:

**Oracle**  

```
SELECT dbms_lob.getlength(<COL_NAME>) as LOB_LENGTH
FROM <TABLE_NAME>
ORDER BY dbms_lob.getlength(<COL_NAME>) DESC
FETCH FIRST 10 ROWS ONLY;

Select ((max(length(<COL_NAME>)))/(1024)) from <TABLE_NAME>
```

**SQL Server**  

```
Select top 10 datalength(<COL_NAME>) as fieldsize from <TABLE_NAME> order by datalength(<COL_NAME>) desc;
```

**MySQL**  

```
Select (max(length(<COL_NAME>))/(1024)) as "Size in KB" from <TABLE_NAME>;
```

**PostgreSQL**  

```
Select max((octet_length(<COL_NAME>))/(1024.0)) as "Size in KB" from <TABLE_NAME>;
```

**Db2 LUW**  

```
-- Method 1: Using SYSCAT.COLUMNS (converting to KB)

SELECT TABSCHEMA, TABNAME, COLNAME, LENGTH/1024 as LENGTH_KB, TYPENAME FROM SYSCAT.COLUMNS WHERE TYPENAME IN ('BLOB', 'CLOB', 'DBCLOB') ORDER BY LENGTH DESC;

-- Method 2: For specific table with KB conversion

SELECT COLNAME, LENGTH/1024 as LENGTH_KB, TYPENAME FROM SYSCAT.COLUMNS WHERE TABSCHEMA = 'YOUR_SCHEMA' AND TABNAME = 'YOUR_TABLE' AND TYPENAME IN ('BLOB', 'CLOB', 'DBCLOB');
  
-- Method 3: Using SYSIBM.SYSCOLUMNS

SELECT TBCREATOR, TBNAME, NAME, LENGTH/1024 as LENGTH_KB, COLTYPE FROM SYSIBM.SYSCOLUMNS WHERE COLTYPE IN ('BLOB', 'CLOB', 'DBCLOB') ORDER BY LENGTH DESC;
```

**Sybase**  

```
SELECT c.name as column_name, t.name as data_type, (c.length)/1024 as length_KB FROM syscolumns c JOIN systypes t ON c.usertype = t.usertype WHERE object_name(c.id) = 'YOUR_TABLE_NAME' AND t.name IN ('text', 'image', 'unitext') ORDER BY c.length DESC;
```

# Creating multiple tasks
<a name="CHAP_Tasks.ReplicationTasks.MultipleTasks"></a>

In some migration scenarios, you might have to create several migration tasks. Tasks work independently and can run concurrently. Each task has its own initial load, CDC, and log reading process. Tables that are related through data manipulation language (DML) must be part of the same task.

Some reasons to create multiple tasks for a migration include the following:
+ The target tables for the tasks reside on different databases, such as when you are fanning out or breaking a system into multiple systems.
+ You want to break the migration of a large table into multiple tasks by using filtering.

**Note**  
Because each task has its own change capture and log reading process, changes are *not* coordinated across tasks. Therefore, when using multiple tasks to perform a migration, make sure that each individual source transaction is wholly contained within a single task. You can use multiple tasks to perform a migration if no individual transaction is split across different tasks.
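For the filtering approach described above, each task gets its own table mapping with a range filter. The following sketch builds selection rules for two tasks that split one table by ranges of a numeric key; the schema name `HR`, table name `SALES`, and column name `SALES_ID` are hypothetical.

```python
# Sketch: selection rules with range filters that split one large table
# across two tasks. HR, SALES, and SALES_ID are hypothetical names.
def range_mapping(schema: str, table: str, column: str,
                  start: int, end: int) -> dict:
    return {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {"schema-name": schema, "table-name": table},
            "rule-action": "include",
            "filters": [{
                "filter-type": "source",
                "column-name": column,
                "filter-conditions": [{
                    "filter-operator": "between",
                    "start-value": str(start),
                    "end-value": str(end),
                }],
            }],
        }]
    }

task1_mapping = range_mapping("HR", "SALES", "SALES_ID", 1, 1000000)
task2_mapping = range_mapping("HR", "SALES", "SALES_ID", 1000001, 2000000)
```

Because the ranges don't overlap, no individual transaction row lands in both tasks' full loads, in line with the note above.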

# Creating tasks for ongoing replication using AWS DMS
<a name="CHAP_Task.CDC"></a>

You can create an AWS DMS task that captures ongoing changes from the source data store. You can do this capture while you are migrating your data. You can also create a task that captures ongoing changes after you complete your initial (full-load) migration to a supported target data store. This process is called ongoing replication or change data capture (CDC). AWS DMS uses this process when replicating ongoing changes from a source data store. This process works by collecting changes to the database logs using the database engine's native API. 

**Note**  
You can migrate views using full-load tasks only. If your task is either a CDC-only task or a full-load task that starts CDC after it completes, the migration includes only tables from the source. Using a full-load-only task, you can migrate views or a combination of tables and views. For more information, see [Specifying table selection and transformations rules using JSON](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.md).

Each source engine has specific configuration requirements for exposing this change stream to a given user account. Most engines require some additional configuration to make it possible for the capture process to consume the change data in a meaningful way, without data loss. For example, Oracle requires the addition of supplemental logging, and MySQL requires row-level binary logging (bin logging). 

 To read ongoing changes from the source database, AWS DMS uses engine-specific API actions to read changes from the source engine's transaction logs. Following are some examples of how AWS DMS does that: 
+ For Oracle, AWS DMS uses either the Oracle LogMiner API or binary reader API (bfile API) to read ongoing changes. AWS DMS reads ongoing changes from the online or archive redo logs based on the system change number (SCN). 
+ For Microsoft SQL Server, AWS DMS uses MS-Replication or MS-CDC to write information to the SQL Server transaction log. It then uses the `fn_dblog()` or `fn_dump_dblog()` function in SQL Server to read the changes in the transaction log based on the log sequence number (LSN). 
+ For MySQL, AWS DMS reads changes from the row-based binary logs (binlogs) and migrates those changes to the target.
+ For PostgreSQL, AWS DMS sets up logical replication slots and uses the test_decoding plugin to read changes from the source and migrate them to the target.
+ For Amazon RDS as a source, we recommend ensuring that backups are enabled to set up CDC. We also recommend ensuring that the source database is configured to retain change logs for a sufficient time—24 hours is usually enough. For specific settings for each endpoint, see the following:
  + **Amazon RDS for Oracle:** [Configuring an AWS-managed Oracle source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Amazon-Managed.Configuration).
  + **Amazon RDS for MySQL and Aurora MySQL:** [Using an AWS-managed MySQL-compatible database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.AmazonManaged).
  + **Amazon RDS for SQL Server:** [Setting up ongoing replication on a cloud SQL Server DB instance](CHAP_Source.SQLServer.CDC.md#CHAP_Source.SQLServer.Configuration).
  + **Amazon RDS for PostgreSQL and Aurora PostgreSQL:** PostgreSQL automatically keeps the required WAL. 

There are two types of ongoing replication tasks:
+ Full load plus CDC – The task migrates existing data and then updates the target database based on changes to the source database.
+ CDC only – The task migrates ongoing changes after you have data on your target database.
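These two task types correspond to `MigrationType` values in the `CreateReplicationTask` API. The following sketch shows request parameters for a CDC-only task; the ARNs and table mappings are placeholders.

```python
# Sketch: request parameters for a CDC-only replication task. The ARNs
# below are placeholders; pass these parameters to the DMS
# CreateReplicationTask API (for example, boto3's create_replication_task).
cdc_task_params = {
    "ReplicationTaskIdentifier": "cdc-only-task",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    "TargetEndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    "ReplicationInstanceArn": "arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    # "full-load" and "full-load-and-cdc" are the other migration types.
    "MigrationType": "cdc",
    "TableMappings": '{"rules": []}',  # placeholder; supply real selection rules
}
```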

## Performing replication starting from a CDC start point
<a name="CHAP_Task.CDC.StartPoint"></a>

You can start an AWS DMS ongoing replication task (change data capture only) from several points. These include the following: 
+  **From a custom CDC start time** – You can use the AWS Management Console or AWS CLI to provide AWS DMS with a timestamp where you want the replication to start. AWS DMS then starts an ongoing replication task from this custom CDC start time. AWS DMS converts the given timestamp (in UTC) to a native start point, such as an LSN for SQL Server or an SCN for Oracle. AWS DMS uses engine-specific methods to determine where to start the migration task based on the source engine's change stream. 
**Note**  
Db2 as a source offers a custom CDC start time only when you set the `StartFromContext` extra connection attribute to the required timestamp.  
PostgreSQL as a source doesn't support a custom CDC start time. This is because the PostgreSQL database engine doesn't have a way to map a timestamp to an LSN or SCN as Oracle and SQL Server do. 
+ **From a CDC native start point** – You can also start from a native point in the source engine's transaction log. In some cases, you might prefer this approach because a timestamp can indicate multiple native points in the transaction log. AWS DMS supports this feature for the following source endpoints: 
  + SQL Server
  + PostgreSQL
  + Oracle
  + MySQL
  + MariaDB
**Note**  
The following database endpoints don't support CDC native start points:  
Amazon Aurora MySQL  
Amazon Aurora PostgreSQL  
Amazon DocumentDB (with MongoDB compatibility)  
Amazon S3  
IBM Db2 for z/OS  
IBM Db2 LUW  
Microsoft Azure SQL Database  
Microsoft Azure SQL Managed Instance  
MongoDB  
SAP Sybase ASE

When the task is created, AWS DMS marks the CDC start point, and it can't be changed. To use a different CDC start point, create a new task.

**Note**  
When specifying a CDC start point, the premigration assessment still performs a complete analysis of the existing metadata and parameters in the current environment. All current configurations and settings are evaluated; the CDC start point doesn't limit the assessment scope.  
If no assessments have been performed within the last 7 days for a given task, or no previous assessment records are found, the premigration assessment runs automatically in both resume and reload modes. This helps ensure accurate migration planning while maintaining data consistency between the specified CDC point and the current system state.

### Determining a CDC native start point
<a name="CHAP_Task.CDC.StartPoint.Native"></a>

A *CDC native start point* is a point in the database engine's log that defines a time where you can begin CDC. As an example, suppose that a bulk data dump has already been applied to the target. You can look up the native start point for the ongoing replication-only task. To avoid any data inconsistencies, carefully choose the start point for the replication-only task. DMS captures transactions that started after the chosen CDC start point.

Following are examples of how you can find the CDC native start point from supported source engines:

**SQL Server**  
In SQL Server, a log sequence number (LSN) has three parts:  
+ Virtual log file (VLF) sequence number
+ Starting offset of a log block
+ Slot number
 An example LSN is as follows: `00000014:00000061:0001`   
To get the start point for a SQL Server migration task based on your transaction log backup settings, use the `fn_dblog()` or `fn_dump_dblog()` function in SQL Server.   
To use CDC native start point with SQL Server, create a publication on any table participating in ongoing replication. AWS DMS creates the publication automatically when you use CDC without using a CDC native start point.

**PostgreSQL**  
You can use a CDC recovery checkpoint for your PostgreSQL source database. This checkpoint value is generated at various points as an ongoing replication task runs for your source database (the parent task). For more information about checkpoints in general, see [Using a checkpoint as a CDC start point](#CHAP_Task.CDC.StartPoint.Checkpoint).   
To identify the checkpoint to use as your native start point, use your database's `pg_replication_slots` view or your parent task's overview details from the AWS Management Console.  

**To find the overview details for your parent task on the console**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/). 

   If you are signed in as an IAM user, make sure that you have the appropriate permissions to access AWS DMS. For more information about the permissions required, see [IAM permissions needed to use AWS DMS](security-iam.md#CHAP_Security.IAMPermissions).

1. On the navigation pane, choose **Database migration tasks**.

1. Choose your parent task from the list on the **Database migration tasks** page. Doing this opens your parent task page, showing the overview details.

1. Find the checkpoint value under **Change data capture (CDC)**, **Change data capture (CDC) start position**, and **Change data capture (CDC) recovery checkpoint**. 

   The value appears similar to the following.

   ```
   checkpoint:V1#1#000004AF/B00000D0#0#0#*#0#0
   ```

   Here, the `4AF/B00000D0` component is what you need to specify this native CDC start point. Set the DMS API `CdcStartPosition` parameter to this value when you create the CDC task to begin replication at this start point for your PostgreSQL source. For information on using the AWS CLI to create this CDC task, see [Enabling CDC with an AWS-managed PostgreSQL DB instance with AWS DMS](CHAP_Source.PostgreSQL.md#CHAP_Source.PostgreSQL.RDSPostgreSQL.CDC).
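As a convenience, the LSN component can be pulled out of the checkpoint string programmatically. The following minimal sketch assumes the `#`-separated layout shown above.

```python
# Sketch: extract the PostgreSQL LSN from a DMS recovery checkpoint of the
# form checkpoint:V1#1#<LSN>#..., as shown above.
def checkpoint_lsn(checkpoint: str) -> str:
    fields = checkpoint.removeprefix("checkpoint:").split("#")
    return fields[2]  # the third '#'-separated field holds the LSN

lsn = checkpoint_lsn("checkpoint:V1#1#000004AF/B00000D0#0#0#*#0#0")
# lsn is "000004AF/B00000D0"
```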

**Oracle**  
A system change number (SCN) is a logical, internal time stamp used by Oracle databases. SCNs order events that occur within the database, which is necessary to satisfy the ACID properties of a transaction. Oracle databases use SCNs to mark the location where all changes have been written to disk so that a recovery action doesn't apply already written changes. Oracle also uses SCNs to mark the point where no redo exists for a set of data so that recovery can stop.   
To get the current SCN in an Oracle database, run the following command.  

```
SELECT CURRENT_SCN FROM V$DATABASE
```
If you use the SCN or timestamp to start a CDC task, you miss the results of any open transactions, and those results aren't migrated. *Open transactions* are transactions that started before the task's start position and committed after it. You can identify an SCN and timestamp to start a CDC task at a point that includes all open transactions. For more information, see [Transactions](https://docs.oracle.com/database/121/CNCPT/transact.htm#CNCPT016) in the Oracle online documentation. With version 3.5.1 and higher, AWS DMS supports open transactions for a CDC-only task using the `openTransactionWindow` endpoint setting if you use the SCN or timestamp to start the task.   
 When using the `openTransactionWindow` setting, you must provide the window, in number of minutes, to handle the open transactions. AWS DMS shifts the capture position and finds the new position to start the data capture. AWS DMS uses the new start position for scanning any open transactions from the required Oracle redo or archived redo logs.
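As an illustration, the open transaction window might be supplied when modifying the Oracle source endpoint. The parameter shape below is an assumption sketched from the setting name in the text; confirm it against the Oracle endpoint settings reference before use.

```python
# Hypothetical sketch: Oracle source endpoint parameters with a 10-minute
# open transaction window. The ARN is a placeholder, and the exact API
# shape is an assumption to confirm against the DMS endpoint settings docs.
endpoint_params = {
    "EndpointArn": "arn:aws:dms:us-east-1:123456789012:endpoint:ORACLE",
    "OracleSettings": {
        "OpenTransactionWindow": 10,  # minutes of redo to scan for open transactions
    },
}
```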

**MySQL**  
Before the release of MySQL version 5.6.3, the log sequence number (LSN) for MySQL was a 4-byte unsigned integer. In MySQL version 5.6.3, when the redo log file size limit increased from 4 GB to 512 GB, the LSN became an 8-byte unsigned integer. The increase reflects that additional bytes were required to store extra size information. Applications built on MySQL 5.6.3 or higher that use LSN values should use 64-bit rather than 32-bit variables to store and compare LSN values. For more information about MySQL LSNs, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/glossary.html#glos_lsn).   
 To get the current LSN in a MySQL database, run the following command.  

```
mysql> show master status;
```
 The query returns a binlog file name, the position, and several other values. The CDC native start point is a combination of the binlogs file name and the position, for example `mysql-bin-changelog.000024:373`. In this example, `mysql-bin-changelog.000024` is the binlogs file name and `373` is the position where AWS DMS needs to start capturing changes. 
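The start point can be assembled from the binlog file name and position returned by `SHOW MASTER STATUS`, as in this minimal sketch:

```python
# Sketch: combine the binlog file name and position from
# SHOW MASTER STATUS into the native start point format shown above.
def mysql_start_point(binlog_file: str, position: int) -> str:
    return f"{binlog_file}:{position}"

start_point = mysql_start_point("mysql-bin-changelog.000024", 373)
# start_point is "mysql-bin-changelog.000024:373"
```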

### Using a checkpoint as a CDC start point
<a name="CHAP_Task.CDC.StartPoint.Checkpoint"></a>

 An ongoing replication task migrates changes, and AWS DMS caches checkpoint information specific to AWS DMS from time to time. The checkpoint that AWS DMS creates contains information so the replication engine knows the recovery point for the change stream. You can use the checkpoint to go back in the timeline of changes and recover a failed migration task. You can also use a checkpoint to start another ongoing replication task for another target at any given point in time.

You can get the checkpoint information in one of the following three ways: 
+ Run the API operation `DescribeReplicationTasks` and view the results. You can filter the information by task and search for the checkpoint. You can retrieve the latest checkpoint when the task is in stopped or failed state. This information is lost if the task is deleted.
+ View the metadata table named `awsdms_txn_state` on the target instance. You can query the table to get checkpoint information. To create the metadata table, set the `TaskRecoveryTableEnabled` parameter to `Yes` when you create a task. This setting causes AWS DMS to continuously write checkpoint information to the target metadata table. This information is lost if a task is deleted.

  For example, the following is a sample checkpoint in the metadata table: `checkpoint:V1#34#00000132/0F000E48#0#0#*#0#121`
+ From the navigation pane, choose **Database migration tasks**, and choose your parent task from the list that appears on the **Database migration tasks** page. Your parent task page opens, showing the overview details. Find the checkpoint value under **Change data capture (CDC)**, **Change data capture (CDC) start position**, and **Change data capture (CDC) recovery checkpoint**. The checkpoint value appears similar to the following:

  `checkpoint:V1#1#000004AF/B00000D0#0#0#*#0#0`
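Programmatically, you can pull the checkpoint out of a `DescribeReplicationTasks` response. The helper below is an illustrative sketch, not part of any SDK; it assumes a boto3-style response dictionary and the documented `RecoveryCheckpoint` field.

```python
def recovery_checkpoints(response: dict) -> dict:
    """Map each task identifier to its recovery checkpoint, skipping tasks
    that haven't produced one yet (the field is absent before CDC begins)."""
    return {
        task["ReplicationTaskIdentifier"]: task["RecoveryCheckpoint"]
        for task in response.get("ReplicationTasks", [])
        if "RecoveryCheckpoint" in task
    }

# With boto3, the response would come from something like:
#   boto3.client("dms").describe_replication_tasks(
#       Filters=[{"Name": "replication-task-id", "Values": ["my-task"]}])
```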

### Stopping a task at a commit or server time point
<a name="CHAP_Task.CDC.StopPoint"></a>

 With the introduction of CDC native start points, AWS DMS can also stop a task at the following points: 
+ A commit time on the source
+ A server time on the replication instance

 You can modify a task and set a time in UTC to stop as needed. The task automatically stops based on the commit or server time that you set. Or, if you know an appropriate time to stop the migration task at task creation, you can set a stop time when you create the task. 

**Note**  
It can take up to 40 minutes to initialize all the resources the first time you start a new AWS DMS Serverless replication. The `server_time` option applies only after this resource initialization has completed.
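The stop point itself is passed to the task (for example, through the `CdcStopPosition` API parameter) as a plain string of the form `commit_time:<UTC timestamp>` or `server_time:<UTC timestamp>`. The helper below is our own illustrative sketch for building that string; it isn't part of any AWS SDK.

```python
from datetime import datetime, timezone

def cdc_stop_position(kind: str, when: datetime) -> str:
    """Build a CdcStopPosition string. 'commit_time' stops at a commit time
    on the source; 'server_time' stops at a time on the replication
    instance. `when` must be a timezone-aware datetime."""
    if kind not in ("commit_time", "server_time"):
        raise ValueError("kind must be 'commit_time' or 'server_time'")
    return f"{kind}:{when.astimezone(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S')}"
```

For example, `cdc_stop_position("server_time", datetime(2018, 2, 9, 12, 12, 12, tzinfo=timezone.utc))` produces `server_time:2018-02-09T12:12:12`.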

### Starting a task with target reload
<a name="CHAP_Task.CDC.Start.Task"></a>

 You can start an AWS DMS task that migrates existing data and replicates ongoing changes with the reload option (`reload-target` in the DMS API). In this case, the migration starts from the beginning, reloads each table's data, and continues data replication using the task's full-load and CDC-enabled settings. 

 To start a task with the reload option, the following conditions must apply: 
+  The task must be stopped. 
+  The migration method for the task must be either full load or full load with CDC. 

 DMS applies the `TargetTablePrepMode` setting before reloading the tables. If you set `TargetTablePrepMode` to `DO_NOTHING`, you must manually truncate the tables first. 

**Note**  
 When a DMS task is started with the target reload option, the premigration assessment will perform a complete analysis of existing metadata and parameters in the current environment. This ensures that all current configurations and settings are evaluated, regardless of the actual task status. 

**Important**  
 If no assessments have been performed within the last 7 days for a given task, the premigration assessment runs automatically to ensure data completeness and consistency. 
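Through the API, the reload is requested with `StartReplicationTask` and the `reload-target` start type. The following is a minimal boto3 sketch under those assumptions; the wrapper name is ours, the ARN is a placeholder, and the boto3 import is deferred so the function can be exercised with a stub client.

```python
def start_task_with_reload(task_arn: str, client=None):
    """Start a stopped full-load or full-load-and-CDC task, reloading the
    target. `client` defaults to a real boto3 DMS client; pass a stub
    object with the same method for offline testing."""
    if client is None:
        import boto3  # real call path; requires AWS credentials
        client = boto3.client("dms")
    return client.start_replication_task(
        ReplicationTaskArn=task_arn,
        StartReplicationTaskType="reload-target",
    )
```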

## Performing bidirectional replication
<a name="CHAP_Task.CDC.Bidirectional"></a>

You can use AWS DMS tasks to perform bidirectional replication between two systems. In *bidirectional replication*, you replicate data from the same table (or set of tables) between two systems in both directions. 

For example, you can copy an EMPLOYEE table from database A to database B and replicate changes to the table from database A to database B. You can also replicate changes to the EMPLOYEE table from database B back to A. Thus, you're performing bidirectional replication. 

**Note**  
AWS DMS bidirectional replication isn't intended as a full multi-master solution including a primary node, conflict resolution, and so on.

Use bidirectional replication for situations where data on different nodes is operationally segregated. In other words, suppose that you have a data element changed by an application operating on node A, and that node A performs bidirectional replication with node B. That data element on node A is never changed by any application operating on node B.

AWS DMS supports bidirectional replication on these database engines: 
+ Oracle 
+ SQL Server 
+ MySQL 
+ PostgreSQL 
+ Amazon Aurora MySQL-Compatible Edition
+ Aurora PostgreSQL-Compatible Edition

### Creating bidirectional replication tasks
<a name="CHAP_Task.CDC.Bidirectional.Tasks"></a>

To enable AWS DMS bidirectional replication, configure source and target endpoints for both databases (A and B). For example, configure a source endpoint for database A, a source endpoint for database B, a target endpoint for database A, and a target endpoint for database B. 

Then create two tasks: one task for source A to move data to target B, and another task for source B to move data to target A. Also, make sure that each task is configured with loopback prevention. Doing this prevents identical changes from being applied to the targets of both tasks, thus corrupting the data for at least one of them. For more information, see [Preventing loopback](#CHAP_Task.CDC.Bidirectional.Loopback).

For the easiest approach, start with identical datasets on both database A and database B. Then create two CDC-only tasks, one to replicate data from A to B and another to replicate data from B to A.

To use AWS DMS to instantiate a new dataset (database) on node B from node A, do the following:

1. Use a full load and CDC task to move data from database A to B. Make sure that no applications are modifying data on database B during this time.

1. When the full load is complete and before applications are allowed to modify data on database B, note the time or CDC start position of database B. For instructions, see [Performing replication starting from a CDC start point](#CHAP_Task.CDC.StartPoint).

1. Create a CDC-only task that moves data from database B back to A using this start time or CDC start position.

**Note**  
Only one task in a bidirectional pair can be full load and CDC.

### Preventing loopback
<a name="CHAP_Task.CDC.Bidirectional.Loopback"></a>

To illustrate loopback prevention, suppose that in a task T1, AWS DMS reads change logs from source database A and applies the changes to target database B. 

Next, a second task, T2, reads change logs from source database B and applies them back to target database A. Before T2 does this, DMS must make sure that the same changes made to target database B from source database A aren't made to source database A. In other words, DMS must make sure that these changes aren't echoed (looped) back to target database A. Otherwise, the data in database A can be corrupted. 

To prevent loopback of changes, add the following task settings to each bidirectional replication task. Doing this makes sure that loopback data corruption doesn't occur in either direction.

```
{
. . .

  "LoopbackPreventionSettings": {
    "EnableLoopbackPrevention": Boolean,
    "SourceSchema": String,
    "TargetSchema": String
  },

. . .
}
```

The `LoopbackPreventionSettings` task settings determine whether a transaction is new or an echo from the opposite replication task. When AWS DMS applies a transaction to a target database, it updates a DMS table (`awsdms_loopback_prevention`) with an indication of the change. Before applying a transaction to a target, DMS checks it for a reference to this `awsdms_loopback_prevention` table. If it finds one, the transaction is an echo from the opposite task, and DMS discards it without applying the change. 

Include these task settings with each replication task in a bidirectional pair. These settings enable loopback prevention. They also specify the schema for each source and target database in the task that includes the `awsdms_loopback_prevention` table for each endpoint.

To enable each task to identify such an echo and discard it, set `EnableLoopbackPrevention` to `true`. To specify a schema at the source that includes `awsdms_loopback_prevention`, set `SourceSchema` to the name for that schema in the source database. To specify a schema at the target that includes the same table, set `TargetSchema` to the name for that schema in the target database.

In the example following, the `SourceSchema` and `TargetSchema` settings for a replication task T1 and its opposite replication task T2 are specified with opposite settings.

Settings for task T1 are as follows.

```
{
. . .

  "LoopbackPreventionSettings": {
    "EnableLoopbackPrevention": true,
    "SourceSchema": "LOOP-DATA",
    "TargetSchema": "loop-data"
  },

. . .
}
```

Settings for opposite task T2 are as follows.

```
{
. . .

  "LoopbackPreventionSettings": {
    "EnableLoopbackPrevention": true,
    "SourceSchema": "loop-data",
    "TargetSchema": "LOOP-DATA"
  },

. . .
}
```

**Note**  
When using the AWS CLI, use only the `create-replication-task` or `modify-replication-task` commands to configure `LoopbackPreventionSettings` in your bidirectional replication tasks. 

### Limitations of bidirectional replication
<a name="CHAP_Task.CDC.Bidirectional.Limitations"></a>

Bidirectional replication for AWS DMS has the following limitations:
+ Loopback prevention tracks only data manipulation language (DML) statements. AWS DMS doesn't support preventing data definition language (DDL) loopback. To do this, configure one of the tasks in a bidirectional pair to filter out DDL statements.
+ Tasks that use loopback prevention don't support committing changes in batches. To configure a task with loopback prevention, make sure to set `BatchApplyEnabled` to `false`.
+ DMS bidirectional replication doesn't include conflict detection or resolution. To detect data inconsistencies, use data validation on both tasks. 
+ For a SQL Server source endpoint, set `setUpMsCdcForTables` to `true` to enable bidirectional replication.

# Modifying a task
<a name="CHAP_Tasks.Modifying"></a>

You can modify a task if you need to change the task settings, table mapping, or other settings. You can also enable and run premigration assessments before running the modified task. You can modify a task in the console by selecting the task and choosing **Modify**. You can also use the CLI command or API operation [ModifyReplicationTask](https://docs.aws.amazon.com/dms/latest/APIReference/API_ModifyReplicationTask.html). 

There are a few limitations to modifying a task. These include the following:
+ You can't modify the source or target endpoint of a task.
+ You can't change the migration type of a task.
+ Tasks that have run must have a status of **Stopped** or **Failed** to be modified.

# Moving a task
<a name="CHAP_Tasks.Moving"></a>

You can move a task to a different replication instance when any of the following situations apply to your use case.
+ You're currently using an instance of a certain type and you want to switch to a different instance type.
+ Your current instance is overloaded by many replication tasks, and you want to split the load across multiple instances.
+ Your instance storage is full, and you want to move tasks off that instance to a more powerful instance as an alternative to scaling storage or compute.
+ You want to use a newly released feature of AWS DMS, but don’t want to create a new task and restart the migration. Instead, you prefer to spin up a replication instance with a new AWS DMS version that supports the feature, and move the existing task to that instance.

You can move a task in the console by selecting the task and choosing **Move**. You can also run the CLI command or API operation `MoveReplicationTask` to move the task. You can move a task that has any database engine as its target endpoint.

Make sure that the target replication instance has enough storage space to accommodate the task that's being moved. Otherwise, scale the storage to make space for your target replication instance before moving the task.

Also, make sure that your target replication instance is created with the same or higher AWS DMS engine version as the current replication instance. 

**Note**  
You can't move a task to the same replication instance where it currently resides.
You can't modify the settings of a task while it's moving.
A task you have run must have a status of **Stopped**, **Failed**, or **Failed-move** before you can move it.

There are two task statuses that relate to moving a DMS task: **Moving** and **Failed-move**. For more information about these statuses, see [Task status](CHAP_Monitoring.md#CHAP_Tasks.Status). 

After moving a task, you can enable and run premigration assessments to check for blocking issues before running the moved task.
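The move itself is a single `MoveReplicationTask` call. The following hedged boto3 sketch (the wrapper name is ours, the ARNs are placeholders, and the import is deferred so a stub client can stand in) shows the shape of the request:

```python
def move_task(task_arn: str, target_instance_arn: str, client=None):
    """Move a stopped or failed task to another replication instance.
    `client` defaults to a real boto3 DMS client; pass a stub for testing."""
    if client is None:
        import boto3  # real call path; requires AWS credentials
        client = boto3.client("dms")
    return client.move_replication_task(
        ReplicationTaskArn=task_arn,
        TargetReplicationInstanceArn=target_instance_arn,
    )
```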

# Reloading tables during a task
<a name="CHAP_Tasks.ReloadTables"></a>

While a task is running, you can reload a target database table using data from the source. You might want to reload a table if, during the task, an error occurs or data changes due to partition operations (for example, when using Oracle). You can reload up to 10 tables from a task.

Reloading tables does not stop the task.

To reload a table, the following conditions must apply:
+ The task must be running.
+ The migration method for the task must be either full load or full load with CDC.
+ Duplicate tables aren't allowed.
+ AWS DMS retains the previously read table definition and doesn't recreate it during the reload operation. Any DDL statements, such as `ALTER TABLE ADD COLUMN` or `DROP COLUMN`, made to the table before the table is reloaded can cause the reload operation to fail. 

**Note**  
DMS applies the `TargetTablePrepMode` setting before reloading the table. If you set `TargetTablePrepMode` to `DO_NOTHING`, you must manually truncate the table first.
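From the API, reloading uses the `ReloadTables` operation with a list of schema and table names. The sketch below is illustrative (the wrapper name is ours, the ARN is a placeholder, and the boto3 import is deferred so a stub client can stand in for offline testing):

```python
def reload_table(task_arn: str, schema: str, table: str, client=None):
    """Ask a running task to reload one target table from the source."""
    if client is None:
        import boto3  # real call path; requires AWS credentials
        client = boto3.client("dms")
    return client.reload_tables(
        ReplicationTaskArn=task_arn,
        TablesToReload=[{"SchemaName": schema, "TableName": table}],
    )
```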

## AWS Management Console
<a name="CHAP_Tasks.ReloadTables.CON"></a>

**To reload a table using the AWS DMS console**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/). 

   If you are signed in as an IAM user, make sure that you have the appropriate permissions to access AWS DMS. For more information about the permissions required, see [IAM permissions needed to use AWS DMS](security-iam.md#CHAP_Security.IAMPermissions).

1. Choose **Tasks** from the navigation pane. 

1. Choose the running task that has the table you want to reload. 

1. Choose the **Table Statistics** tab.  
![\[AWS DMS monitoring\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-reloading1.png)

1. Choose the table you want to reload. If the task is no longer running, you can't reload the table.

1. Choose **Reload table data**.

When AWS DMS is preparing to reload a table, the console changes the table status to **Table is being reloaded**.

# Using table mapping to specify task settings
<a name="CHAP_Tasks.CustomizingTasks.TableMapping"></a>

Table mapping uses several types of rules to specify the data source, source schema, data, and any transformations that should occur during the task. You can use table mapping to specify individual tables in a database to migrate and the schema to use for the migration. 

When working with table mapping, you can use filters to specify data that you want replicated from table columns. In addition, you can use transformations to modify selected schemas, tables, or views before they are written to the target database.

**Topics**
+ [Specifying table selection and transformations rules from the console](CHAP_Tasks.CustomizingTasks.TableMapping.Console.md)
+ [Specifying table selection and transformations rules using JSON](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.md)
+ [Selection rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections.md)
+ [Wildcards in table mapping](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Wildcards.md)
+ [Transformation rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.md)
+ [Using transformation rule expressions to define column content](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions.md)
+ [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md)
+ [Using data masking to hide sensitive information](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Masking.md)

**Note**  
When working with table mapping for a MongoDB source endpoint, you can use filters to specify data that you want replicated, and specify a database name in place of the `schema_name`. Or, you can use the default `"%"`.

# Specifying table selection and transformations rules from the console
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.Console"></a>

You can use the AWS Management Console to perform table mapping, including specifying table selection and transformations. On the console, use the **Where** section to specify the schema, table, and action (include or exclude). Use the **Filter** section to specify the column name in a table and the conditions that you want to apply to a replication task. Together, these two actions create a selection rule.

You can include transformations in a table mapping after you have specified at least one selection rule. You can use transformations to rename a schema or table, add a prefix or suffix to a schema or table, or remove a table column.

**Note**  
AWS DMS doesn't support more than one transformation rule per schema level, table level, or column level.

The following procedure shows how to set up selection rules, based on a table called **Customers** in a schema called **EntertainmentAgencySample**. 

**To specify a table selection, filter criteria, and transformations using the console**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/). 

   If you are signed in as an IAM user, make sure that you have the appropriate permissions to access AWS DMS. For more information about the permissions required, see [IAM permissions needed to use AWS DMS](security-iam.md#CHAP_Security.IAMPermissions).

1. On the **Dashboard** page, choose **Database migration tasks**.

1. Choose **Create Task**.

1. In the **Task configuration** section, enter the task information, including **Task identifier**, **Replication instance**, **Source database endpoint**, **Target database endpoint**, and **Migration type**.   
![\[Schema and table selection\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-create-task-20.png)

1. In the **Table mapping** section, enter the schema name and table name. You can use "%" as a wildcard value when specifying the schema name or the table name. For information about other wildcards you can use, see [Wildcards in table mapping](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Wildcards.md). Specify the action to be taken, to include or exclude data defined by the filter.   
![\[Schema and table selection\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-Tasks-selecttransfrm.png)

1. Specify filter information using the **Add column filter** and **Add condition** links.

   1. Choose **Add column filter** to specify a column and conditions.

   1. Choose **Add condition** to add additional conditions.

    The following example shows a filter for the **Customers** table that includes **AgencyIDs** between **01** and **85**.  
![\[Schema and table selection\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-Tasks-filter.png)

1. When you have created the selections you want, choose **Add new selection rule**.

1. After you have created at least one selection rule, you can add a transformation to the task. Choose **add transformation rule**.  
![\[transformation rule\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-Tasks-transform1.png)

1. Choose the target that you want to transform, and enter the additional information requested. The following example shows a transformation that deletes the **AgencyStatus** column from the **Customer** table.  
![\[transformation rule\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-Tasks-transform2.png)

1. Choose **Add transformation rule**.

1. Choose **Create task**.

**Note**  
AWS DMS doesn't support more than one transformation rule per schema level, table level, or column level.

# Specifying table selection and transformations rules using JSON
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation"></a>

To specify the table mappings that you want to apply during migration, you can create a JSON file. If you create a migration task using the console, you can browse for this JSON file or enter the JSON directly into the table mapping box. If you use the CLI or API to perform migrations, you can specify this file using the `TableMappings` parameter of the `CreateReplicationTask` or `ModifyReplicationTask` API operation. 

AWS DMS can process table mapping JSON files only up to 2 MB in size. Keep the mapping rule JSON file below this 2 MB limit to prevent unexpected errors during task creation or modification. If a mapping rule file would exceed the limit, split the tables across multiple tasks so that each file stays below it.

You can specify what tables, views, and schemas you want to work with. You can also perform table, view, and schema transformations and specify settings for how AWS DMS loads individual tables and views. You create table-mapping rules for these options using the following rule types:
+ `selection` rules – Identify the types and names of source tables, views, and schemas to load. For more information, see [Selection rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections.md).
+ `transformation` rules – Specify certain changes or additions to particular source tables and schemas on the source before they are loaded on the target. For more information, see [Transformation rules and actions](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.md).

  Also, to define content of new and existing columns, you can use an expression within a transformation rule. For more information, see [Using transformation rule expressions to define column content](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions.md).
+ `table-settings` rules – Specify how DMS tasks load the data for individual tables. For more information, see [Table and collection settings rules and operations](CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.md).

**Note**  
For Amazon S3 targets, you can also tag S3 objects mapped to selected tables and schemas using the `post-processing` rule type and the `add-tag` rule action. For more information, see [Amazon S3 object tagging](CHAP_Target.S3.md#CHAP_Target.S3.Tagging).  
For the following targets, you can specify how and where selected schemas and tables are migrated to the target using the `object-mapping` rule type:  
Amazon DynamoDB – For more information, see [Using object mapping to migrate data to DynamoDB](CHAP_Target.DynamoDB.md#CHAP_Target.DynamoDB.ObjectMapping).
Amazon Kinesis – For more information, see [Using object mapping to migrate data to a Kinesis data stream](CHAP_Target.Kinesis.md#CHAP_Target.Kinesis.ObjectMapping).
Apache Kafka – For more information, see [Using object mapping to migrate data to a Kafka topic](CHAP_Target.Kafka.md#CHAP_Target.Kafka.ObjectMapping).

# Selection rules and actions
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections"></a>

Using table mapping, you can specify what tables, views, and schemas you want to work with by using selection rules and actions. For table-mapping rules that use the selection rule type, you can apply the following values. 

**Warning**  
Do not include any sensitive data within these rules.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Selections.html)

**Example Migrate all tables in a schema**  
The following example migrates all tables from a schema named `Test` in your source to your target endpoint.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "%"
            },
            "rule-action": "include"
        }
    ]
}
```

**Example Migrate some tables in a schema**  
The following example migrates all tables except those starting with `DMS` from a schema named `Test` in your source to your target endpoint.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "selection",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "DMS%"
            },
            "rule-action": "exclude"
        }
    ]
}
```

**Example Migrate a single table in a single schema**  
The following example migrates the `Customer` table from the `NewCust` schema in your source to your target endpoint.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "NewCust",
                "table-name": "Customer"
            },
            "rule-action": "explicit"
        }
    ]
}
```
You can explicitly select multiple tables and schemas by specifying multiple selection rules.
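For instance, the following sketch uses two `explicit` rules; the schema and table names are illustrative placeholders, and each rule needs its own unique `rule-id` and `rule-name`.

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "NewCust",
                "table-name": "Customer"
            },
            "rule-action": "explicit"
        },
        {
            "rule-type": "selection",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "NewCust",
                "table-name": "Address"
            },
            "rule-action": "explicit"
        }
    ]
}
```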

**Example Migrate tables in a set order**  
Tables and views are migrated according to their load-order values, with higher values receiving priority in the migration sequence. The following example migrates two tables: `loadfirst`, with a load-order value of 2, and `loadsecond`, with a load-order value of 1. The migration task processes the `loadfirst` table before proceeding to the `loadsecond` table. This prioritization mechanism helps ensure that dependencies between database objects are respected during the migration.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "loadsecond"
            },
            "rule-action": "include",
            "load-order": "1"
        },
        {
            "rule-type": "selection",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "loadfirst"
            },
            "rule-action": "include",
            "load-order": "2"
        }
    ]
}
```

**Note**  
`load-order` is applicable for table initialization. The load of a successive table won't wait for a previous table load to complete if `MaxFullLoadSubTasks` is greater than 1.

**Example Migrate some views in a schema**  
The following example migrates some views from a schema named `Test` in your source to equivalent tables in your target.  

```
{
   "rules": [
        {
           "rule-type": "selection",
           "rule-id": "2",
           "rule-name": "2",
           "object-locator": {
               "schema-name": "Test",
               "table-name": "view_DMS%",
               "table-type": "view"
            },
           "rule-action": "include"
        }
    ]
}
```

**Example Migrate all tables and views in a schema**  
The following example migrates all tables and views from a schema named `report` in your source to equivalent tables in your target.  

```
{
   "rules": [
        {
           "rule-type": "selection",
           "rule-id": "3",
           "rule-name": "3",
           "object-locator": {
               "schema-name": "report",
               "table-name": "%",
               "table-type": "all"
            },
           "rule-action": "include"
        }
    ]
}
```

# Wildcards in table mapping
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Wildcards"></a>

This section describes wildcards you can use when specifying the schema and table names for table mapping.


| Wildcard | Matches | 
| --- |--- |
| % | Zero or more characters | 
| _ | A single character | 
| [_] | A literal underscore character | 
| [ab] | A set of characters. For example, [ab] matches either 'a' or 'b'. | 
| [a-d] | A range of characters. For example, [a-d] matches 'a', 'b', 'c', or 'd'. | 

For Oracle source and target endpoints, you can use the `escapeCharacter` extra connection attribute to specify an escape character. An escape character lets you use a wildcard character in expressions as if it were an ordinary character. For example, `escapeCharacter=#` allows you to use '#' to escape the '_' wildcard so that it's treated as a literal underscore, as in the following sample code.

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "542485267",
            "rule-name": "542485267",
            "object-locator": { "schema-name": "ROOT", "table-name": "TEST#_T%" },
            "rule-action": "include",
            "filters": []
        }
    ]
}
```

Here, the '#' escape character makes the '_' wildcard character act as a normal character. AWS DMS selects tables in the schema named `ROOT`, where each table has a name with `TEST_T` as its prefix.

# Transformation rules and actions
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations"></a>

You use the transformation actions to specify any transformations you want to apply to the selected schema, table, or view. Transformation rules are optional. 

## Limitations
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.Limitations"></a>
+ You cannot apply more than one transformation rule action against the same object (schema, table, column, table-tablespace, or index-tablespace). You can apply several transformation rule actions on any level as long as each transformation action is applied against a different object. However, this restriction is not applicable when using data masking transformation rules where you can have another transformation like `ADD-COLUMN` or `CHANGE-DATA-TYPE` for the same column.
+ Table names and column names in transformation rules are case-sensitive. For example, you must provide table names and column names for an Oracle or Db2 database in upper-case.
+ Transformations are not supported for column names with Right-to-Left languages.
+ Transformations cannot be performed on columns that contain special characters (for example, _, /, or -) in their name.
+ The only supported transformation for columns that are mapped to BLOB/CLOB data types is to drop the column on the target.
+ AWS DMS doesn't support replicating two source tables to a single target table. AWS DMS replicates records from table to table, and from column to column, according to the replication task’s transformation rules. The object names must be unique to prevent overlapping.

  For example, a source table has a column named `ID` and the corresponding target table has a pre-existing column called `id`. If a rule uses an `ADD-COLUMN` statement to add a new column called `id`, and a SQLite statement to populate the column with custom values, this creates a duplicate, ambiguous object named `id` and is not supported. 
+ When creating a transformation rule, we recommend using the `data-type` parameter only when the selection rules specify multiple columns, for instance, when you set `column-name` to `%`. We don't recommend using `data-type` for selecting a single column.
+ AWS DMS doesn't support transformation rules where the source and target objects (tables) are in the same database or schema. Using the same table as both source and target in a transformation rule can lead to unexpected and potentially harmful results, including unintended changes to table data, modified table structures, or even dropped tables.

## Values
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.Values"></a>

For table-mapping rules that use the transformation rule type, you can apply the following values. 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.html)

## Examples
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.Examples"></a>

**Example Rename a schema**  
The following example renames a schema from `Test` in your source to `Test1` in your target.  

```
{

    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "rename",
            "rule-target": "schema",
            "object-locator": {
                "schema-name": "Test"
            },
            "value": "Test1"
        }
    ]
}
```

**Example Rename a table**  
The following example renames a table from `Actor` in your source to `Actor1` in your target.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "rename",
            "rule-target": "table",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "Actor"
            },
            "value": "Actor1"
        }
    ]
}
```

**Example Rename a column**  
The following example renames a column in table `Actor` from `first_name` in your source to `fname` in your target.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "test",
                "table-name": "%"
            },
            "rule-action": "include"
        },
         {
            "rule-type": "transformation",
            "rule-id": "4",
            "rule-name": "4",
            "rule-action": "rename",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "test",
                "table-name": "Actor",
                "column-name" : "first_name"
            },
            "value": "fname"
        }
    ]
}
```

**Example Rename an Oracle table tablespace**  
The following example renames the table tablespace named `SetSpace` for a table named `Actor` in your Oracle source to `SceneTblSpace` in your Oracle target endpoint.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Play",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "rename",
            "rule-target": "table-tablespace",
            "object-locator": {
                "schema-name": "Play",
                "table-name": "Actor",
                "table-tablespace-name": "SetSpace"
            },
            "value": "SceneTblSpace"
        }
    ]
}
```

**Example Rename an Oracle index tablespace**  
The following example renames the index tablespace named `SetISpace` for a table named `Actor` in your Oracle source to `SceneIdxSpace` in your Oracle target endpoint.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Play",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "rename",
            "rule-target": "index-tablespace",
            "object-locator": {
                "schema-name": "Play",
                "table-name": "Actor",
                "index-tablespace-name": "SetISpace"
            },
            "value": "SceneIdxSpace"
        }
    ]
}
```

**Example Add a column**  
The following example adds a `datetime` column to the table `Actor` in schema `test`.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "test",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "add-column",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "test",
                "table-name": "actor"
            },
            "value": "last_updated",
            "data-type": {
                "type": "datetime",
                "precision": 6
            }
        }
    ]
}
```

**Example Remove a column**  
The following example transforms the table named `Actor` in your source to remove all columns starting with the characters `col` from it in your target.  

```
{
 	"rules": [{
		"rule-type": "selection",
		"rule-id": "1",
		"rule-name": "1",
		"object-locator": {
			"schema-name": "test",
			"table-name": "%"
		},
		"rule-action": "include"
	}, {
		"rule-type": "transformation",
		"rule-id": "2",
		"rule-name": "2",
		"rule-action": "remove-column",
		"rule-target": "column",
		"object-locator": {
			"schema-name": "test",
			"table-name": "Actor",
			"column-name": "col%"
		}
	}]
 }
```

**Example Convert to lowercase**  
The following example converts a table name from `ACTOR` in your source to `actor` in your target.  

```
{
	"rules": [{
		"rule-type": "selection",
		"rule-id": "1",
		"rule-name": "1",
		"object-locator": {
			"schema-name": "test",
			"table-name": "%"
		},
		"rule-action": "include"
	}, {
		"rule-type": "transformation",
		"rule-id": "2",
		"rule-name": "2",
		"rule-action": "convert-lowercase",
		"rule-target": "table",
		"object-locator": {
			"schema-name": "test",
			"table-name": "ACTOR"
		}
	}]
}
```

**Example Convert to uppercase**  
The following example converts all columns in all tables and all schemas from lowercase in your source to uppercase in your target.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "test",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "convert-uppercase",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%",
                "column-name": "%"
            }
        }
    ]
}
```

**Example Add a prefix**  
The following example transforms all tables in your source to add the prefix `DMS_` to them in your target.  

```
{
 	"rules": [{
		"rule-type": "selection",
		"rule-id": "1",
		"rule-name": "1",
		"object-locator": {
			"schema-name": "test",
			"table-name": "%"
		},
		"rule-action": "include"
	}, {
		"rule-type": "transformation",
		"rule-id": "2",
		"rule-name": "2",
		"rule-action": "add-prefix",
		"rule-target": "table",
		"object-locator": {
			"schema-name": "test",
			"table-name": "%"
		},
		"value": "DMS_"
	}]
}
```

**Example Replace a prefix**  
The following example transforms all columns containing the prefix `Pre_` in your source to replace the prefix with `NewPre_` in your target.  

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "test",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "replace-prefix",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%",
                "column-name": "%"
            },
            "value": "NewPre_",
            "old-value": "Pre_"
        }
    ]
}
```

**Example Remove a suffix**  
The following example transforms all tables in your source to remove the suffix `_DMS` from them in your target.  

```
{
	"rules": [{
		"rule-type": "selection",
		"rule-id": "1",
		"rule-name": "1",
		"object-locator": {
			"schema-name": "test",
			"table-name": "%"
		},
		"rule-action": "include"
	}, {
		"rule-type": "transformation",
		"rule-id": "2",
		"rule-name": "2",
		"rule-action": "remove-suffix",
		"rule-target": "table",
		"object-locator": {
			"schema-name": "test",
			"table-name": "%"
		},
		"value": "_DMS"
	}]
}
```

**Example Define a primary key**  
The following example defines a primary key named `ITEM-primary-key` on three columns of the `ITEM` table migrated to your target endpoint.  

```
{
	"rules": [{
		"rule-type": "selection",
		"rule-id": "1",
		"rule-name": "1",
		"object-locator": {
			"schema-name": "inventory",
			"table-name": "%"
		},
		"rule-action": "include"
	}, {
		"rule-type": "transformation",
		"rule-id": "2",
		"rule-name": "2",
		"rule-action": "define-primary-key",
		"rule-target": "table",
		"object-locator": {
			"schema-name": "inventory",
			"table-name": "ITEM"
		},
		"primary-key-def": {
			"name": "ITEM-primary-key",
			"columns": [
				"ITEM-NAME",
				"BOM-MODEL-NUM",
				"BOM-PART-NUM"
			]
              }
	}]
}
```

**Example Define a unique index**  
The following example defines a unique index named `ITEM-unique-idx` on three columns of the `ITEM` table migrated to your target endpoint.  

```
{
	"rules": [{
		"rule-type": "selection",
		"rule-id": "1",
		"rule-name": "1",
		"object-locator": {
			"schema-name": "inventory",
			"table-name": "%"
		},
		"rule-action": "include"
	}, {
		"rule-type": "transformation",
		"rule-id": "2",
		"rule-name": "2",
		"rule-action": "define-primary-key",
		"rule-target": "table",
		"object-locator": {
			"schema-name": "inventory",
			"table-name": "ITEM"
		},
		"primary-key-def": {
			"name": "ITEM-unique-idx",
			"origin": "unique-index",
			"columns": [
				"ITEM-NAME",
				"BOM-MODEL-NUM",
				"BOM-PART-NUM"
			]
              }
	}]
}
```

**Example Change data type of target column**  
The following example changes the data type of a target column named `SALE_AMOUNT` from an existing data type to `int8`.  

```
{
    "rule-type": "transformation",
    "rule-id": "1",
    "rule-name": "RuleName 1",
    "rule-action": "change-data-type",
    "rule-target": "column",
    "object-locator": {
        "schema-name": "dbo",
        "table-name": "dms",
        "column-name": "SALE_AMOUNT"
    },
    "data-type": {
        "type": "int8"
    }
}
```

**Example Add a before image column**  
For a source column named `emp_no`, the transformation rule in the example following adds a new column named `BI_emp_no` in the target.  

```
{
	"rules": [{
			"rule-type": "selection",
			"rule-id": "1",
			"rule-name": "1",
			"object-locator": {
				"schema-name": "%",
				"table-name": "%"
			},
			"rule-action": "include"
		},
		{
			"rule-type": "transformation",
			"rule-id": "2",
			"rule-name": "2",
			"rule-target": "column",
			"object-locator": {
				"schema-name": "%",
				"table-name": "employees"
			},
			"rule-action": "add-before-image-columns",
			"before-image-def": {
				"column-prefix": "BI_",
				"column-suffix": "",
				"column-filter": "pk-only"
			}
		}
	]
}
```
Here, the following UPDATE statement changes the value of `emp_no` from 1 to 3. On the target, AWS DMS populates the `BI_emp_no` column in the corresponding row with the before image value, 1.  

```
UPDATE employees SET emp_no = 3 WHERE emp_no = 1;
```
```
When writing CDC updates to supported AWS DMS targets, the `BI_emp_no` column makes it possible to tell which rows have updated values in the `emp_no` column.

# Using transformation rule expressions to define column content
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions"></a>

To define content for new and existing columns, you can use an expression within a transformation rule. For example, using expressions you can add a column or replicate source table headers to a target. You can also use expressions to flag records on target tables as inserted, updated, or deleted at the source. 

**Topics**
+ [Adding a column using an expression](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-adding)
+ [Flagging target records using an expression](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-Flagging)
+ [Replicating source table headers using expressions](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-Headers)
+ [Using SQLite functions to build expressions](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-SQLite)
+ [Adding metadata to a target table using expressions](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-Metadata)

## Adding a column using an expression
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-adding"></a>

To add columns to tables using an expression in a transformation rule, use an `add-column` rule action and a `column` rule target.

The following example adds a new column to the `ITEM` table. It sets the new column name to `FULL_NAME`, with a data type of `string`, 50 characters long. The expression concatenates the values of two existing columns, `FIRST_NAME` and `LAST_NAME`, separated by an underscore, to produce the value of `FULL_NAME`. The `schema-name`, `table-name`, and `expression` parameters refer to objects in the source database table. The `value` parameter and the `data-type` block refer to objects in the target database table.

```
{
    "rules": [
        {
            "rule-type": "selection", 
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-action": "add-column",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "Test",
                "table-name": "ITEM"
            },
            "value": "FULL_NAME",
            "expression": "$FIRST_NAME||'_'||$LAST_NAME",
            "data-type": {
                 "type": "string",
                 "length": 50
            }
        }
    ]
}
```

## Flagging target records using an expression
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-Flagging"></a>

To flag records in target tables as inserted, updated, or deleted in the source table, use an expression in a transformation rule. The expression uses an `operation_indicator` function to flag records. Records deleted from the source aren't deleted from the target. Instead, the target record is flagged with a user-provided value to indicate that it was deleted from the source.

**Note**  
The `operation_indicator` function works only on tables that have a primary key in both the source and target databases. 

For example, the following transformation rule first adds a new `Operation` column to a target table. It then updates the column with the value `D` whenever a record is deleted from a source table.

```
{
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "%",
        "table-name": "%"
      },
      "rule-action": "add-column",
      "value": "Operation",
      "expression": "operation_indicator('D', 'U', 'I')",
      "data-type": {
        "type": "string",
        "length": 50
      }
}
```

## Replicating source table headers using expressions
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-Headers"></a>

By default, headers for source tables aren't replicated to the target. To indicate which headers to replicate, use a transformation rule with an expression that includes the table column header. 

You can use the following column headers in expressions. 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions.html)

The following example adds a new column to the target by using the stream position value from the source. For SQL Server, the stream position value is the LSN for the source endpoint. For Oracle, the stream position value is the SCN for the source endpoint.

```
{
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "%",
        "table-name": "%"
      },
      "rule-action": "add-column",
      "value": "transact_id",
      "expression": "$AR_H_STREAM_POSITION",
      "data-type": {
        "type": "string",
        "length": 50
      }
}
```

The following example adds a new column to the target that has a unique incrementing number from the source. This value represents a 35-digit unique number at the task level. The first 16 digits are part of a timestamp, and the last 19 digits are the record ID number incremented by the DBMS.

```
{
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "%",
        "table-name": "%"
      },
      "rule-action": "add-column",
      "value": "transact_id",
      "expression": "$AR_H_CHANGE_SEQ",
      "data-type": {
        "type": "string",
        "length": 50
      }
}
```

## Using SQLite functions to build expressions
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-SQLite"></a>

You can use SQLite functions and operators to build transformation rule expressions. AWS DMS evaluates these expressions using SQLite.

**Topics**
+ [Using a CASE expression](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-SQLite.CASE)
+ [Examples](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-SQLite.Ex)

Following, you can find string functions that you can use to build transformation rule expressions.


| String functions | Description | 
| --- | --- | 
|  `lower(x)`  |  The `lower(x)` function returns a copy of string *`x`* with all characters converted to lowercase. The default, built-in `lower` function works for ASCII characters only.  | 
|  `upper(x)`  |  The `upper(x)` function returns a copy of string *`x`* with all characters converted to uppercase. The default, built-in `upper` function works for ASCII characters only.  | 
|  `ltrim(x,y)`  |  The `ltrim(x,y)` function returns a string formed by removing all characters that appear in y from the left side of x. If there is no value for y, `ltrim(x)` removes spaces from the left side of x.  | 
|  `replace(x,y,z)`  |  The `replace(x,y,z)` function returns a string formed by substituting string z for every occurrence of string y in string x.  | 
| `rtrim(x,y)` |  The `rtrim(x,y)` function returns a string formed by removing all characters that appear in y from the right side of x. If there is no value for y, `rtrim(x)` removes spaces from the right side of x.  | 
| `substr(x,y,z)` |  The `substr(x,y,z)` function returns a substring of the input string `x` that begins with the `y`th character, and which is *`z`* characters long.  If *`z`* is omitted, `substr(x,y)` returns all characters through the end of string `x` beginning with the `y`th character. The leftmost character of `x` is number 1. If *`y`* is negative, the first character of the substring is found by counting from the right rather than the left. If *`z`* is negative, then the `abs(z)` characters preceding the `y`th character are returned. If `x` is a string, then the characters' indices refer to actual UTF-8 characters. If `x` is a BLOB, then the indices refer to bytes.  | 
| `trim(x,y)` |  The `trim(x,y)` function returns a string formed by removing all characters that appear in `y` from both sides of `x`. If there is no value for `y`, `trim(x)` removes spaces from both sides of `x`.  | 
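
Because AWS DMS evaluates these expressions with SQLite, you can preview how the string functions behave locally, for example with Python's built-in `sqlite3` module. The following is a sketch for experimenting with expressions before putting them in a transformation rule, not the DMS runtime itself:

```python
import sqlite3

# In-memory SQLite database for previewing expression behavior
con = sqlite3.connect(":memory:")

def eval_expr(expr):
    """Evaluate a single SQLite expression and return its value."""
    return con.execute(f"SELECT {expr}").fetchone()[0]

print(eval_expr("lower('ACTOR')"))                      # actor
print(eval_expr("ltrim('xxactor', 'x')"))               # actor
print(eval_expr("replace('Pre_name', 'Pre_', 'New_')")) # New_name
print(eval_expr("substr('employees', 1, 3)"))           # emp
print(eval_expr("trim('  padded  ')"))                  # padded
```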

Following, you can find LOB functions that you can use to build transformation rule expressions.


| LOB functions | Description | 
| --- | --- | 
|  `hex(x)`  |  The `hex` function receives a BLOB as an argument and returns an uppercase hexadecimal string version of the BLOB content.  | 
|  `randomblob (N)`  |  The `randomblob(N)` function returns an `N`-byte BLOB that contains pseudorandom bytes. If *N* is less than 1, a 1-byte random BLOB is returned.   | 
|  `zeroblob(N)`  |  The `zeroblob(N)` function returns a BLOB that consists of `N` bytes of 0x00.  | 
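
You can preview the LOB functions the same way in a local SQLite session. This sketch (using Python's `sqlite3` module, not the DMS runtime) shows the values the functions produce:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def eval_expr(expr):
    return con.execute(f"SELECT {expr}").fetchone()[0]

hex_value = eval_expr("hex('AB')")         # text 'AB' is bytes 0x41 0x42
zero_blob = eval_expr("zeroblob(4)")       # four 0x00 bytes
random_blob = eval_expr("randomblob(16)")  # 16 pseudorandom bytes

print(hex_value)         # 4142
print(len(zero_blob))    # 4
print(len(random_blob))  # 16
```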

Following, you can find numeric functions that you can use to build transformation rule expressions.


| Numeric functions | Description | 
| --- | --- | 
|  `abs(x)`  |  The `abs(x)` function returns the absolute value of the numeric argument `x`. The `abs(x)` function returns NULL if *x* is NULL. The `abs(x)` function returns 0.0 if **x** is a string or BLOB that can't be converted to a numeric value.  | 
|  `random()`  |  The `random` function returns a pseudorandom integer between -9,223,372,036,854,775,808 and +9,223,372,036,854,775,807.  | 
|  `round (x,y)`  |  The `round (x,y)` function returns a floating-point value *x* rounded to *y* digits to the right of the decimal point. If there is no value for *y*, it's assumed to be 0.  | 
|  `max (x,y...)`  |  The multiargument `max` function returns the argument with the maximum value, or returns NULL if any argument is NULL.  The `max` function searches its arguments from left to right for an argument that defines a collating function. If one is found, it uses that collating function for all string comparisons. If none of the arguments to `max` define a collating function, the `BINARY` collating function is used. The `max` function is a simple function when it has two or more arguments, but it operates as an aggregate function if it has a single argument.  | 
|  `min (x,y...)`  |  The multiargument `min` function returns the argument with the minimum value.  The `min` function searches its arguments from left to right for an argument that defines a collating function. If one is found, it uses that collating function for all string comparisons. If none of the arguments to `min` define a collating function, the `BINARY` collating function is used. The `min` function is a simple function when it has two or more arguments, but it operates as an aggregate function if it has a single argument.   | 
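
A quick local preview of the numeric functions, again using Python's `sqlite3` module as a stand-in for the SQLite engine that evaluates DMS expressions:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def eval_expr(expr):
    return con.execute(f"SELECT {expr}").fetchone()[0]

print(eval_expr("abs(-7.5)"))          # 7.5
print(eval_expr("round(3.14159, 2)"))  # 3.14
print(eval_expr("max(4, 9, 2)"))       # 9
print(eval_expr("min(4, 9, 2)"))       # 2
# random() returns a different 64-bit signed integer on each call,
# so its result isn't shown here.
```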

Following, you can find NULL check functions that you can use to build transformation rule expressions.


| NULL check functions | Description | 
| --- | --- | 
|  `coalesce (x,y...)`  |  The `coalesce` function returns a copy of its first non-NULL argument, but it returns NULL if all arguments are NULL. The coalesce function has at least two arguments.  | 
|  `ifnull(x,y)`  |  The `ifnull` function returns a copy of its first non-NULL argument, but it returns NULL if both arguments are NULL. The `ifnull` function has exactly two arguments. The `ifnull` function is the same as `coalesce` with two arguments.  | 
|  `nullif(x,y)`  |  The `nullif(x,y)` function returns a copy of its first argument if the arguments are different, but it returns NULL if the arguments are the same.  The `nullif(x,y)` function searches its arguments from left to right for an argument that defines a collating function. If one is found, it uses that collating function for all string comparisons. If neither argument to nullif defines a collating function, then the `BINARY` collating function is used.  | 
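
The NULL check functions can also be previewed locally; note that SQLite NULL surfaces as Python `None` in this `sqlite3`-based sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def eval_expr(expr):
    return con.execute(f"SELECT {expr}").fetchone()[0]

print(eval_expr("coalesce(NULL, NULL, 'fallback')"))  # fallback
print(eval_expr("ifnull(NULL, 0)"))                   # 0
print(eval_expr("nullif('a', 'a')"))                  # None (SQL NULL)
print(eval_expr("nullif('a', 'b')"))                  # a
```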

Following, you can find date and time functions that you can use to build transformation rule expressions.


| Date and time functions | Description | 
| --- | --- | 
|  `date(timestring, modifier, modifier...)`  |  The `date` function returns the date in the format YYYY-MM-DD.  | 
|  `time(timestring, modifier, modifier...)`  |  The `time` function returns the time in the format HH:MM:SS.  | 
|  `datetime(timestring, modifier, modifier...)`  |  The `datetime` function returns the date and time in the format YYYY-MM-DD HH:MM:SS.  | 
|  `julianday(timestring, modifier, modifier...)`  |  The `julianday` function returns the number of days since noon in Greenwich on November 24, 4714 B.C.  | 
|  `strftime(format, timestring, modifier, modifier...)`  |  The `strftime` function returns the date according to the format string specified as the first argument, using one of the following variables: `%d`: day of month, `%H`: hour 00–24, `%f`: fractional seconds SS.SSS, `%j`: day of year 001–366, `%J`: Julian day number, `%m`: month 01–12, `%M`: minute 00–59, `%s`: seconds since 1970-01-01, `%S`: seconds 00–59, `%w`: day of week 0–6 (Sunday is 0), `%W`: week of year 00–53, `%Y`: year 0000–9999, `%%`: %  | 
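
The date and time functions behave the same way in any SQLite session, so you can preview a format string locally before using it in an expression. A sketch with Python's `sqlite3` module, using a fixed sample timestamp:

```python
import sqlite3

con = sqlite3.connect(":memory:")

def eval_expr(expr):
    return con.execute(f"SELECT {expr}").fetchone()[0]

ts = "'2024-03-15 10:30:00'"  # sample timestamp literal
print(eval_expr(f"date({ts})"))               # 2024-03-15
print(eval_expr(f"time({ts})"))               # 10:30:00
print(eval_expr(f"datetime({ts})"))           # 2024-03-15 10:30:00
print(eval_expr(f"strftime('%Y-%m', {ts})"))  # 2024-03
print(eval_expr("julianday('2000-01-01 12:00:00')"))  # 2451545.0
```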

Following, you can find a hash function that you can use to build transformation rule expressions.


| Hash function | Description | 
| --- | --- | 
|  `hash_sha256(x)`  |  The `hash` function generates a hash value for an input column (using the SHA-256 algorithm) and returns the hexadecimal value of the generated hash value.  To use the `hash` function in an expression, add `hash_sha256(x)` to the expression and replace *`x`* with the source column name.  | 
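
`hash_sha256(x)` is a DMS-provided function rather than a SQLite built-in, so you can't preview it in a plain SQLite shell. Assuming it applies the standard SHA-256 algorithm as described above, Python's `hashlib` illustrates the shape of the value to expect on the target (64 hexadecimal characters); the exact string encoding and letter case that DMS produces may differ:

```python
import hashlib

def sha256_hex(value):
    """Standard SHA-256 hex digest of a string value.

    Illustrates the general shape of hash_sha256 output; the exact
    encoding and letter case AWS DMS uses may differ.
    """
    return hashlib.sha256(str(value).encode("utf-8")).hexdigest()

digest = sha256_hex("1001")  # hypothetical emp_number value
print(len(digest))  # 64 hex characters for a 256-bit hash
```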

### Using a CASE expression
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-SQLite.CASE"></a>

The SQLite `CASE` expression evaluates a list of conditions and returns an expression based on the result. Syntax is shown following.

```
CASE case_expression
     WHEN when_expression_1 THEN result_1
     WHEN when_expression_2 THEN result_2
     ...
     [ ELSE result_else ]
END

-- Or

CASE
     WHEN case_expression_1 THEN result_1
     WHEN case_expression_2 THEN result_2
     ...
     [ ELSE result_else ]
END
```
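
Before putting a CASE expression in a transformation rule, you can try it against sample data in a local SQLite session. This sketch uses Python's `sqlite3` module with a hypothetical `employee` table and a salary check like the one in the examples following:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (emp_salary REAL)")
con.executemany("INSERT INTO employee VALUES (?)",
                [(25000.0,), (15000.0,)])

# Same CASE shape as a transformation rule expression,
# with the column referenced directly instead of via $emp_salary
rows = con.execute(
    """
    SELECT CASE WHEN round(emp_salary) >= 20000
                THEN 'SENIOR' ELSE 'JUNIOR' END
    FROM employee ORDER BY rowid
    """
).fetchall()
print(rows)  # [('SENIOR',), ('JUNIOR',)]
```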

### Examples
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-SQLite.Ex"></a>

**Example of adding a new string column to the target table using a case condition**  
The following example transformation rule adds a new string column, `emp_seniority`, to the target table, `employee`. It uses the SQLite `round` function on the salary column, with a CASE condition to check whether the salary is 20,000 or more. If it is, the column gets the value `SENIOR`; otherwise, it gets the value `JUNIOR`.  

```
  {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-action": "add-column",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "public",
        "table-name": "employee"
      },
      "value": "emp_seniority",
      "expression": "CASE WHEN round($emp_salary)>=20000 THEN 'SENIOR' ELSE 'JUNIOR' END",
      "data-type": {
        "type": "string",
        "length": 50
      }

  }
```

**Example of adding a new string column to the target table using a SUBSTR function**  
The following example transformation rule adds a new string column using SQLite operators and functions to define the data in the column. This approach uses SQLite functions to transform the GUID data loaded from Oracle into UUID format before inserting it into the PostgreSQL target table.  
The following rule uses the SQLite substring (SUBSTR), hexadecimal (HEX), and lowercase (LOWER) functions to break the GUID data into several groups separated by hyphens: a group of 8 digits, followed by three groups of 4 digits, followed by a group of 12 digits, for a total of 32 hexadecimal digits representing the 128 bits.  
Here is sample source data and the corresponding output on the target after processing through the transformation rule:  
**Source Table (Oracle GUID format)**    
T_COL2  

```
06F6949D234911EE80670242AC120002
1A2B3C4D5E6F11EE80670242AC120003
F5E4D3C2B1A011EE80670242AC120004
```
**Target Table (PostgreSQL UUID format)**    
T_COL2_TMP  

```
06f6949d-2349-11ee-8067-0242ac120002
1a2b3c4d-5e6f-11ee-8067-0242ac120003
f5e4d3c2-b1a0-11ee-8067-0242ac120004
```

```
{
  "rule-type": "transformation",
  "rule-id": "2",
  "rule-name": "2",
  "rule-action": "add-column",
  "rule-target": "column",
  "object-locator": {
    "schema-name": "SPORTS",
    "table-name": "TEST_TBL_2"
  },
  "value": "t_col2_tmp",
  "expression": "CASE LOWER(SUBSTR(HEX($T_COL2), 1, 8) || '-' || SUBSTR(HEX($T_COL2), 9, 4) || '-' || SUBSTR(HEX($T_COL2), 13, 4) || '-' || SUBSTR(HEX($T_COL2), 17, 4) || '-' || SUBSTR(HEX($T_COL2), 21, 12)) WHEN '----' THEN NULL ELSE LOWER(SUBSTR(HEX($T_COL2), 1, 8) || '-' || SUBSTR(HEX($T_COL2), 9, 4) || '-' || SUBSTR(HEX($T_COL2), 13, 4) || '-' || SUBSTR(HEX($T_COL2), 17, 4) || '-' || SUBSTR(HEX($T_COL2), 21, 12)) END",
  "data-type": {
    "type": "string",
    "length": 60
  }
}
```

**Example of adding a new date column to the target table**  
The following example adds a new date column, `createdate`, to the target table, `employee`. Because the SQLite `datetime` function is called with no arguments, the current date and time is added to the new column for each row inserted.  

```
  {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-action": "add-column",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "public",
        "table-name": "employee"
      },
      "value": "createdate",
      "expression": "datetime ()",
      "data-type": {
        "type": "datetime",
        "precision": 6
      }
  }
```

**Example of adding a new numeric column to the target table**  
The following example adds a new numeric column, `rounded_emp_salary`, to the target table, `employee`. It uses the SQLite `round` function to add the rounded salary.   

```
  {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-action": "add-column",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "public",
        "table-name": "employee"
      },
      "value": "rounded_emp_salary",
      "expression": "round($emp_salary)",
      "data-type": {
        "type": "int8"
      }
  }
```
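Again using stock SQLite through Python's `sqlite3` module to preview the expression (the salary value is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Single-argument round() rounds to a whole number; the rule's
# data-type of int8 then stores the result as an integer column.
rounded = conn.execute("SELECT round(52310.57)").fetchone()[0]
print(rounded)  # 52311.0
```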

**Example of adding a new string column to the target table using the hash function**  
The following example adds a new string column, `hashed_emp_number`, to the target table, `employee`. The SQLite `hash_sha256(x)` function creates hashed values on the target for the source column, `emp_number`.  

```
  {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-action": "add-column",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "public",
        "table-name": "employee"
      },
      "value": "hashed_emp_number",
      "expression": "hash_sha256($emp_number)",
      "data-type": {
        "type": "string",
        "length": 64
      }
  }
```
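`hash_sha256(x)` is a function that AWS DMS makes available to its SQLite expressions; stock SQLite doesn't ship it, but Python's `hashlib` shows the equivalent hex digest and why the target column needs a `length` of 64. The sample employee number is illustrative.

```python
import hashlib

# A SHA-256 digest rendered as hex is always 64 characters,
# matching the string(64) data type in the rule above.
digest = hashlib.sha256(b"EMP-00042").hexdigest()
print(len(digest))  # 64
```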

## Adding metadata to a target table using expressions
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Expressions-Metadata"></a>

You can add metadata information to the target table by using the following expressions:
+ `$AR_M_SOURCE_SCHEMA` – The name of the source schema.
+ `$AR_M_SOURCE_TABLE_NAME` – The name of the source table.
+ `$AR_M_SOURCE_COLUMN_NAME` – The name of a column in the source table.
+ `$AR_M_SOURCE_COLUMN_DATATYPE` – The data type of a column in the source table.

**Example of adding a column for a schema name using the schema name from the source**  
The following example adds a new column named `schema_name` to the target by using the schema name from the source.  

```
  {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "2",
      "rule-action": "add-column",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "%",
        "table-name": "%"
      },
      "value": "schema_name",
      "expression": "$AR_M_SOURCE_SCHEMA", 
      "data-type": { 
         "type": "string",
         "length": 50
      }
  }
```

# Table and collection settings rules and operations
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings"></a>

Use table settings to specify any settings that you want to apply to a selected table or view for a specified operation. Table-settings rules are optional, depending on your endpoint and migration requirements. 

Instead of using tables and views, MongoDB and Amazon DocumentDB databases store data records as documents that are gathered together in *collections*. A single database for any MongoDB or Amazon DocumentDB endpoint is a specific set of collections identified by the database name. 

When migrating from a MongoDB or Amazon DocumentDB source, you work with parallel load settings slightly differently. In this case, consider the autosegmentation or range segmentation type of parallel load settings for selected collections rather than tables and views.

**Topics**
+ [Wildcards in table-settings are restricted](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.Wildcards)
+ [Using parallel load for selected tables, views, and collections](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.ParallelLoad)
+ [Specifying LOB settings for a selected table or view](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.LOB)
+ [Table-settings examples](#CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.Examples)

For table-mapping rules that use the table-settings rule type, you can apply the following parameters. 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.html)

## Wildcards in table-settings are restricted
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.Wildcards"></a>

Using the percent wildcard (`"%"`) in `"table-settings"` rules is not supported for source databases, as shown in the following example.

```
{
    "rule-type": "table-settings",
    "rule-id": "8",
    "rule-name": "8",
    "object-locator": {
        "schema-name": "ipipeline-prod",            
        "table-name": "%"
    },
    "parallel-load": {
        "type": "partitions-auto",
        "number-of-partitions": 16,
        "collection-count-from-metadata": "true",
        "max-records-skip-per-page": 1000000,
        "batch-size": 50000
    }
  }
```

If you use `"%"` in the `"table-settings"` rules as shown, then AWS DMS returns the following exception.

```
Error in mapping rules. Rule with ruleId = x failed validation. Exact 
schema and table name required when using table settings rule.
```

In addition, AWS recommends that you don't load a large number of large collections using a single task with `parallel-load`. Note that AWS DMS limits resource contention, and the number of segments loaded in parallel, by the value of the `MaxFullLoadSubTasks` task setting, which has a maximum value of 49. 

Instead, specify each of the largest collections in your source database individually, with its own `"schema-name"` and `"table-name"`. Also, scale your migration appropriately. For example, run multiple tasks across a sufficient number of replication instances to handle a large number of large collections in your database.

## Using parallel load for selected tables, views, and collections
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.ParallelLoad"></a>

To speed up migration and make it more efficient, you can use parallel load for selected relational tables, views, and collections. In other words, you can migrate a single segmented table, view, or collection using several threads in parallel. To do this, AWS DMS splits a full-load task into threads, with each table segment allocated to its own thread. 

Using this parallel-load process, you can first have multiple threads unload multiple tables, views, and collections in parallel from the source endpoint. You can then have multiple threads migrate and load the same tables, views, and collections in parallel to the target endpoint. For some database engines, you can segment the tables and views by existing partitions or subpartitions. For other database engines, you can have AWS DMS automatically segment collections according to specific parameters (autosegmentation). Otherwise, you can segment any table, view, or collection by ranges of column values that you specify.

Parallel load is supported for the following source endpoints:
+ Oracle
+ Microsoft SQL Server
+ MySQL
+ PostgreSQL
+ IBM Db2 LUW
+ SAP Adaptive Server Enterprise (ASE)
+ MongoDB (only supports the autosegmentation and range segmentation options of a parallel full load)
+ Amazon DocumentDB (only supports the autosegmentation and range segmentation options of a parallel full load)

For MongoDB and Amazon DocumentDB endpoints, AWS DMS supports the following data types for columns that are partition keys for the range segmentation option of a parallel full load.
+ Double
+ String
+ ObjectId
+ 32 bit integer
+ 64 bit integer

Parallel load for use with table-settings rules is supported for the following target endpoints:
+ Oracle
+ Microsoft SQL Server
+ MySQL
+ PostgreSQL
+ Amazon S3
+ SAP Adaptive Server Enterprise (ASE)
+ Amazon Redshift
+ MongoDB (only supports the autosegmentation and range segmentation options of a parallel full load)
+ Amazon DocumentDB (only supports the autosegmentation and range segmentation options of a parallel full load)
+ Db2 LUW

To specify the maximum number of tables and views to load in parallel, use the `MaxFullLoadSubTasks` task setting.

To specify the maximum number of threads per table or view for the supported targets of a parallel-load task, define more segments using column-value boundaries.

**Important**  
`MaxFullLoadSubTasks` controls the number of tables or table segments to load in parallel. `ParallelLoadThreads` controls the number of threads that a migration task uses to run the loads in parallel. *These settings are multiplicative*. As such, the total number of threads used during a full-load task is approximately the value of `ParallelLoadThreads` multiplied by the value of `MaxFullLoadSubTasks` (`ParallelLoadThreads` \* `MaxFullLoadSubTasks`).  
If you create tasks with a high number of full-load subtasks and a high number of parallel load threads, your task can consume too much memory and fail.
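A quick back-of-the-envelope check of the multiplicative effect, with illustrative values:

```python
# Illustrative numbers; the setting names are the AWS DMS task
# settings described above.
max_full_load_sub_tasks = 8   # tables/segments loaded in parallel (max 49)
parallel_load_threads = 4     # threads per table segment (supported targets)

# Approximate total concurrent load threads during a full load.
total_threads = parallel_load_threads * max_full_load_sub_tasks
print(total_threads)  # 32
```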

To specify the maximum number of threads per table for Amazon DynamoDB, Amazon Kinesis Data Streams, Apache Kafka, or Amazon Elasticsearch Service targets, use the `ParallelLoadThreads` target metadata task setting.

To specify the buffer size for a parallel load task when `ParallelLoadThreads` is used, use the `ParallelLoadBufferSize` target metadata task setting.

The availability and settings of `ParallelLoadThreads` and `ParallelLoadBufferSize` depend on the target endpoint. 

For more information about the `ParallelLoadThreads` and `ParallelLoadBufferSize` settings, see [Target metadata task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.md). For more information about the `MaxFullLoadSubTasks` setting, see [Full-load task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.md). For information specific to target endpoints, see the related topics.

To use parallel load, create a table-mapping rule of type `table-settings` with the `parallel-load` option. Within the `table-settings` rule, you can specify the segmentation criteria for a single table, view, or collection that you want to load in parallel. To do so, set the `type` parameter of the `parallel-load` option to one of several options. 

How to do this depends on how you want to segment the table, view, or collection for parallel load:
+ By partitions (or segments) – Load all existing table or view partitions (or segments) using the `partitions-auto` type. Or load only selected partitions using the `partitions-list` type with a specified partitions array.

  For MongoDB and Amazon DocumentDB endpoints only, load all or specified collections by segments that AWS DMS calculates automatically, also using the `partitions-auto` type and additional optional `table-settings` parameters.
+ (Oracle endpoints only) By subpartitions – Load all existing table or view subpartitions using the `subpartitions-auto` type. Or load only selected subpartitions using the `partitions-list` type with a specified `subpartitions` array.
+ By segments that you define – Load table, view, or collection segments that you define by using column-value boundaries. To do so, use the `ranges` type with specified `columns` and `boundaries` arrays.
**Note**  
PostgreSQL endpoints support only this type of parallel load. MongoDB and Amazon DocumentDB as source endpoints support both this range segmentation type and the autosegmentation type of a parallel full load (`partitions-auto`).

To identify additional tables, views, or collections to load in parallel, specify additional `table-settings` objects with `parallel-load` options. 

In the following procedures, you can find out how to code JSON for each parallel-load type, from the simplest to the most complex.

**To specify all table, view, or collection partitions, or all table or view subpartitions**
+ Specify `parallel-load` with either the `partitions-auto` type or the `subpartitions-auto` type (but not both). 

  Every table, view, or collection partition (or segment) or subpartition is then automatically allocated to its own thread.

  For some endpoints, parallel load includes partitions or subpartitions only if they are already defined for the table or view. For MongoDB and Amazon DocumentDB source endpoints, you can have AWS DMS automatically calculate the partitions (or segments) based on optional additional parameters. These include `number-of-partitions`, `collection-count-from-metadata`, `max-records-skip-per-page`, and `batch-size`.

**To specify selected table or view partitions, subpartitions, or both**

1. Specify `parallel-load` with the `partitions-list` type.

1. (Optional) Include partitions by specifying an array of partition names as the value of `partitions`.

   Each specified partition is then allocated to its own thread.
**Important**  
For Oracle endpoints, make sure that partitions and subpartitions don't overlap when choosing them for parallel load. If you use overlapping partitions and subpartitions to load data in parallel, the load duplicates entries or fails due to a primary key duplicate violation. 

1. (Optional) For Oracle endpoints only, include subpartitions by specifying an array of subpartition names as the value of `subpartitions`.

   Each specified subpartition is then allocated to its own thread.
**Note**  
Parallel load includes partitions or subpartitions only if they are already defined for the table or view.

You can specify table or view segments as ranges of column values. When you do so, be aware of these column characteristics:
+ Specifying indexed columns significantly improves performance.
+ You can specify up to 10 columns.
+ You can't use columns to define segment boundaries with the following AWS DMS data types: DOUBLE, FLOAT, BLOB, CLOB, and NCLOB.
+ Records with null values aren't replicated.

**To specify table, view, or collection segments as ranges of column values**

1. Specify `parallel-load` with the `ranges` type.

1. Define a boundary between table or view segments by specifying an array of column names as the value of `columns`. Do this for every column for which you want to define a boundary between table or view segments. 

   The order of columns is significant. The first column is the most significant and the last column is least significant in defining each boundary, as described following.

1. Define the data ranges for all the table or view segments by specifying a boundary array as the value of `boundaries`. A *boundary array* is an array of column-value arrays. To do so, take the following steps:

   1. Specify each element of a column-value array as a value that corresponds to each column. A *column-value array* represents the upper boundary of each table or view segment that you want to define. Specify each column in the same order that you specified that column in the `columns` array.

      Enter values for DATE columns in the format supported by the source.

   1. Specify each column-value array as the upper boundary, in order, of each segment from the bottom to the next-to-top segment of the table or view. If any rows exist above the top boundary that you specify, these rows complete the top segment of the table or view. Thus, the number of range-based segments is potentially one more than the number of segment boundaries in the boundary array. Each such range-based segment is allocated to its own thread.

      All of the non-null data is replicated, even if you don't define data ranges for all of the columns in the table or view.

   For example, suppose that you define three column-value arrays for columns COL1, COL2, and COL3 as follows.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.html)

   You have defined three segment boundaries for a possible total of four segments.

   To identify the ranges of rows to replicate for each segment, the replication instance applies a search to these three columns for each of the four segments. The search is like the following:  
**Segment 1**  
Replicate all rows where the following is true: The first two-column values are less than or equal to their corresponding **Segment 1** upper boundary values. Also, the values of the third column are less than its **Segment 1** upper boundary value.  
**Segment 2**  
Replicate all rows (except **Segment 1** rows) where the following is true: The first two-column values are less than or equal to their corresponding **Segment 2** upper boundary values. Also, the values of the third column are less than its **Segment 2** upper boundary value.  
**Segment 3**  
Replicate all rows (except **Segment 2** rows) where the following is true: The first two-column values are less than or equal to their corresponding **Segment 3** upper boundary values. Also, the values of the third column are less than its **Segment 3** upper boundary value.  
**Segment 4**  
Replicate all remaining rows (except the **Segment 1, 2, and 3** rows).

   In this case, the replication instance creates a `WHERE` clause to load each segment as follows:  
**Segment 1**  
`((COL1 < 10) OR ((COL1 = 10) AND (COL2 < 30)) OR ((COL1 = 10) AND (COL2 = 30) AND (COL3 < 105)))`  
**Segment 2**  
`NOT ((COL1 < 10) OR ((COL1 = 10) AND (COL2 < 30)) OR ((COL1 = 10) AND (COL2 = 30) AND (COL3 < 105))) AND ((COL1 < 20) OR ((COL1 = 20) AND (COL2 < 20)) OR ((COL1 = 20) AND (COL2 = 20) AND (COL3 < 120)))`  
**Segment 3**  
`NOT ((COL1 < 20) OR ((COL1 = 20) AND (COL2 < 20)) OR ((COL1 = 20) AND (COL2 = 20) AND (COL3 < 120))) AND ((COL1 < 100) OR ((COL1 = 100) AND (COL2 < 12)) OR ((COL1 = 100) AND (COL2 = 12) AND (COL3 < 99)))`  
**Segment 4**  
`NOT ((COL1 < 100) OR ((COL1 = 100) AND (COL2 < 12)) OR ((COL1 = 100) AND (COL2 = 12) AND (COL3 < 99)))`
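As a sketch of how these clauses compose, the following Python builds the same lexicographic range predicates from a `columns` list and a `boundaries` array. Values are assumed to be pre-formatted SQL literals; this is the pattern above, not DMS internals.

```python
def segment_where_clauses(columns, boundaries):
    """Build one WHERE clause per range segment, as in the example above.

    A row falls below a boundary when its column tuple is lexicographically
    less than the boundary tuple: earlier columns equal, current column less.
    """
    def below(boundary):
        terms = []
        for i in range(len(columns)):
            eqs = [f"({columns[j]} = {boundary[j]})" for j in range(i)]
            terms.append("(" + " AND ".join(eqs + [f"({columns[i]} < {boundary[i]})"]) + ")")
        return "(" + " OR ".join(terms) + ")"

    clauses, prev = [], None
    for b in boundaries:
        cur = below(b)
        clauses.append(cur if prev is None else f"NOT {prev} AND {cur}")
        prev = cur
    clauses.append(f"NOT {prev}")  # rows above the last boundary: top segment
    return clauses

clauses = segment_where_clauses(
    ["COL1", "COL2", "COL3"],
    [(10, 30, 105), (20, 20, 120), (100, 12, 99)],
)
# Three boundaries yield four segments, matching Segments 1-4 above.
```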

## Specifying LOB settings for a selected table or view
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.LOB"></a>

You can set task LOB settings for one or more tables by creating a table-mapping rule of type `table-settings` with the `lob-settings` option for one or more `table-settings` objects. 

Specifying LOB settings for selected tables or views is supported for the following source endpoints:
+ Oracle
+ Microsoft SQL Server
+ MySQL
+ PostgreSQL
+ IBM Db2, depending on the `mode` and `bulk-max-size` settings, described following
+ SAP Adaptive Server Enterprise (ASE), depending on the `mode` and `bulk-max-size` settings, as described following

Specifying LOB settings for selected tables or views is supported for the following target endpoints:
+ Oracle
+ Microsoft SQL Server
+ MySQL
+ PostgreSQL
+ SAP ASE, depending on the `mode` and `bulk-max-size` settings, as described following

**Note**  
You can use LOB data types only with tables and views that include a primary key.

To use LOB settings for a selected table or view, you create a table-mapping rule of type `table-settings` with the `lob-settings` option. Doing this specifies LOB handling for the table or view identified by the `object-locator` option. Within the `table-settings` rule, you can specify a `lob-settings` object with the following parameters:
+ `mode` – Specifies the mechanism for handling LOB migration for the selected table or view as follows: 
  + `limited` – The default limited LOB mode is the fastest and most efficient mode. Use this mode only if all of your LOBs are small (within 100 MB in size) or the target endpoint doesn't support an unlimited LOB size. Also, if you use `limited`, all LOBs need to be within the size that you set for `bulk-max-size`. 

    In this mode for a full load task, the replication instance migrates all LOBs inline together with other column data types as part of main table or view storage. However, the instance truncates any migrated LOB larger than your `bulk-max-size` value to the specified size. For a change data capture (CDC) load task, the instance migrates all LOBs using a source table lookup, as in standard full LOB mode (see the following).
**Note**  
You can migrate views for full-load tasks only.
  + `unlimited` – The migration mechanism for full LOB mode depends on the value you set for `bulk-max-size` as follows:
    + **Standard full LOB mode** – When you set `bulk-max-size` to zero, the replication instance migrates all LOBs using standard full LOB mode. This mode requires a lookup in the source table or view to migrate every LOB, regardless of size. This approach typically results in a much slower migration than for limited LOB mode. Use this mode only if all or most of your LOBs are large (1 GB or larger).
    + **Combination full LOB mode** – When you set `bulk-max-size` to a nonzero value, this full LOB mode uses a combination of limited LOB mode and standard full LOB mode. That is, for a full-load task, if a LOB size is within your `bulk-max-size` value, the instance migrates the LOB inline as in limited LOB mode. If the LOB size is greater than this value, the instance migrates the LOB using a source table or view lookup as in standard full LOB mode. For a change data capture (CDC) load task, the instance migrates all LOBs using a source table lookup, as in standard full LOB mode (see the following). It does so regardless of LOB size.
**Note**  
You can migrate views for full-load tasks only.

      This mode results in a migration speed that is a compromise between the faster, limited LOB mode and the slower, standard full LOB mode. Use this mode only when you have a mix of small and large LOBs, and most of the LOBs are small.

      This combination full LOB mode is available only for the following endpoints:
      + IBM Db2 as source 
      + SAP ASE as source or target

    Regardless of the mechanism you specify for `unlimited` mode, the instance migrates all LOBs fully, without truncation.
  + `none` – The replication instance migrates LOBs in the selected table or view using your task LOB settings. Use this option to help compare migration results with and without LOB settings for the selected table or view.

  If the specified table or view has LOBs included in the replication, you can set the `BatchApplyEnabled` task setting to `true` only when using `limited` LOB mode. 

  In some cases, you might set `BatchApplyEnabled` to `true` and `BatchApplyPreserveTransaction` to `false`. In these cases, the instance sets `BatchApplyPreserveTransaction` to `true` if the table or view has LOBs and the source and target endpoints are Oracle.
+ `bulk-max-size` – Set this value to zero or a nonzero value, in kilobytes, depending on the `mode` as described for the previous items. In `limited` mode, you must set a nonzero value for this parameter.

  The instance converts LOBs to binary format. Therefore, to specify the largest LOB that you need to replicate, multiply its size by three. For example, if your largest LOB is 2 MB, set `bulk-max-size` to 6,000 KB (6 MB).
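The sizing rule above works out as follows; a trivial helper, using the 1,000 KB-per-MB convention that the 6,000 KB example implies:

```python
def bulk_max_size_kb(largest_lob_mb):
    # LOBs are converted to binary, so allow roughly three times the
    # size of the largest LOB you expect to replicate (see above).
    return int(largest_lob_mb * 3 * 1000)

print(bulk_max_size_kb(2))  # 6000, i.e. 6 MB for a largest LOB of 2 MB
```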

## Table-settings examples
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Tablesettings.Examples"></a>

Following, you can find some examples that demonstrate the use of table settings.

**Example Load a table segmented by partitions**  
The following example loads a `SALES` table in your source more efficiently by loading it in parallel based on all its partitions.  

```
{
   "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "HR",
                "table-name": "SALES"
            },
            "parallel-load": {
                "type": "partitions-auto"
            }
        }
     ]
}
```

**Example Load a table segmented by subpartitions**  
The following example loads a `SALES` table in your Oracle source more efficiently by loading it in parallel based on all its subpartitions.  

```
{
   "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "HR",
                "table-name": "SALES"
            },
            "parallel-load": {
                "type": "subpartitions-auto"
            }
        }
     ]
}
```

**Example Load a table segmented by a list of partitions**  
The following example loads a `SALES` table in your source by loading it in parallel by a particular list of partitions. Here, the specified partitions are named after values starting with portions of the alphabet, for example `ABCD`, `EFGH`, and so on.   

```
{
    "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "HR",
                "table-name": "SALES"
            },
            "parallel-load": {
                "type": "partitions-list",
                "partitions": [
                    "ABCD",
                    "EFGH",
                    "IJKL",
                    "MNOP",
                    "QRST",
                    "UVWXYZ"
                ]
            }
        }
    ]
}
```

**Example Load an Oracle table segmented by a selected list of partitions and subpartitions**  
The following example loads a `SALES` table in your Oracle source by loading it in parallel by a selected list of partitions and subpartitions. Here, the specified partitions are named after values starting with portions of the alphabet, for example `ABCD`, `EFGH`, and so on. The specified subpartitions are named after values starting with numerals, for example `01234` and `56789`.  

```
{
    "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "HR",
                "table-name": "SALES"
            },
            "parallel-load": {
                "type": "partitions-list",
                "partitions": [
                    "ABCD",
                    "EFGH",
                    "IJKL",
                    "MNOP",
                    "QRST",
                    "UVWXYZ"
                ],
                "subpartitions": [
                    "01234",
                    "56789"
                ]
            }
        }
    ]
}
```

**Example Load a table segmented by ranges of column values**  
The following example loads a `SALES` table in your source by loading it in parallel by segments specified by ranges of the `SALES_NO` and `REGION` column values.  

```
{
    "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "HR",
                "table-name": "SALES"
            },
            "parallel-load": {
                "type": "ranges",
                "columns": [
                    "SALES_NO",
                    "REGION"
                ],
                "boundaries": [
                    [
                        "1000",
                        "NORTH"
                    ],
                    [
                        "3000",
                        "WEST"
                    ]
                ]
            }
        }
    ]
}
```
Here, two columns are specified for the segment ranges with the names, `SALES_NO` and `REGION`. Two boundaries are specified with two sets of column values (`["1000","NORTH"]` and `["3000","WEST"]`).  
These two boundaries thus identify the following three table segments to load in parallel:    
Segment 1  
Rows with `SALES_NO` less than or equal to 1,000 and `REGION` less than "NORTH". In other words, sales numbers up to 1,000 in the EAST region.  
Segment 2  
Rows other than **Segment 1** with `SALES_NO` less than or equal to 3,000 and `REGION` less than "WEST". In other words, sales numbers over 1,000 up to 3,000 in the NORTH and SOUTH regions.  
Segment 3  
All remaining rows other than **Segment 1** and **Segment 2**. In other words, sales numbers over 3,000 in the "WEST" region.
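The boundary logic above is plain lexicographic comparison, which you can sanity-check with Python tuples. The sample rows are illustrative.

```python
# Upper boundaries from the table-settings rule above.
boundaries = [(1000, "NORTH"), (3000, "WEST")]

def segment_for(sales_no, region):
    # A row belongs to the first segment whose upper boundary its
    # (SALES_NO, REGION) tuple is lexicographically below.
    for i, bound in enumerate(boundaries, start=1):
        if (sales_no, region) < bound:
            return i
    return len(boundaries) + 1  # all remaining rows form the top segment

print(segment_for(500, "EAST"))    # 1
print(segment_for(2500, "SOUTH"))  # 2
print(segment_for(4000, "WEST"))   # 3
```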

**Example Load two tables: One segmented by ranges and another segmented by partitions**  
The following example loads a `SALES` table in parallel by segment boundaries that you identify. It also loads an `ORDERS` table in parallel by all of its partitions, as with previous examples.  

```
{
    "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "HR",
                "table-name": "SALES"
            },
            "parallel-load": {
                "type": "ranges",
                "columns": [
                    "SALES_NO",
                    "REGION"
                ],
                "boundaries": [
                    [
                        "1000",
                        "NORTH"
                    ],
                    [
                        "3000",
                        "WEST"
                    ]
                ]
            }
        },
        {
            "rule-type": "table-settings",
            "rule-id": "3",
            "rule-name": "3",
            "object-locator": {
                "schema-name": "HR",
                "table-name": "ORDERS"
            },
            "parallel-load": {
                "type": "partitions-auto" 
            }
        }
    ]
}
```

**Example Load a table with LOBs using the task LOB settings**  
The following example loads an `ITEMS` table in your source, including all LOBs, using its task LOB settings (`mode` is `none`). The 100 MB `bulk-max-size` setting is ignored; it remains only so that you can quickly switch the rule back to `limited` or `unlimited` mode.  

```
{
   "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "INV",
                "table-name": "ITEMS"
            },
            "lob-settings": {
                "mode": "none",
                "bulk-max-size": "100000"
            }
        }
     ]
}
```

**Example Load a table with LOBs using limited LOB mode**  
The following example loads an `ITEMS` table including LOBs in your source using limited LOB mode (the default) with a maximum nontruncated size of 100 MB. Any LOBs that are larger than this size are truncated to 100 MB. All LOBs are loaded inline with all other column data types.  

```
{
   "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "INV",
                "table-name": "ITEMS"
            },
            "lob-settings": {
                "bulk-max-size": "100000"
            }
        }
     ]
}
```

**Example Load a table with LOBs using standard full LOB mode**  
The following example loads an `ITEMS` table in your source, including all its LOBs without truncation, using standard full LOB mode. All LOBs, regardless of size, are loaded separately from other data types using a lookup for each LOB in the source table.  

```
{
   "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "INV",
                "table-name": "ITEMS"
            },
            "lob-settings": {
                "mode": "unlimited",
                "bulk-max-size": "0"
            }
        }
     ]
}
```

**Example Load a table with LOBs using combination full LOB mode**  
The following example loads an `ITEMS` table in your source, including all its LOBs without truncation, using combination full LOB mode. All LOBs within 100 MB in size are loaded inline along with other data types, as in limited LOB mode. All LOBs over 100 MB in size are loaded separately from other data types. This separate load uses a lookup for each such LOB in the source table, as in standard full LOB mode.  

```
{
   "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "%",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "2",
            "object-locator": {
                "schema-name": "INV",
                "table-name": "ITEMS"
            },
            "lob-settings": {
                "mode": "unlimited",
                "bulk-max-size": "100000"
            }
        }
     ]
}
```
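The four example configurations above can be summarized in a short sketch. The decision logic is inferred from the examples, not taken from DMS source code, and sizes are in KB to match `bulk-max-size`:

```python
def lob_handling(mode: str, bulk_max_size_kb: int, lob_size_kb: int) -> str:
    """Sketch of how the table-settings examples above treat a single LOB.
    Illustrative only; inferred from the examples, not DMS internals."""
    if mode == "none":
        # Fall back to the task-level LOB settings.
        return "use task-level LOB settings"
    if mode == "limited" or (mode == "unlimited" and bulk_max_size_kb > 0
                             and lob_size_kb <= bulk_max_size_kb):
        # Limited mode loads inline (truncating oversized LOBs); combination
        # full LOB mode loads inline when the LOB fits within bulk-max-size.
        return "load inline with other columns"
    # Standard full LOB mode (bulk-max-size 0), or an oversized LOB in
    # combination mode: fetched separately via a per-LOB lookup.
    return "load separately via per-LOB lookup"
```

For instance, with the combination-mode settings above (`bulk-max-size` of 100000), a 50 MB LOB loads inline while a 200 MB LOB uses a lookup.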

# Using data masking to hide sensitive information
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Masking"></a>

To conceal sensitive data stored in one or more columns of the tables being migrated, you can use data masking transformation rule actions. Starting with version 3.5.4, AWS DMS supports data masking transformation rule actions in table mapping, enabling you to alter the contents of one or more columns during migration. AWS DMS loads the modified data into the target tables.

AWS Database Migration Service provides three options for data masking transformation rule actions:
+ Data Masking: Digits Mask
+ Data Masking: Digits Randomize
+ Data Masking: Hashing Mask

These data masking transformation rule actions can be configured in the table mapping of your replication task, similar to other transformation rules. The rule target should be set to the column level.

## Masking numbers in column data with a masking character
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Masking.Numbers"></a>

The "Data Masking: Digits Mask" transformation rule action allows you to mask numerical data in one or more columns by replacing digits with a single ASCII printable character that you specify (excluding empty or whitespace characters).

Here's an example that masks all digits in the `cust_passport_no` column of the `customer_master` table with the masking character `'#'` and loads the masked data into the target table:

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "cust_schema",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "cust_schema",
                "table-name": "customer_master",
                "column-name": "cust_passport_no"
            },
            "rule-action": "data-masking-digits-mask",
            "value": "#"
        }
    ]
}
```

For example, if the `cust_passport_no` column in the source table contains the value `"C6BGJ566669K"`, the AWS DMS task writes it to the target table as `"C#BGJ######K"`.
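As an illustration of the behavior (this sketch is not DMS code), Digits Mask amounts to the following:

```python
def digits_mask(value: str, mask_char: str = "#") -> str:
    """Replace every decimal digit with the masking character; all other
    characters, and the overall length, are left unchanged."""
    return "".join(mask_char if ch.isdigit() else ch for ch in value)

print(digits_mask("C6BGJ566669K"))  # C#BGJ######K
```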

## Replacing numbers in the column with random numbers
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Masking.Random"></a>

The transformation rule "Data Masking: Digits Randomize" allows you to replace each numerical digit in one or more columns with a random number. In the following example, AWS DMS replaces every digit in the `cust_passport_no` column of the source table `customer_master` with a random number and writes the modified data to the target table:

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "cust_schema",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "cust_schema",
                "table-name": "customer_master",
                "column-name": "cust_passport_no"
            },
            "rule-action": "data-masking-digits-randomize"
        }
    ]
}
```

For example, the AWS DMS task might transform the value `"C6BGJ566669K"` in the `cust_passport_no` column of the source table to something like `"C1BGJ842170K"` and write it to the target database.
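Illustratively (again, not DMS code), the transformation amounts to replacing each digit independently with a random digit:

```python
import random

def digits_randomize(value: str) -> str:
    """Replace every decimal digit with a random digit; other characters
    and the overall length are unchanged."""
    return "".join(random.choice("0123456789") if ch.isdigit() else ch
                   for ch in value)

print(digits_randomize("C6BGJ566669K"))  # e.g. C1BGJ842170K (output is random)
```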

## Replacing column data with hash value
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Masking.Hash"></a>

The transformation rule "Data Masking: Hashing Mask" allows you to replace the column data with a hash generated using the `SHA256` algorithm. The length of the hash will always be 64 characters, hence the target table column length should be 64 characters at minimum. Alternatively, you can add a `change-data-type` transformation rule action to the column to increase the width of the column in the target table.

The following example generates a 64-character long hash value for the data in the `cust_passport_no` column of the source table `customer_master` and loads the transformed data to the target table after increasing the column length:

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "1",
            "object-locator": {
                "schema-name": "cust_schema",
                "table-name": "%"
            },
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "2",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "cust_schema",
                "table-name": "customer_master",
                "column-name": "cust_passport_no"
            },
            "rule-action": "change-data-type",
            "data-type": {
                "type": "string",
                "length": "100",
                "scale": ""
            }
        },
        {
            "rule-type": "transformation",
            "rule-id": "3",
            "rule-name": "3",
            "rule-target": "column",
            "object-locator": {
                "schema-name": "cust_schema",
                "table-name": "customer_master",
                "column-name": "cust_passport_no"
            },
            "rule-action": "data-masking-hash-mask"
        }
    ]
}
```

For example, if the `cust_passport_no` column of the source table contains the value `"C6BGJ566669K"`, the AWS DMS task writes the hash `"7CB06784764C9030CCC41E25C15339FEB293FFE9B329A72B5FED564E99900C75"` to the target table.
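The 64-character length follows from SHA-256 itself: a 32-byte digest rendered as hex. A quick sketch (the UTF-8 encoding and uppercase hex output here are assumptions about presentation, not documented DMS internals):

```python
import hashlib

def hashing_mask(value: str) -> str:
    # A 32-byte SHA-256 digest rendered as hex is always 64 characters,
    # whatever the input length -- hence the minimum target column width.
    return hashlib.sha256(value.encode("utf-8")).hexdigest().upper()

print(len(hashing_mask("C6BGJ566669K")))  # 64
```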

## Limitations
<a name="CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Masking.Limitations"></a>
+ Each Data Masking transformation rule option is supported for specific AWS DMS data types only:
  + Data Masking: Digits Mask is supported for columns of data types: `WSTRING` and `STRING`.
  + Data Masking: Digits Randomize is supported for columns of data types `WSTRING`, `STRING`, `NUMERIC`, `INT1`, `INT2`, `INT4`, and `INT8`, including their unsigned counterparts.
  + Data Masking: Hashing Mask is supported for columns of data types: `WSTRING` and `STRING`.

  To learn how AWS DMS data types map to your source engine's data types, see [Source data types for Oracle](CHAP_Source.Oracle.md#CHAP_Source.Oracle.DataTypes), [Source data types for SQL Server](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.DataTypes), [Source data types for PostgreSQL](CHAP_Source.PostgreSQL.md#CHAP_Source-PostgreSQL-DataTypes), and [Source data types for MySQL](CHAP_Source.MySQL.md#CHAP_Source.MySQL.DataTypes).
+ Using a Data Masking rule action on a column with an incompatible data type causes an error in the DMS task. To specify the error-handling behavior, use the `DataMaskingErrorPolicy` task setting. For more information about `DataMaskingErrorPolicy`, see [Error handling task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.ErrorHandling.md).
+ If your source column's data type isn't supported for the masking option you plan to use, you can add a `change-data-type` transformation rule action to convert the column to a compatible type. The `rule-id` of the `change-data-type` transformation must be a smaller number than the `rule-id` of the masking transformation, so that the data type change happens before masking.
+ Use the Data Masking: Hashing Mask action to mask primary key, unique key, or foreign key columns, because the generated hash value is unique and consistent. The other two masking options can't guarantee uniqueness.
+ Data Masking: Digits Mask and Data Masking: Digits Randomize affect only the digits in the column data and don't change the length of the data. Data Masking: Hashing Mask replaces the entire column value, changing the length of the data to 64 characters. Therefore, create the target table accordingly, or add a `change-data-type` transformation rule for the column being masked.
+ Columns with a Data Masking transformation rule action are excluded from data validation in AWS DMS. If primary key or unique key columns are masked, data validation doesn't run for the table; the table's validation status is `No Primary key`.

# Using source filters
<a name="CHAP_Tasks.CustomizingTasks.Filters"></a>

You can use source filters to limit the number and type of records transferred from your source to your target. For example, you can specify that only employees with a location of headquarters are moved to the target database. Filters are part of a selection rule. You apply filters on a column of data. 

Source filters must follow these constraints:
+ A selection rule can have no filters or one or more filters.
+ Every filter can have one or more filter conditions.
+ If more than one filter is used, the list of filters is combined as if using an AND operator between the filters.
+ If more than one filter condition is used within a single filter, the list of filter conditions is combined as if using an OR operator between the filter conditions.
+ Filters are only applied when `rule-action = 'include'`.
+ Filters require a column name and a list of filter conditions. Filter conditions must have a filter operator that is associated with either one value, two values, or no value, depending on the operator.
+ Column names, table names, view names, and schema names are case-sensitive. For Oracle and Db2, always use uppercase names.
+ Filters only support tables with exact names. Filters do not support wildcards.
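The combination rules above (AND between filters, OR between conditions within a filter) can be sketched as a tiny evaluator. This is illustrative pseudologic modeling only the `eq`, `lte`, and `gte` operators, not DMS itself:

```python
def condition_matches(row_value, cond):
    """Evaluate one filter condition. Only eq/lte/gte are modeled here."""
    op = cond["filter-operator"]
    if op == "eq":
        return str(row_value) == cond["value"]
    if op == "lte":
        return float(row_value) <= float(cond["value"])
    if op == "gte":
        return float(row_value) >= float(cond["value"])
    raise ValueError(f"operator not modeled: {op}")

def row_included(row, filters):
    # Conditions within one filter are ORed; the filters themselves are ANDed.
    return all(
        any(condition_matches(row[f["column-name"]], c)
            for c in f["filter-conditions"])
        for f in filters
    )

# Two filters: (empid <= 100) AND (dept = tech).
filters = [
    {"filter-type": "source", "column-name": "empid",
     "filter-conditions": [{"filter-operator": "lte", "value": "100"}]},
    {"filter-type": "source", "column-name": "dept",
     "filter-conditions": [{"filter-operator": "eq", "value": "tech"}]},
]
print(row_included({"empid": 42, "dept": "tech"}, filters))   # True
print(row_included({"empid": 42, "dept": "sales"}, filters))  # False
```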

The following limitations apply to using source filters:
+ Filters don't calculate columns of right-to-left languages.
+ Don't apply filters to LOB columns.
+ Apply filters only to *immutable* columns, which aren't updated after creation. If source filters are applied to *mutable* columns, which can be updated after creation, adverse behavior can result. 

  For example, a filter to exclude or include specific rows in a column always excludes or includes the specified rows even if the rows are later changed. Suppose that you exclude or include rows 1–10 in column A, and they later change to become rows 11–20. In this case, they continue to be excluded or included even when the data is no longer the same.

  Similarly, suppose that a row outside of the filter's scope is later updated (or updated and deleted), and should then be excluded or included as defined by the filter. In this case, it's replicated at the target.

The following additional concerns apply when using source filters:
+ We recommend that you create an index using the columns included in the filtering definition and the primary key.

## Creating source filter rules in JSON
<a name="CHAP_Tasks.CustomizingTasks.Filters.Applying"></a>

You can create source filters using the JSON `filters` parameter of a selection rule. The `filters` parameter specifies an array of one or more JSON objects. Each object has parameters that specify the source filter type, column name, and filter conditions. These filter conditions include one or more filter operators and filter values. 

The following table shows the parameters for specifying source filtering in a `filters` object.


|  Parameter  |  Value  | 
| --- | --- | 
|   `filter-type`   | source | 
|  `column-name`  |  A parameter with the name of the source column to which you want the filter applied. The name is case-sensitive.  | 
|  `filter-conditions`  | An array of one or more objects containing a filter-operator parameter and zero or more associated value parameters, depending on the filter-operator value. | 
|  `filter-operator`  |  A parameter with one of the following values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.Filters.html)  | 
|  `value` or `start-value` and `end-value` or no values  |  Zero or more value parameters associated with `filter-operator`: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.Filters.html)  | 

The following examples show some common ways to use source filters.

**Example Single filter**  
The following filter replicates all employees where `empid >= 100` to the target database.  

```
 {
     "rules": [{
         "rule-type": "selection",
         "rule-id": "1",
         "rule-name": "1",
         "object-locator": {
             "schema-name": "test",
             "table-name": "employee"
         },
         "rule-action": "include",
         "filters": [{
             "filter-type": "source",
             "column-name": "empid",
             "filter-conditions": [{
                "filter-operator": "gte",
                "value": "100"
             }]
         }]
     }]
 }
```

**Example Multiple filter operators**  
The following filter applies multiple filter operators to a single column of data. The filter replicates all employees where `(empid <= 10)` OR `(empid is between 50 and 75)` OR `(empid >= 100)` to the target database.   

```
{
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "1",
        "object-locator": {
            "schema-name": "test",
            "table-name": "employee"
        },
        "rule-action": "include",
        "filters": [{
            "filter-type": "source",
            "column-name": "empid",
            "filter-conditions": [{
                "filter-operator": "lte",
                "value": "10"
            }, {
                "filter-operator": "between",
                "start-value": "50",
                "end-value": "75"
            }, {
                "filter-operator": "gte",
                "value": "100"
            }]
        }]
    }]
}
```

**Example Multiple filters**  
The following example applies filters to two columns in a table. It replicates all employees where `(empid <= 100)` AND `(dept = tech)` to the target database.   

```
{
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "1",
        "object-locator": {
            "schema-name": "test",
            "table-name": "employee"
        },
        "rule-action": "include",
        "filters": [{
            "filter-type": "source",
            "column-name": "empid",
            "filter-conditions": [{
                "filter-operator": "lte",
                "value": "100"
            }]
        }, {
            "filter-type": "source",
            "column-name": "dept",
            "filter-conditions": [{
                "filter-operator": "eq",
                "value": "tech"
            }]
        }]
    }]
}
```

**Example Filtering NULL values**  
The following filter shows how to filter on `NULL` values. It replicates all employees where `dept` is `NULL` to the target database.  

```
{
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "1",
        "object-locator": {
            "schema-name": "test",
            "table-name": "employee"
        },
        "rule-action": "include",
        "filters": [{
            "filter-type": "source",
            "column-name": "dept",
            "filter-conditions": [{
                "filter-operator": "null"
            }]
        }]
    }]
}
```

**Example Filtering using NOT operators**  
Some of the operators can be used in the negative form. The following filter replicates all employees where `(empid < 50) OR (empid > 75)` to the target database.  

```
{
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "1",
        "object-locator": {
            "schema-name": "test",
            "table-name": "employee"
        },
        "rule-action": "include",
        "filters": [{
            "filter-type": "source",
            "column-name": "empid",
            "filter-conditions": [{
                "filter-operator": "notbetween",
                "start-value": "50",
                "end-value": "75"
            }]
        }]
    }]
}
```

**Example Mixed filter operators**  
Starting with AWS DMS version 3.5.0, you can mix inclusive and negative operators.   
The following filter replicates all employees where `(empid != 50) AND (dept is not NULL)` to the target database.  

```
{
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "1",
        "object-locator": {
            "schema-name": "test",
            "table-name": "employee"
        },
        "rule-action": "include",
        "filters": [{
            "filter-type": "source",
            "column-name": "empid",
            "filter-conditions": [{
                "filter-operator": "noteq",
                "value": "50"
            }]
        }, {
            "filter-type": "source",
            "column-name": "dept",
            "filter-conditions": [{
                "filter-operator": "notnull"
            }]
        }]
    }]
}
```

Note the following when using `null` with other filter operators:
+ Using inclusive, negative and `null` filter conditions together within the same filter will not replicate records with `NULL` values.
+ Using negative and `null` filter conditions together without inclusive filter conditions within the same filter will not replicate any data.
+ Using negative filter conditions without a `null` filter condition set explicitly will not replicate records with `NULL` values.

## Filtering by time and date
<a name="CHAP_Tasks.CustomizingTasks.Filters.Dates"></a>

When selecting data to import, you can specify a date or time as part of your filter criteria. AWS DMS uses the date format YYYY-MM-DD and the time format YYYY-MM-DD HH:MM:SS.SSS for filtering. The AWS DMS comparison functions follow the SQLite conventions. For more information about SQLite data types and date comparisons, see [Datatypes in SQLite version 3](https://sqlite.org/datatype3.html) in the SQLite documentation.

The following filter shows how to filter on a date. It replicates all employees where `empstartdate >= January 1, 2002` to the target database.

**Example Single date filter**  

```
{
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "1",
        "object-locator": {
            "schema-name": "test",
            "table-name": "employee"
        },
        "rule-action": "include",
        "filters": [{
            "filter-type": "source",
            "column-name": "empstartdate",
            "filter-conditions": [{
                "filter-operator": "gte",
                "value": "2002-01-01"
            }]
        }]
    }]
}
```
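One reason the `YYYY-MM-DD` format matters: ISO-8601 strings compare the same way the dates they represent do, so SQLite-style text comparison yields correct date ordering. A quick illustration:

```python
# ISO-8601 date strings order correctly under plain string comparison.
assert "2002-01-15" >= "2002-01-01"
assert "2001-12-31" < "2002-01-01"

# A non-ISO format does not: lexicographically, "1/15/2002" sorts before
# "12/31/2001" even though the date it represents is later.
assert "1/15/2002" < "12/31/2001"
```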

# Enabling and working with premigration assessments for a task
<a name="CHAP_Tasks.AssessmentReport"></a>

A premigration assessment evaluates specified components of a database migration task to help identify any problems that might prevent a migration task from running as expected. This assessment gives you a chance to identify and fix issues before you run a new or modified task. This allows you to avoid delays related to task failures caused by missing requirements or known limitations.

AWS DMS provides access to two different options for premigration assessments:
+ **Data type assessment**: A legacy report that provides a limited scope of assessments.
+ **Premigration assessment run**: Contains various types of individual assessments, including data type assessment results.

**Note**  
If you choose a premigration assessment run, you don't need to choose a data type assessment separately.

 These options are described in the following topics:
+ [Specifying, starting, and viewing premigration assessment runs](CHAP_Tasks.PremigrationAssessmentRuns.md): A premigration assessment run (recommended) specifies one or more individual assessments to run based on a new or existing migration task configuration. Each individual assessment evaluates a specific element of a supported source and/or target database from the perspective of criteria such as the migration type, supported objects, index configuration, and other task settings, such as table mappings that identify the schemas and tables to migrate. 

  For example, an individual assessment might evaluate what source data types or primary key formats can or can't be migrated, possibly based on the AWS DMS engine version. You can start and view the results of the latest assessment run and view the results of all prior assessment runs for a task either using the AWS DMS Management Console or using the AWS CLI and SDKs to access the AWS DMS API. You can also view the results of prior assessment runs for a task in an Amazon S3 bucket that you have selected for AWS DMS to store these results.
**Note**  
The number and types of available individual assessments can increase over time. For more information about periodic updates, see [Specifying individual assessments](CHAP_Tasks.PremigrationAssessmentRuns.md#CHAP_Tasks.PremigrationAssessmentRuns.Individual). 
+ [Starting and viewing data type assessments (Legacy)](CHAP_Tasks.DataTypeAssessments.md): A data type (legacy) assessment returns the results of a single type of premigration assessment in a single JSON structure: the data types that might not be migrated correctly in a supported relational source database instance. This report returns the results for all problematic data types found in every schema and table in the source database that is selected for migration. 

# Creating prerequisites for premigration assessments
<a name="CHAP_Tasks.AssessmentReport.Prerequisites"></a>

This section describes the Amazon S3 and IAM resources you need to create a premigration assessment.

**Important**  
 The following prerequisites are only required if you supply your own Amazon S3 bucket and IAM role. 

## Create an S3 bucket
<a name="CHAP_Tasks.AssessmentReport.Prerequisites.S3"></a>

AWS DMS stores premigration assessment reports in an S3 bucket. To create the S3 bucket, do the following:

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose **Create bucket**.

1. On the **Create bucket** page, enter a globally unique name that includes your sign-in name for the bucket, such as dms-bucket-*yoursignin*.

1. Choose the AWS Region for the DMS migration task.

1. Leave the remaining settings as they are, and choose **Create bucket**.

## Create IAM resources
<a name="CHAP_Tasks.AssessmentReport.Prerequisites.IAM"></a>

DMS uses an IAM role and policy to access the S3 bucket to store premigration assessment results.

To create the IAM policy, do the following:

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. In the **Create policy** page, choose the **JSON** tab.

1. Paste the following JSON code into the editor, replacing the example code. Replace *amzn-s3-demo-bucket* with the name of the Amazon S3 bucket that you created in the previous section.


   ```
   {
      "Version":"2012-10-17",
      "Statement":[
         {
            "Effect":"Allow",
            "Action":[
               "s3:PutObject",
               "s3:DeleteObject",
               "s3:GetObject",
               "s3:PutObjectTagging"
            ],
            "Resource":[
               "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ]
         },
         {
            "Effect":"Allow",
            "Action":[
               "s3:ListBucket",
               "s3:GetBucketLocation"
            ],
            "Resource":[
               "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
         }
      ]
   }
   ```


1. Choose **Next: Tags**, and then choose **Next: Review**.

1. Enter **DMSPremigrationAssessmentS3Policy** for **Name**, and then choose **Create policy**.

To create the IAM role, do the following:

1. In the IAM console, in the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. On the **Select trusted entity** page, for **Trusted entity type**, choose **AWS service**. For **Use cases for other AWS services**, choose **DMS**.

1. Select the **DMS** check box, and then choose **Next**.

1. On the **Add permissions** page, choose **DMSPremigrationAssessmentS3Policy**. Choose **Next**.

1. On the **Name, review, and create** page, enter **DMSPremigrationAssessmentS3Role** for **Role name**, then choose **Create role**.

1. On the **Roles** page, search for **DMSPremigrationAssessmentS3Role**, and then choose it.

1. On the **DMSPremigrationAssessmentS3Role** page, choose the **Trust relationships** tab. Choose **Edit trust policy**.

1. On the **Edit trust policy** page, paste the following JSON into the editor, replacing the existing text.


   ```
   {
      "Version":"2012-10-17",
      "Statement":[
         {
            "Sid":"",
            "Effect":"Allow",
            "Principal":{
               "Service":"dms.amazonaws.com"
            },
            "Action":"sts:AssumeRole"
         }
      ]
   }
   ```


   This policy grants the `sts:AssumeRole` permission to DMS to put the premigration assessment run results into the S3 bucket.

1. Choose **Update policy**.

# Specifying, starting, and viewing premigration assessment runs
<a name="CHAP_Tasks.PremigrationAssessmentRuns"></a>

A premigration assessment specifies one or more individual assessments to run based on a new or existing migration task configuration. Each individual assessment evaluates a specific element of the source or target database depending on considerations such as the migration type, supported objects, index configuration, and other task settings, such as table mappings to identify the schemas and tables to migrate. For example, an individual assessment might evaluate what source data types or primary key formats can and cannot be migrated.

## Specifying individual assessments
<a name="CHAP_Tasks.PremigrationAssessmentRuns.Individual"></a>

When creating a new assessment run, you can choose to run some or all of the individual assessments that are applicable to your task configuration.

AWS DMS supports premigration assessment runs for the following relational source and target database engines:
+ [Oracle assessments](CHAP_Tasks.AssessmentReport.Oracle.md) 
+ [SQL Server assessments](CHAP_Tasks.AssessmentReport.SqlServer.md) 
+ [MySQL assessments](CHAP_Tasks.AssessmentReport.MySQL.md) (includes MariaDB and Amazon Aurora MySQL-Compatible Edition)
+ [PostgreSQL assessments](CHAP_Tasks.AssessmentReport.PG.md) (includes Amazon Aurora PostgreSQL-Compatible Edition)
+ [MariaDB assessments](CHAP_Tasks.AssessmentReport.MariaDB.md)
+ [Db2 LUW Assessments](CHAP_Tasks.AssessmentReport.Db2.md)

## Starting and viewing premigration assessment runs
<a name="CHAP_Tasks.PremigrationAssessmentRuns.AssessmentRun"></a>

You can start a premigration assessment run for a new or existing migration task using the AWS DMS Management Console, the AWS CLI, and the AWS DMS API.

**To start a premigration assessment run for a new or existing task**

1. From the **Database migration tasks** page in the AWS DMS Management Console, do one of the following:
   + To create a new task and assess it, choose **Create task**. The **Create database migration task** page opens:

     1. Enter the task settings required to create your task, including table mapping.

     1. In the **Premigration assessment** section, the **Premigration assessment run** checkbox is checked. This page contains the options to specify an assessment run for the new task.
**Note**  
When creating a new task, enabling a premigration assessment run disables the option to start the task automatically on task creation. You can start the task manually after the assessment run completes.
   + To assess an existing task, choose the **Identifier** for an existing task on the **Database migration tasks** page. The task page for the chosen existing task opens:

     1. Choose **Actions** and select **Create premigration assessment**. A **Create premigration assessment** page opens with options to specify an assessment run for the existing task. 

1. Enter a unique name for your assessment run, or leave the default value.

1. Select the available individual assessments that you want to include in this assessment run. You can only select the available individual assessments based on your current task settings. By default, all available individual assessments are enabled and selected.

1. Search for and choose an Amazon S3 bucket and folder in your account to store your assessment result report. For information about setting up resources for assessment runs, see [Creating prerequisites for premigration assessments](CHAP_Tasks.AssessmentReport.Prerequisites.md).

1. Select or enter an IAM role with full account access to your chosen Amazon S3 bucket and folder. For information about setting up resources for assessment runs, see [Creating prerequisites for premigration assessments](CHAP_Tasks.AssessmentReport.Prerequisites.md).

1. Optionally choose a setting to encrypt the assessment result report in your Amazon S3 bucket. For information about S3 bucket encryption, see [ Setting default server-side encryption behavior for Amazon S3 buckets ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html).

1. Choose **Create task** for a new task or choose **Create** for an existing task.

   The **Database migration tasks** page opens, listing your new or modified task with a **Status** of **Creating...** and a banner message indicating that your premigration assessment run starts after the task is created.

AWS DMS provides access to the latest and all prior premigration assessment runs using the AWS DMS Management Console, the AWS CLI, or the AWS DMS API. 

**To view results for the assessment run**

1. From the AWS DMS Management Console, choose the **Identifier** for your existing task on the **Database migration tasks** page. The task page for the existing task opens.

1. Choose the **Premigration assessments** tab on the existing task page. This opens a **Premigration assessments** section on that page showing results of the assessment runs, listed by name, in reverse chronological order. The latest result appears at the top of the list. Choose the name of the assessment run whose results you want to view.

The assessment run results start with the name of the chosen assessment run and an overview of its status, followed by a listing of the specified individual assessments and their status. You can then explore the details of each individual assessment by choosing its name in the list, with results available down to the table column level.

Both the status overview for an assessment run and each individual assessment show a **Status** value. This value indicates the overall status of the assessment run and a similar status for each individual assessment. Following is a list of the **Status** values for the assessment run:
+ `"cancelling"` – The assessment run is being cancelled.
+ `"deleting"` – The assessment run is being deleted.
+ `"failed"` – At least one individual assessment completed with a `failed` status. This status takes priority over all other statuses, including error conditions.
+ `"error-provisioning"` – An internal error occurred while resources were provisioned (during `provisioning` status). This status is only assigned when no individual assessments have a `failed` status, because provisioning errors might have prevented assessments from running that could have resulted in failed validations.
+ `"error-executing"` – An internal error occurred while individual assessments ran (during `running` status). This status is only assigned when no individual assessments have a `failed` status, because error conditions might have prevented assessments from completing that could have resulted in failed validations.
+ `"invalid state"` – The assessment run is in an unknown state.
+ `"passed"` – All individual assessments completed successfully, with no failed, warning, or error statuses.
+ `"provisioning"` – Resources required to run individual assessments are being provisioned.
+ `"running"` – Individual assessments are running.
+ `"starting"` – The assessment run is starting, but resources are not yet being provisioned for individual assessments.
+ `"warning"` – At least one individual assessment completed with a `warning` status, and no assessments have failed or error statuses.

Following is a list of the **Status** values for each individual assessment of the assessment run:
+ `"cancelled"` – The individual assessment was cancelled as part of cancelling the assessment run.
+ `"error"` – The individual assessment did not complete successfully.
+ `"failed"` – The individual assessment completed successfully with a failed validation result: view the details of the result for more information.
+ `"invalid state"` – The individual assessment is in an unknown state.
+ `"passed"` – The individual assessment completed with a successful validation result.
+ `"pending"` – The individual assessment is waiting to run.
+ `"running"` – The individual assessment is running.
+ `"warning"` – The individual assessment completed with a warning status.
+ `"skipped"` – The individual assessment were skipped during the assessment run.

You can also view the JSON files for the assessment run results on Amazon S3.

**To view the JSON files for the assessment run on Amazon S3**

1. From the AWS DMS Management Console, choose the Amazon S3 bucket link shown in the status overview of the assessment run. This displays a list of bucket folders and other Amazon S3 objects stored in the bucket. If your results are stored in a bucket folder, open the folder.

1. You can find your assessment run results in several JSON files. A `summary.json` file contains the overall results of the assessment run. The remaining files are each named for an individual assessment that was specified for the assessment run, such as `unsupported-data-types-in-source.json`. These files each contain the results for the corresponding individual assessment from the chosen assessment run.
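After downloading the result files from Amazon S3, you can inspect them locally. The following sketch assumes each per-assessment file holds a JSON object with a top-level `status` field; that schema is an assumption for illustration, so inspect your own result files first:

```python
import json
from pathlib import Path

def summarize_results(folder):
    """Return a per-assessment status map from a folder of downloaded
    result files. Assumes (hypothetically) that each per-assessment JSON
    file, e.g. unsupported-data-types-in-source.json, contains a dict
    with a top-level "status" field; summary.json is skipped."""
    results = {}
    for path in Path(folder).glob("*.json"):
        if path.name == "summary.json":
            continue  # overall results live here; we want the per-assessment files
        data = json.loads(path.read_text())
        results[path.stem] = data.get("status", "unknown")
    return results
```

A folder containing `unsupported-data-types-in-source.json` with `{"status": "warning"}` would yield `{"unsupported-data-types-in-source": "warning"}`.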

To start and view the results of premigration assessment runs for an existing migration task, you can run the following CLI commands and AWS DMS API operations:
+ CLI: [describe-applicable-individual-assessments](https://docs.aws.amazon.com/cli/latest/reference/dms/describe-applicable-individual-assessments), API: [DescribeApplicableIndividualAssessments](https://docs.aws.amazon.com/dms/latest/APIReference/API_DescribeApplicableIndividualAssessments.html) – Provides a list of individual assessments that you can specify for a new premigration assessment run, given one or more task configuration parameters.
+ CLI: [start-replication-task-assessment-run](https://docs.aws.amazon.com/cli/latest/reference/dms/start-replication-task-assessment-run), API: [StartReplicationTaskAssessmentRun](https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTaskAssessmentRun.html) – Starts a new premigration assessment run for one or more individual assessments of an existing migration task.
+ CLI: [describe-replication-task-assessment-runs](https://docs.aws.amazon.com/cli/latest/reference/dms/describe-replication-task-assessment-runs), API: [DescribeReplicationTaskAssessmentRuns](https://docs.aws.amazon.com/dms/latest/APIReference/API_DescribeReplicationTaskAssessmentRuns.html) – Returns a paginated list of premigration assessment runs based on filter settings.
+ CLI: [describe-replication-task-individual-assessments](https://docs.aws.amazon.com/cli/latest/reference/dms/describe-replication-task-individual-assessments), API: [DescribeReplicationTaskIndividualAssessments](https://docs.aws.amazon.com/dms/latest/APIReference/API_DescribeReplicationTaskIndividualAssessments.html) – Returns a paginated list of individual assessments based on filter settings.
+ CLI: [cancel-replication-task-assessment-run](https://docs.aws.amazon.com/cli/latest/reference/dms/cancel-replication-task-assessment-run), API: [CancelReplicationTaskAssessmentRun](https://docs.aws.amazon.com/dms/latest/APIReference/API_CancelReplicationTaskAssessmentRun.html) – Cancels, but doesn't delete, a single premigration assessment run.
+ CLI: [delete-replication-task-assessment-run](https://docs.aws.amazon.com/cli/latest/reference/dms/delete-replication-task-assessment-run), API: [DeleteReplicationTaskAssessmentRun](https://docs.aws.amazon.com/dms/latest/APIReference/API_DeleteReplicationTaskAssessmentRun.html) – Deletes the record of a single premigration assessment run.
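As a sketch of how the `StartReplicationTaskAssessmentRun` parameters fit together, the following builds the request you would pass to the AWS SDK for Python (boto3). The ARNs, bucket name, and run name are placeholders:

```python
# Sketch: assemble a StartReplicationTaskAssessmentRun request. All ARNs
# and names below are placeholders. With boto3, you would pass these
# parameters to client("dms").start_replication_task_assessment_run.

def build_assessment_run_params(task_arn, role_arn, bucket, run_name,
                                include_only=None):
    params = {
        "ReplicationTaskArn": task_arn,
        "ServiceAccessRoleArn": role_arn,  # role with access to the bucket
        "ResultLocationBucket": bucket,    # where the result JSON files land
        "AssessmentRunName": run_name,
    }
    if include_only:
        # Limit the run to specific individual assessments by API key,
        # e.g. ["unsupported-data-types-in-source"].
        params["IncludeOnly"] = include_only
    return params

# import boto3
# dms = boto3.client("dms")
# dms.start_replication_task_assessment_run(
#     **build_assessment_run_params(
#         "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",
#         "arn:aws:iam::123456789012:role/dms-assessment-role",
#         "my-assessment-bucket",
#         "pre-cutover-check",
#         include_only=["unsupported-data-types-in-source"]))
```

Omitting `IncludeOnly` runs all individual assessments applicable to the task's settings.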

# Individual assessments
<a name="CHAP_Tasks.AssessmentReport.Assessments"></a>

This section describes individual premigration assessments.

To create an individual premigration assessment using the AWS DMS API, use the listed API key for the `IncludeOnly` parameter of the [StartReplicationTaskAssessmentRun](https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTaskAssessmentRun.html) action.

**Topics**
+ [Assessments for all endpoint types](CHAP_Tasks.AssessmentReport.Assessments.All.md)
+ [Oracle assessments](CHAP_Tasks.AssessmentReport.Oracle.md)
+ [SQL Server assessments](CHAP_Tasks.AssessmentReport.SqlServer.md)
+ [MySQL assessments](CHAP_Tasks.AssessmentReport.MySQL.md)
+ [MariaDB assessments](CHAP_Tasks.AssessmentReport.MariaDB.md)
+ [PostgreSQL assessments](CHAP_Tasks.AssessmentReport.PG.md)
+ [Db2 LUW assessments](CHAP_Tasks.AssessmentReport.Db2.md)

# Assessments for all endpoint types
<a name="CHAP_Tasks.AssessmentReport.Assessments.All"></a>

This section describes individual premigration assessments for all endpoint types.

**Topics**
+ [Unsupported data types](#CHAP_Tasks.AssessmentReport.Assessments.All.UnsupportedDataTypes)
+ [Large objects (LOBs) are used but target LOB columns are not nullable](#CHAP_Tasks.AssessmentReport.Assessments.All.LOBsColsNotNullable)
+ [Source table with Large objects (LOBs) but without primary keys or unique constraints](#CHAP_Tasks.AssessmentReport.Assessments.All.LOBsNoPrimaryKey)
+ [Source table without primary key for CDC or full load and CDC tasks only](#CHAP_Tasks.AssessmentReport.Assessments.All.CDCNoPrimaryKey)
+ [Target table without primary keys for CDC tasks only](#CHAP_Tasks.AssessmentReport.Assessments.All.CDCOnlyNoPrimaryKey)
+ [Unsupported source primary key types - composite primary keys](#CHAP_Tasks.AssessmentReport.Assessments.All.CompositeNoPrimaryKey)

## Unsupported data types
<a name="CHAP_Tasks.AssessmentReport.Assessments.All.UnsupportedDataTypes"></a>

**API key:** `unsupported-data-types-in-source`

Checks for data types in the source endpoint that DMS doesn't support. Not all data types can be migrated between engines.

## Large objects (LOBs) are used but target LOB columns are not nullable
<a name="CHAP_Tasks.AssessmentReport.Assessments.All.LOBsColsNotNullable"></a>

**API key:** `full-lob-not-nullable-at-target`

Checks for the nullability of a LOB column in the target when the replication uses full LOB mode or inline LOB mode. DMS requires LOB columns to be nullable when using these LOB modes. This assessment requires the source and target databases to be relational.

## Source table with Large objects (LOBs) but without primary keys or unique constraints
<a name="CHAP_Tasks.AssessmentReport.Assessments.All.LOBsNoPrimaryKey"></a>

**API key:** `table-with-lob-but-without-primary-key-or-unique-constraint`

 Checks for the presence of source tables with LOBs but without a primary key or a unique key. A table must have a primary key or a unique key for DMS to migrate LOBs. This assessment requires the source database to be relational.

## Source table without primary key for CDC or full load and CDC tasks only
<a name="CHAP_Tasks.AssessmentReport.Assessments.All.CDCNoPrimaryKey"></a>

**API key:** `table-with-no-primary-key-or-unique-constraint`

 Checks for the presence of a primary key or a unique key in source tables for a full-load and change data capture (CDC) migration, or a CDC-only migration. A lack of a primary key or a unique key can cause performance issues during the CDC migration. This assessment requires the source database to be relational, and the migration type to include CDC.

## Target table without primary keys for CDC tasks only
<a name="CHAP_Tasks.AssessmentReport.Assessments.All.CDCOnlyNoPrimaryKey"></a>

**API key:** `target-table-has-unique-key-or-primary-key-for-cdc`

 Checks for the presence of a primary key or a unique key in already-created target tables for a CDC-only migration. A lack of a primary key or a unique key can cause full table scans in the target when DMS applies updates and deletes. This can result in performance issues during the CDC migration. This assessment requires the target database to be relational, and the migration type to include CDC.

## Unsupported source primary key types - composite primary keys
<a name="CHAP_Tasks.AssessmentReport.Assessments.All.CompositeNoPrimaryKey"></a>

**API key:** `unsupported-source-pk-type-for-elasticsearch-target`

Checks for the presence of composite primary keys in source tables when migrating to Amazon OpenSearch Service. The primary key of the source table must consist of a single column. This assessment requires the source database to be relational, and the target to be OpenSearch Service.

**Note**  
DMS supports migrating a source database to an OpenSearch Service target where the source primary key consists of multiple columns. 

# Oracle assessments
<a name="CHAP_Tasks.AssessmentReport.Oracle"></a>

For more information about permissions when using Oracle as a source, see [User account privileges required on a self-managed Oracle source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Self-Managed.Privileges) or [User account privileges required on an AWS-managed Oracle source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Amazon-Managed).

**Note**  
This section describes individual premigration assessments for migration tasks that use Oracle as a source or a target for AWS DMS.  
If you are using a self-managed Oracle database as a source for AWS DMS, use the following permission set:  

```
grant select on gv_$parameter to dms_user;
grant select on v_$instance to dms_user;
grant select on v_$version to dms_user;
grant select on gv_$ASM_DISKGROUP to dms_user;
grant select on gv_$database to dms_user;
grant select on DBA_DB_LINKS to dms_user;
grant select on gv_$log_History to dms_user;
grant select on gv_$log to dms_user;
grant select on dba_types to dms_user;
grant select on dba_users to dms_user;
grant select on dba_directories to dms_user;
grant execute on SYS.DBMS_XMLGEN to dms_user;
```
Additional permissions are required if you are using a self-managed Oracle database as a source for AWS DMS Serverless:  

```
grant select on dba_segments to dms_user;
grant select on v_$tablespace to dms_user;
grant select on dba_tab_subpartitions to dms_user;
grant select on dba_extents to dms_user;
```
If you are using an AWS-managed Oracle database as a source for AWS DMS, use the following set of permissions:  

```
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('V_$PARAMETER', 'dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('V_$INSTANCE', 'dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('V_$VERSION','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('GV_$ASM_DISKGROUP','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('GV_$DATABASE','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('DBA_DB_LINKS','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('GV_$LOG_HISTORY','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('GV_$LOG','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('DBA_TYPES','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('DBA_USERS','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('DBA_DIRECTORIES','dms_user', 'SELECT');
GRANT SELECT ON RDSADMIN.RDS_CONFIGURATION to dms_user;
GRANT EXECUTE ON SYS.DBMS_XMLGEN TO dms_user;
```
Additional permissions are required if you are using an AWS-managed Oracle database as a source for AWS DMS Serverless:  

```
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('DBA_SEGMENTS','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('DBA_TAB_SUBPARTITIONS','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('DBA_EXTENTS','dms_user', 'SELECT');
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('V_$TABLESPACE','dms_user', 'SELECT');
```
If you are using a self-managed Oracle database as a target for AWS DMS, use the following set of permissions:  

```
grant select on v_$instance to dms_user;
grant execute on SYS.DBMS_XMLGEN to dms_user;
```
If you are using an AWS-managed Oracle database as a target for AWS DMS, use the following set of permissions:  

```
EXEC RDSADMIN.RDSADMIN_UTIL.GRANT_SYS_OBJECT('V_$INSTANCE', 'dms_user', 'SELECT');
GRANT EXECUTE ON SYS.DBMS_XMLGEN TO dms_user;
```

**Topics**
+ [Validate that limited LOB mode only is used when `BatchApplyEnabled` is enabled](#CHAP_Tasks.AssessmentReport.Oracle.LimitedLOBMode)
+ [Validate if tables on the source have columns without scale specified for the Number data type](#CHAP_Tasks.AssessmentReport.Oracle.NumberTypeWithoutScale)
+ [Validate triggers on the target database](#CHAP_Tasks.AssessmentReport.Oracle.TriggersOnTargetDatabase)
+ [Validate if source has archivelog `DEST_ID` set to 0](#CHAP_Tasks.AssessmentReport.Oracle.UseZeroDestIDTrue)
+ [Validate if secondary indexes are enabled on the target database during full-load](#CHAP_Tasks.AssessmentReport.Oracle.SecondaryIndexesEnabled)
+ [Validate if tables used in the DMS task scope with BatchApplyEnabled have more than 999 columns](#CHAP_Tasks.AssessmentReport.Oracle.SetBatchApplyEnabledTrue)
+ [Check supplemental logging on database level](#CHAP_Tasks.AssessmentReport.Oracle.SupplementalLogging)
+ [Validate if required DB link is created for Standby](#CHAP_Tasks.AssessmentReport.Oracle.DbLink)
+ [Oracle validation for LOB datatype and if binary reader is configured](#CHAP_Tasks.AssessmentReport.Oracle.Lob)
+ [Validate if the database is CDB](#CHAP_Tasks.AssessmentReport.Oracle.Cdb)
+ [Check the Oracle Database Edition](#CHAP_Tasks.AssessmentReport.Oracle.Express)
+ [Validate Oracle CDC method for DMS](#CHAP_Tasks.AssessmentReport.Oracle.CdcConfigurations)
+ [Validate Oracle RAC configuration for DMS](#CHAP_Tasks.AssessmentReport.Oracle.Rac)
+ [Validate if DMS user has permissions on target](#CHAP_Tasks.AssessmentReport.Oracle.TargetPermissions)
+ [Validate if supplemental logging is required for all columns](#CHAP_Tasks.AssessmentReport.Oracle.SupplementalLoggingColumns)
+ [Validate if supplemental logging is enabled on tables with Primary or Unique keys](#CHAP_Tasks.AssessmentReport.Oracle.SupplementalLoggingIndexes)
+ [Validate if there are SecureFile LOBs and the task is configured for Full LOB mode](#CHAP_Tasks.AssessmentReport.Oracle.SecureFileLOBs)
+ [Validate whether Function-Based Indexes are being used within the tables included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.FunctionBasedIndexes)
+ [Validate whether global temporary tables are being used on the tables included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.GlobalTemporaryTables)
+ [Validate whether index-organized tables with an overflow segment are being used on the tables included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.IndexOrganizedTables)
+ [Validate if multilevel nesting tables are used on the tables included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.MultilevelNestingTables)
+ [Validate if invisible columns are used on the tables included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.InvisibleColumns)
+ [Validate if materialized views based on a ROWID column are used on the tables included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.RowIDMaterialViews)
+ [Validate if Active Data Guard DML Redirect feature is used.](#CHAP_Tasks.AssessmentReport.Oracle.ActiveDataGuard)
+ [Validate if Hybrid Partitioned Tables are used.](#CHAP_Tasks.AssessmentReport.Oracle.HybridPartitionedTables)
+ [Validate if schema-only Oracle accounts are used](#CHAP_Tasks.AssessmentReport.Oracle.SchemaOnly)
+ [Validate if Virtual Columns are used](#CHAP_Tasks.AssessmentReport.Oracle.VirtualColumns)
+ [Validate whether table names defined in the task scope contain apostrophes.](#CHAP_Tasks.AssessmentReport.Oracle.NamesWithApostrophes)
+ [Validate whether the columns defined in the task scope have `XMLType`, `Long`, or `Long Raw` datatypes and verify the LOB mode configuration in the task settings.](#CHAP_Tasks.AssessmentReport.Oracle.XMLLongRawDatatypes)
+ [Validate whether the source Oracle version is supported by AWS DMS.](#CHAP_Tasks.AssessmentReport.Oracle.SourceOracleVersion)
+ [Validate whether the target Oracle version is supported by AWS DMS.](#CHAP_Tasks.AssessmentReport.Oracle.TargetOracleVersion)
+ [Validate whether the DMS user has the required permissions to use data validation.](#CHAP_Tasks.AssessmentReport.Oracle.DataValidation)
+ [Validate if the DMS user has permissions to use Binary Reader with Oracle ASM](#CHAP_Tasks.AssessmentReport.Oracle.BinaryReaderPrivilegesASM)
+ [Validate if the DMS user has permissions to use Binary Reader with Oracle non-ASM](#CHAP_Tasks.AssessmentReport.Oracle.BinaryReaderPrivilegesNonASM)
+ [Validate if the DMS user has permissions to use Binary Reader with CopyToTempFolder method](#CHAP_Tasks.AssessmentReport.Oracle.BinaryReaderTemp)
+ [Validate if the DMS user has permissions to use Oracle Standby as a Source](#CHAP_Tasks.AssessmentReport.Oracle.StandbySource)
+ [Validate if the DMS source is connected to an application container PDB](#CHAP_Tasks.AssessmentReport.Oracle.AppPdb)
+ [Validate if the table has XML datatypes included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.XmlColumns)
+ [Validate whether archivelog mode is enabled on the source database.](#CHAP_Tasks.AssessmentReport.Oracle.Archivelog)
+ [Validates the archivelog retention for RDS Oracle.](#CHAP_Tasks.AssessmentReport.Oracle.ArchivelogRetention)
+ [Validate if the table has Extended datatypes included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.ExtendedColumns)
+ [Validate the length of the object name included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.30ByteLimit)
+ [Validate if the DMS source is connected to an Oracle PDB](#CHAP_Tasks.AssessmentReport.Oracle.PDBEnabled)
+ [Validate if the table has spatial columns included in the task scope.](#CHAP_Tasks.AssessmentReport.Oracle.SpatialColumns)
+ [Validate if the DMS source is connected to an Oracle standby.](#CHAP_Tasks.AssessmentReport.Oracle.StandbyDB)
+ [Validate if the source database tablespace is encrypted using TDE.](#CHAP_Tasks.AssessmentReport.Oracle.StandbyDB)
+ [Validates if the source database uses Automatic Storage Management (ASM)](#CHAP_Tasks.AssessmentReport.Oracle.ASMSource)
+ [Validate if batch apply is enabled and whether the table on the target Oracle database has parallelism enabled at the table or index level](#CHAP_Tasks.AssessmentReport.Oracle.batchapply)
+ [Recommend “Bulk Array Size” parameter by validating the tables in the task scope](#CHAP_Tasks.AssessmentReport.Oracle.bulkarraysize)
+ [Validate if HandleCollationDiff task setting is Configured](#CHAP_Tasks.AssessmentReport.Oracle.handlecollationdiff)
+ [Validate if table has primary key or unique index and its state is VALID when DMS validation is enabled](#CHAP_Tasks.AssessmentReport.Oracle.pkvalidity)
+ [Validate if Binary Reader is used for Oracle Standby as a source](#CHAP_Tasks.AssessmentReport.Oracle.binaryreader)
+ [Validate if the AWS DMS user has the required directory permissions to replicate data from an Oracle RDS Standby database.](#CHAP_Tasks.AssessmentReport.Oracle.directorypermissions)
+ [Validate the type of Oracle Standby used for replication](#CHAP_Tasks.AssessmentReport.Oracle.physicalstandby)
+ [Validate if required directories are created for RDS Oracle standby](#CHAP_Tasks.AssessmentReport.Oracle.rdsstandby)
+ [Validate if Primary Key or Unique Index exists on target for Batch Apply](#CHAP_Tasks.AssessmentReport.Oracle.batchapplypkui)
+ [Validate if both Primary Key and Unique index exist on target for Batch Apply](#CHAP_Tasks.AssessmentReport.Oracle.batchapplypkuitarget)
+ [Validate if unsupported HCC levels are used for Full Load](#CHAP_Tasks.AssessmentReport.Oracle.hccfullload)
+ [Validate if unsupported HCC levels are used for Full Load with CDC](#CHAP_Tasks.AssessmentReport.Oracle.hccandcdc)
+ [Validate if unsupported HCC compression used for CDC](#CHAP_Tasks.AssessmentReport.Oracle.binaryreaderhcccdc)
+ [CDC Recommendation based on source compression method](#CHAP_Tasks.AssessmentReport.Oracle.cdcmethodbycompression)
+ [Check if batch apply is enabled and validate whether the table has more than 999 columns](#CHAP_Tasks.AssessmentReport.Oracle.batchapplylob)
+ [Check Transformation Rule for Digits Randomize](#CHAP_Tasks.AssessmentReport.Oracle.digits.randomize)
+ [Check Transformation Rule for Digits mask](#CHAP_Tasks.AssessmentReport.Oracle.digits.mask)
+ [Check Transformation Rule for Hashing mask](#CHAP_Tasks.AssessmentReport.Oracle.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Oracle.all.digit.random)
+ [Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Oracle.all.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Oracle.all.digit.mask)
+ [Validate that replication to a streaming target does not include LOBs or extended data type columns](#CHAP_Tasks.AssessmentReport.Oracle.streaming-target)
+ [Validate that CDC-only task is configured to use the `OpenTransactionWindow` endpoint setting](#CHAP_Tasks.AssessmentReport.Oracle.open.tx.window)
+ [Validate that at least one selected object exists in the source database](#CHAP_Tasks.AssessmentReport.Oracle.all.check.source.selection.rules)
+ [Validate that target foreign key constraints are disabled for migration](#CHAP_Tasks.AssessmentReport.Oracle.target.foreign.key.constraints.check)
+ [Validate that the Oracle database and AWS DMS versions are compatible](#CHAP_Tasks.AssessmentReport.Oracle.dms.compatibility.version.check)
+ [Validate that secondary constraints and indexes (non-primary) are present in the source database](#CHAP_Tasks.AssessmentReport.Oracle.all.check.secondary.constraints)
+ [Validate that session timeout settings (`IDLE_TIME`) are set to `UNLIMITED`](#CHAP_Tasks.AssessmentReport.Oracle.check.idle.time)
+ [Validate that the AWS DMS user has all required permissions on the source database](#CHAP_Tasks.AssessmentReport.Oracle.validate.permissions.on.source)
+ [Validate that `XMLTYPE` or LOB columns exist when Oracle LogMiner is used](#CHAP_Tasks.AssessmentReport.Oracle.update.lob.columns)
+ [Validate that the target endpoint is not a read replica](#CHAP_Tasks.AssessmentReport.Oracle.read.replica)
+ [Validate that the Oracle target does not have CONTEXT indexes when using direct path load](#CHAP_Tasks.AssessmentReport.Oracle.directpath.index)
+ [Validate that `FailTasksOnLobTruncation` is enabled when using limited LOB mode with existing LOB columns](#CHAP_Tasks.AssessmentReport.Oracle.FailTasksOnLobTruncation)
+ [Validate that `EnableHomogenousPartitionOps` endpoint setting is enabled](#CHAP_Tasks.AssessmentReport.Oracle.Homogenous.partition)

## Validate that limited LOB mode only is used when `BatchApplyEnabled` is enabled
<a name="CHAP_Tasks.AssessmentReport.Oracle.LimitedLOBMode"></a>

**API key:** `oracle-batch-apply-lob-mode`

This premigration assessment validates whether tables in the DMS task include LOB columns. If LOB columns are included in the scope of the task, you must use `BatchApplyEnabled` together with limited LOB mode only.

For more information, see [Target metadata task settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.html).

## Validate if tables on the source have columns without scale specified for the Number data type
<a name="CHAP_Tasks.AssessmentReport.Oracle.NumberTypeWithoutScale"></a>

**API key:** `oracle-number-columns-without-scale`

This premigration assessment validates whether the DMS task includes columns of the NUMBER data type without a scale specified. We recommend that you set the endpoint setting `NumberDataTypeScale` to the value specified in the assessment report.

For more information, see [Endpoint settings when using Oracle as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.ConnectionAttrib).

## Validate triggers on the target database
<a name="CHAP_Tasks.AssessmentReport.Oracle.TriggersOnTargetDatabase"></a>

**API key:** `oracle-target-triggers-are-enabled`

This premigration assessment validates whether triggers are enabled on the target database. The assessment fails if triggers are enabled. We recommend that you disable or remove the triggers during the migration.

For more information, see [Best practices for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html).

## Validate if source has archivelog `DEST_ID` set to 0
<a name="CHAP_Tasks.AssessmentReport.Oracle.UseZeroDestIDTrue"></a>

**API key:** `oracle-zero-archive-log-dest-id`

This premigration assessment validates whether the extra connection attribute `useZeroDestid=true` is set on the source endpoint when the archived log `DEST_ID` is set to 0.

For more information, see [ How to handle AWS DMS replication when used with Oracle database in fail-over scenarios](https://aws.amazon.com/blogs/database/how-to-handle-aws-dms-replication-when-used-with-oracle-database-in-fail-over-scenarios/).

## Validate if secondary indexes are enabled on the target database during full-load
<a name="CHAP_Tasks.AssessmentReport.Oracle.SecondaryIndexesEnabled"></a>

**API key:** `oracle-check-secondary-indexes`

This premigration assessment validates whether secondary indexes are enabled during a full-load on the target database. We recommend that you disable or remove the secondary indexes during full-load.

For more information, see [Best practices for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html).

## Validate if tables used in the DMS task scope with BatchApplyEnabled have more than 999 columns
<a name="CHAP_Tasks.AssessmentReport.Oracle.SetBatchApplyEnabledTrue"></a>

**API key:** `oracle-batch-apply-lob-999`

Tables with batch optimized apply mode enabled can't have more than a total of 999 columns. Tables that have more than 999 columns cause AWS DMS to process the batch one by one, which increases latency. AWS DMS uses the formula `2 * columns_in_original_table + columns_in_primary_key <= 999` to calculate the total number of columns per table supported in batch-optimized apply mode.
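
As a rough sketch of how to find affected tables ahead of time, the formula can be evaluated against the Oracle data dictionary. This query is illustrative, not the exact query DMS runs; restrict the owners to your own schemas:

```
SELECT t.owner, t.table_name
FROM all_tables t
WHERE 2 * (SELECT COUNT(*) FROM all_tab_columns c
           WHERE c.owner = t.owner AND c.table_name = t.table_name)
    + (SELECT COUNT(*) FROM all_cons_columns cc
       JOIN all_constraints con
         ON con.owner = cc.owner AND con.constraint_name = cc.constraint_name
       WHERE con.constraint_type = 'P'
         AND cc.owner = t.owner AND cc.table_name = t.table_name) > 999;
```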

For more information, see [Limitations on Oracle as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html#CHAP_Target.Oracle.Limitations).

## Check supplemental logging on database level
<a name="CHAP_Tasks.AssessmentReport.Oracle.SupplementalLogging"></a>

**API key:** `oracle-supplemental-db-level`

This premigration assessment validates if minimum supplemental logging is enabled at the database level. You must enable supplemental logging to use an Oracle database as a source for migration. 

To enable supplemental logging, use the following query:

```
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA
```

For more information, see [Setting up supplemental logging](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Self-Managed.Configuration.SupplementalLogging).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.
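
To check the current state manually before running the assessment, you can query `V$DATABASE`; a value of `YES` or `IMPLICIT` indicates that minimal supplemental logging is enabled:

```
SELECT supplemental_log_data_min FROM v$database;
```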

## Validate if required DB link is created for Standby
<a name="CHAP_Tasks.AssessmentReport.Oracle.DbLink"></a>

**API key:** `oracle-validate-standby-dblink`

This premigration assessment validates whether the database link `AWSDMS_DBLINK` is created for the Oracle standby database source. `AWSDMS_DBLINK` is a prerequisite for using a standby database as a source. When using an Oracle standby database as a source, AWS DMS doesn't validate open transactions by default.

For more information, see [Working with a self-managed Oracle database as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Self-Managed).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

## Oracle validation for LOB datatype and if binary reader is configured
<a name="CHAP_Tasks.AssessmentReport.Oracle.Lob"></a>

**API key:** `oracle-binary-lob-source-validation`

This premigration assessment validates if Oracle LogMiner is used for an Oracle database endpoint version 12c or later. AWS DMS does not support Oracle LogMiner for migrations of LOB columns from Oracle databases version 12c. This assessment also checks for the presence of LOB columns and provides appropriate recommendations.

To configure your migration to not use Oracle LogMiner, add the following configuration to your source endpoint:

```
useLogMinerReader=N;useBfile=Y;
```

For more information, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](CHAP_Source.Oracle.md#CHAP_Source.Oracle.CDC).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

## Validate if the database is CDB
<a name="CHAP_Tasks.AssessmentReport.Oracle.Cdb"></a>

**API key:** `oracle-validate-cdb`

This premigration assessment validates if the database is a container database. AWS DMS doesn't support the multi-tenant container root database (`CDB$ROOT`).

**Note**  
This assessment is only required for Oracle versions 12.1.0.1 or later. This assessment is not applicable for Oracle versions prior to 12.1.0.1.

For more information, see [Limitations on using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Limitations).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.
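
As a quick manual check, `V$DATABASE` reports whether the instance is a container database (the `CDB` column is available in Oracle 12c and later; `YES` indicates a container database):

```
SELECT cdb FROM v$database;
```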

## Check the Oracle Database Edition
<a name="CHAP_Tasks.AssessmentReport.Oracle.Express"></a>

**API key:** `oracle-check-cdc-support-express-edition`

This premigration assessment validates if the Oracle source database is Express Edition. AWS DMS doesn't support CDC for Oracle Express Edition (Oracle Database XE) version 18.0 and later.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

## Validate Oracle CDC method for DMS
<a name="CHAP_Tasks.AssessmentReport.Oracle.CdcConfigurations"></a>

**API key:** `oracle-recommendation-cdc-method`

This premigration assessment validates redo log generation for the last seven days, and makes a recommendation whether to use AWS DMS Binary Reader or Oracle LogMiner for CDC.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information about deciding which CDC method to use, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](CHAP_Source.Oracle.md#CHAP_Source.Oracle.CDC).

## Validate Oracle RAC configuration for DMS
<a name="CHAP_Tasks.AssessmentReport.Oracle.Rac"></a>

**API key:** `oracle-check-rac`

This premigration assessment validates whether the Oracle database is a Real Application Clusters (RAC) database. RAC databases must be configured correctly. If the database is based on RAC, we recommend that you use AWS DMS Binary Reader for CDC rather than Oracle LogMiner.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](CHAP_Source.Oracle.md#CHAP_Source.Oracle.CDC).
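
A simple manual check is to list the active instances; more than one row generally indicates a RAC configuration:

```
SELECT inst_id, instance_name FROM gv$instance;
```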

## Validate if DMS user has permissions on target
<a name="CHAP_Tasks.AssessmentReport.Oracle.TargetPermissions"></a>

**API key:** `oracle-validate-permissions-on-target`

This premigration assessment validates whether the DMS user has all the required permissions on the target database.

## Validate if supplemental logging is required for all columns
<a name="CHAP_Tasks.AssessmentReport.Oracle.SupplementalLoggingColumns"></a>

**API key:** `oracle-validate-supplemental-logging-all-columns`

This premigration assessment validates, for the tables mentioned in the task scope, whether supplemental logging has been added to all columns of tables without a primary or unique key. Without supplemental logging on all columns for a table lacking a primary or unique key, the before-and-after image of the data won't be available in the redo logs. DMS requires supplemental logging for tables without a primary or unique key to generate DML statements. 
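
For a table without a primary or unique key, supplemental logging on all columns can be added like this (`hr.audit_log` is a placeholder name):

```
ALTER TABLE hr.audit_log ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```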

## Validate if supplemental logging is enabled on tables with Primary or Unique keys
<a name="CHAP_Tasks.AssessmentReport.Oracle.SupplementalLoggingIndexes"></a>

**API key:** `oracle-validate-supplemental-logging-for-pk`

This premigration assessment validates whether supplemental logging is enabled for tables with a primary key or unique index, and also checks whether `AddSupplementalLogging` is enabled at the endpoint level. To ensure that DMS can replicate changes, either manually add supplemental logging at the table level based on the primary key or unique key, or use the endpoint setting `AddSupplementalLogging = true` with a DMS user that has the ALTER permission on every replicated table.
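
To add table-level supplemental logging based on the primary key manually, as an alternative to setting `AddSupplementalLogging = true` (`hr.employees` is a placeholder name):

```
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```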

## Validate if there are SecureFile LOBs and the task is configured for Full LOB mode
<a name="CHAP_Tasks.AssessmentReport.Oracle.SecureFileLOBs"></a>

**API key:** `oracle-validate-securefile-lobs`

This premigration assessment checks for the presence of SecureFile LOBs in tables within the task scope and verifies their LOB settings. Consider assigning LOB tables to a separate task to enhance performance, as running tasks in full LOB mode may result in slower performance. 

## Validate whether Function-Based Indexes are being used within the tables included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.FunctionBasedIndexes"></a>

**API key:** `oracle-validate-function-based-indexes`

This premigration assessment checks for function-based indexes on tables within the task scope. Note that AWS DMS doesn't support replicating function-based indexes. Consider creating the indexes after your migration on your target database.

## Validate whether global temporary tables are being used on the tables included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.GlobalTemporaryTables"></a>

**API key:** `oracle-validate-global-temporary-tables`

This premigration assessment checks whether global temporary tables are used within the task table-mapping scope. Note that AWS DMS doesn't support migrating or replicating global temporary tables.

## Validate whether index-organized tables with an overflow segment are being used on the tables included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.IndexOrganizedTables"></a>

**API key:** `oracle-validate-iot-overflow-segments`

This premigration assessment validates whether index-organized tables with an overflow segment are being used on the tables included in the task scope. AWS DMS doesn't support CDC for index-organized tables with an overflow segment.

## Validate if multilevel nesting tables are used on the tables included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.MultilevelNestingTables"></a>

**API key:** `oracle-validate-more-than-one-nesting-table-level`

This premigration assessment checks the nesting level of the nested table used on the task scope. AWS DMS supports only one level of table nesting.

## Validate if invisible columns are used on the tables included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.InvisibleColumns"></a>

**API key:** `oracle-validate-invisible-columns`

This premigration assessment validates whether the tables used in the task scope have invisible columns. AWS DMS doesn't migrate data from invisible columns in your source database. To migrate the columns that are invisible, you need to modify them to be visible.
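
For example, an invisible column can be made visible before the migration (`hr.employees` and `bonus_rate` are placeholder names):

```
ALTER TABLE hr.employees MODIFY (bonus_rate VISIBLE);
```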

## Validate if materialized views based on a ROWID column are used on the tables included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.RowIDMaterialViews"></a>

**API key:** `oracle-validate-rowid-based-materialized-views`

This premigration assessment validates whether the materialized views used in the migration are created based on the ROWID column. AWS DMS doesn't support the ROWID data type or materialized views based on a ROWID column.

## Validate if Active Data Guard DML Redirect feature is used.
<a name="CHAP_Tasks.AssessmentReport.Oracle.ActiveDataGuard"></a>

**API key:** `oracle-validate-adg-redirect-dml`

This premigration assessment validates whether the Active Data Guard DML Redirect feature is used. When using Oracle 19.0 as the source, AWS DMS doesn't support the Data Guard DML Redirect feature.

## Validate if Hybrid Partitioned Tables are used.
<a name="CHAP_Tasks.AssessmentReport.Oracle.HybridPartitionedTables"></a>

**API key:** `oracle-validate-hybrid-partitioned-tables`

This premigration assessment validates whether hybrid partitioned tables are used for the tables defined in the task scope.

## Validate if schema-only Oracle accounts are used
<a name="CHAP_Tasks.AssessmentReport.Oracle.SchemaOnly"></a>

**API key:** `oracle-validate-schema-only-accounts`

This premigration assessment validates whether schema-only accounts are found within the task scope.

## Validate if Virtual Columns are used
<a name="CHAP_Tasks.AssessmentReport.Oracle.VirtualColumns"></a>

**API key:** `oracle-validate-virtual-columns`

This premigration assessment validates whether the Oracle instance has virtual columns in tables within the task scope.

## Validate whether table names defined in the task scope contain apostrophes.
<a name="CHAP_Tasks.AssessmentReport.Oracle.NamesWithApostrophes"></a>

**API key:** `oracle-validate-names-with-apostrophes`

This premigration assessment validates whether the tables used in the task scope contain apostrophes. AWS DMS doesn't replicate tables with names containing apostrophes. If identified, consider renaming such tables. Alternatively, you could create a view or materialized view without apostrophes to load these tables.

## Validate whether the columns defined in the task scope have `XMLType`, `Long`, or `Long Raw` datatypes and verify the LOB mode configuration in the task settings.
<a name="CHAP_Tasks.AssessmentReport.Oracle.XMLLongRawDatatypes"></a>

**API key:** `oracle-validate-limited-lob-mode-for-longs`

This premigration assessment validates whether the tables defined in the task scope have the datatypes `XMLType`, `Long`, or `Long Raw`, and checks if the task setting is configured to use Limited Size LOB Mode. AWS DMS doesn't support replicating these datatypes using FULL LOB mode. Consider changing the task setting to use Limited Size LOB mode upon identifying tables with such datatypes.

## Validate whether the source Oracle version is supported by AWS DMS.
<a name="CHAP_Tasks.AssessmentReport.Oracle.SourceOracleVersion"></a>

**API key:** `oracle-validate-supported-versions-of-source`

This premigration assessment validates if the source Oracle instance version is supported by AWS DMS.

## Validate whether the target Oracle version is supported by AWS DMS.
<a name="CHAP_Tasks.AssessmentReport.Oracle.TargetOracleVersion"></a>

**API key:** `oracle-validate-supported-versions-of-target`

This premigration assessment validates if the target Oracle instance version is supported by AWS DMS.

## Validate whether the DMS user has the required permissions to use data validation.
<a name="CHAP_Tasks.AssessmentReport.Oracle.DataValidation"></a>

**API key:** `oracle-prerequisites-privileges-of-validation-feature`

This premigration assessment validates whether the DMS user has the necessary privileges to use DMS Data Validation. You can ignore enabling this validation if you do not intend to use data validation.

## Validate if the DMS user has permissions to use Binary Reader with Oracle ASM
<a name="CHAP_Tasks.AssessmentReport.Oracle.BinaryReaderPrivilegesASM"></a>

**API key:** `oracle-prerequisites-privileges-of-binary-reader-asm`

This premigration assessment validates whether the DMS user has the necessary privileges to use Binary Reader on the Oracle ASM instance. You can ignore enabling this assessment if your source is not an Oracle ASM instance or if you are not using Binary Reader for CDC.

## Validate if the DMS user has permissions to use Binary Reader with Oracle non-ASM
<a name="CHAP_Tasks.AssessmentReport.Oracle.BinaryReaderPrivilegesNonASM"></a>

**API key:** `oracle-prerequisites-privileges-of-binary-reader-non-asm`

This premigration assessment validates whether the DMS user has the necessary privileges to use Binary Reader on the Oracle non-ASM instance. This assessment is only valid if you have an Oracle non-ASM instance.

## Validate if the DMS user has permissions to use Binary Reader with CopyToTempFolder method
<a name="CHAP_Tasks.AssessmentReport.Oracle.BinaryReaderTemp"></a>

**API key:** `oracle-prerequisites-privileges-of-binary-reader-copy-to-temp-folder`

This premigration assessment validates whether the DMS user has the necessary privileges to use the Binary Reader with the 'Copy to Temp Folder' method. This assessment is relevant only if you are planning to use CopyToTempFolder to read CDC changes while using the Binary Reader, and have an ASM instance connected to the source. You can ignore enabling this assessment if you don't intend to use the CopyToTempFolder feature.

We recommend not using the CopyToTempFolder feature because it is deprecated.

## Validate if the DMS user has permissions to use Oracle Standby as a Source
<a name="CHAP_Tasks.AssessmentReport.Oracle.StandbySource"></a>

**API key:** `oracle-prerequisites-privileges-of-standby-as-source`

This premigration assessment validates whether the DMS user has the necessary privileges to use a StandBy Oracle Instance as a source. You can ignore enabling this assessment if you don't intend to use a StandBy Oracle Instance as a source.

## Validate if the DMS source is connected to an application container PDB
<a name="CHAP_Tasks.AssessmentReport.Oracle.AppPdb"></a>

**API key:** `oracle-check-app-pdb`

This premigration assessment validates whether the DMS source is connected to an application container PDB. DMS doesn't support replication from an application container PDB.

## Validate if the table has XML datatypes included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.XmlColumns"></a>

**API key:** `oracle-check-xml-columns`

This premigration assessment validates whether the tables used in the task scope have XML datatypes. It also checks if the task is configured for Limited LOB mode when the table contains an XML datatype. DMS only supports Limited LOB mode for migrating Oracle XML Columns.

## Validate whether archivelog mode is enabled on the source database.
<a name="CHAP_Tasks.AssessmentReport.Oracle.Archivelog"></a>

**API key:** `oracle-check-archivelog-mode`

This premigration assessment validates whether archivelog mode is enabled on the source database. Enabling archive log mode on the source database is necessary for DMS to replicate changes.
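
To check the log mode manually, query `V$DATABASE`; the value should be `ARCHIVELOG`:

```
SELECT log_mode FROM v$database;
```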

## Validate the archivelog retention for RDS Oracle.
<a name="CHAP_Tasks.AssessmentReport.Oracle.ArchivelogRetention"></a>

**API key:** `oracle-check-archivelog-retention-rds`

This premigration assessment validates whether archivelog retention on your RDS Oracle database is configured for at least 24 hours.
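
On Amazon RDS for Oracle, the retention can be set with the `rdsadmin` package. The following sketch sets it to 24 hours:

```
begin
    rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours', 24);
end;
/
```

You can confirm the resulting value with `rdsadmin.rdsadmin_util.show_configuration`.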

## Validate if the table has Extended datatypes included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.ExtendedColumns"></a>

**API key:** `oracle-check-extended-columns`

This premigration assessment validates whether the tables used in the task scope have extended datatypes. Note that extended datatypes are supported only with DMS version 3.5 onwards.

## Validate the length of the object name included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.30ByteLimit"></a>

**API key:** `oracle-check-object-30-bytes-limit`

This premigration assessment validates whether the length of the object name exceeds 30 bytes. DMS doesn't support long object names (over 30 bytes).

## Validate if the DMS source is connected to an Oracle PDB
<a name="CHAP_Tasks.AssessmentReport.Oracle.PDBEnabled"></a>

**API key:** `oracle-check-pdb-enabled`

This premigration assessment validates whether the DMS source is connected to a PDB. DMS supports CDC only when using the Binary Reader with Oracle PDB as the source. The assessment also evaluates if the task is configured to use the binary reader when DMS is connected to Oracle PDB. 

## Validate if the table has spatial columns included in the task scope.
<a name="CHAP_Tasks.AssessmentReport.Oracle.SpatialColumns"></a>

**API key:** `oracle-check-spatial-columns`

This premigration assessment validates whether the table has spatial columns included in the task scope. DMS supports Spatial datatypes only using Full LOB mode. The assessment also evaluates whether the task is configured to use Full LOB mode when DMS identifies spatial columns. 

## Validate if the DMS source is connected to an Oracle standby.
<a name="CHAP_Tasks.AssessmentReport.Oracle.StandbyDB"></a>

**API key:** `oracle-check-standby-db`

This premigration assessment validates whether the source is connected to an Oracle standby. DMS supports CDC only when using the binary reader with Oracle Standby as the source. The assessment also evaluates if the task is configured to use binary reader when DMS is connected to Oracle Standby. 

## Validate if the source database tablespace is encrypted using TDE.
<a name="CHAP_Tasks.AssessmentReport.Oracle.TDEEnabled"></a>

**API key:** `oracle-check-tde-enabled`

This premigration assessment validates whether the source has TDE Encryption enabled on the tablespace. DMS supports TDE only with encrypted tablespaces when using Oracle LogMiner for RDS Oracle.

## Validate if the source database uses Automatic Storage Management (ASM)
<a name="CHAP_Tasks.AssessmentReport.Oracle.ASMSource"></a>

**API key:** `oracle-check-asm`

This premigration assessment detects if your source database uses ASM. For optimal performance, configure Binary Reader with the `parallelASMReadThreads` and `readAheadBlocks` parameters in your endpoint settings. These settings enhance data extraction performance when working with ASM storage.

For more information, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](CHAP_Source.Oracle.md#CHAP_Source.Oracle.CDC).

## Validate if batch apply is enabled and whether the table on the target Oracle database has parallelism enabled at the table or index level
<a name="CHAP_Tasks.AssessmentReport.Oracle.batchapply"></a>

**API key:** `oracle-check-degree-of-parallelism`

This premigration assessment validates whether any table in the target database has parallelism enabled. Parallelism on the target database causes the batch process to fail. Therefore, disable parallelism at the table or index level when using the batch apply feature.
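
For example, parallelism can be disabled at the table and index level on the target before starting the task (`hr.orders` and `hr.orders_pk` are placeholder names):

```
ALTER TABLE hr.orders NOPARALLEL;
ALTER INDEX hr.orders_pk NOPARALLEL;
```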

## Recommend “Bulk Array Size” parameter by validating the tables in the task scope
<a name="CHAP_Tasks.AssessmentReport.Oracle.bulkarraysize"></a>

**API key:** `oracle-check-bulk-array-size`

This assessment recommends setting the `BulkArraySize` extra connection attribute (ECA) if no tables with LOB (large object) data types are found within the task scope. You can set `BulkArraySize` on the source or target endpoint to improve performance during the full-load phase of the migration.

## Validate if HandleCollationDiff task setting is Configured
<a name="CHAP_Tasks.AssessmentReport.Oracle.handlecollationdiff"></a>

**API key:** `oracle-check-handlecollationdiff`

This assessment validates whether the DMS task is configured for validation, and recommends the `HandleCollationDiff` task setting to avoid incorrect validation results when validating data between Oracle and PostgreSQL.

For more information, see [Data validation task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.DataValidation.md).

## Validate if table has primary key or unique index and its state is VALID when DMS validation is enabled
<a name="CHAP_Tasks.AssessmentReport.Oracle.pkvalidity"></a>

**API key:** `oracle-check-pk-validity`

Data validation requires that the table has a primary key or unique index on both source and target. 

For more information, see [AWS DMS data validation](CHAP_Validating.md).

## Validate if Binary Reader is used for Oracle Standby as a source
<a name="CHAP_Tasks.AssessmentReport.Oracle.binaryreader"></a>

**API key:** `oracle-check-binary-reader`

This assessment validates if the source database is a standby database and is using the Binary Reader for Change Data Capture (CDC). 

For more information, see [Using an Oracle database as a source for AWS DMS](CHAP_Source.Oracle.md).

## Validate if the AWS DMS user has the required directory permissions to replicate data from an Oracle RDS Standby database.
<a name="CHAP_Tasks.AssessmentReport.Oracle.directorypermissions"></a>

**API key:** `oracle-check-directory-permissions`

This assessment validates if the AWS DMS user has the required read privileges on the `ARCHIVELOG_DIR_%` and `ONLINELOG_DIR_%` directories when the source database is an Oracle RDS Standby. 

For more information, see [Working with an AWS-managed Oracle database as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Amazon-Managed).

## Validate the type of Oracle Standby used for replication
<a name="CHAP_Tasks.AssessmentReport.Oracle.physicalstandby"></a>

**API key:** `oracle-check-physical-standby-with-apply`

This assessment validates the type of Oracle standby database used for the AWS DMS replication. AWS DMS only supports Physical standby databases, which must be opened in Read Only mode with the redo logs being applied automatically. AWS DMS does not support Snapshot or Logical standby databases for replication. 

For more information, see [Using a self-managed Oracle Standby as a source with Binary Reader for CDC in AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Self-Managed.BinaryStandby).

## Validate if required directories are created for RDS Oracle standby
<a name="CHAP_Tasks.AssessmentReport.Oracle.rdsstandby"></a>

**API key:** `oracle-check-rds-standby-directories`

This assessment validates if the required Oracle directories are created for archive logs and online logs on the RDS Standby instance.

For more information, see [Using an Amazon RDS Oracle Standby (read replica) as a source with Binary Reader for CDC in AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Amazon-Managed.StandBy).

## Validate if Primary Key or Unique Index exists on target for Batch Apply
<a name="CHAP_Tasks.AssessmentReport.Oracle.batchapplypkui"></a>

**API key:** `oracle-check-batch-apply-target-pk-ui-absence`

Batch apply is only supported on tables with primary keys or unique indexes on the target table. Tables without a primary key or unique index cause the batch to fail, and changes are processed one by one. It is advisable to move such tables to their own tasks and use transactional apply mode instead. Alternatively, you can create a unique key on the target table.

For more information, see [Using an Oracle database as a target for AWS Database Migration Service](CHAP_Target.Oracle.md).

## Validate if both Primary Key and Unique index exist on target for Batch Apply
<a name="CHAP_Tasks.AssessmentReport.Oracle.batchapplypkuitarget"></a>

**API key:** `oracle-check-batch-apply-target-pk-ui-simultaneously`

Batch apply is only supported on tables with primary keys or unique indexes on the target table. Tables that have both a primary key and a unique index cause the batch to fail, and changes are processed one by one. It is advisable to move such tables to their own tasks and use transactional apply mode instead. Alternatively, you can drop the unique keys or the primary key on the target table and rebuild them after the migration.

For more information, see [Using an Oracle database as a target for AWS Database Migration Service](CHAP_Target.Oracle.md).

## Validate if unsupported HCC levels are used for Full Load
<a name="CHAP_Tasks.AssessmentReport.Oracle.hccfullload"></a>

**API key:** `oracle-check-binary-reader-hcc-full-load`

When the Oracle source endpoint is configured to use Binary Reader, the Query Low level of the HCC compression method is supported for full-load tasks only.

For more information, see [Supported compression methods for using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Compression).

## Validate if unsupported HCC levels are used for Full Load with CDC
<a name="CHAP_Tasks.AssessmentReport.Oracle.hccandcdc"></a>

**API key:** `oracle-check-binary-reader-hcc-full-load-and-cdc`

When the Oracle source endpoint is configured to use Binary Reader, HCC with the Query Low level is supported for full-load tasks only.

For more information, see [Supported compression methods for using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Compression).

## Validate if unsupported HCC compression used for CDC
<a name="CHAP_Tasks.AssessmentReport.Oracle.binaryreaderhcccdc"></a>

**API key:** `oracle-check-binary-reader-hcc-cdc`

When the Oracle source endpoint is configured to use Binary Reader, the Query Low level of HCC compression isn't supported for tasks that include CDC.

For more information, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](CHAP_Source.Oracle.md#CHAP_Source.Oracle.CDC).

## CDC Recommendation based on source compression method
<a name="CHAP_Tasks.AssessmentReport.Oracle.cdcmethodbycompression"></a>

**API key:** `oracle-recommend-cdc-method-by-compression`

This assessment detects compressed objects in the source database. See the results section of the specific assessment for further recommendations.

For more information, see [Using Oracle LogMiner or AWS DMS Binary Reader for CDC](CHAP_Source.Oracle.md#CHAP_Source.Oracle.CDC).

## Check if batch apply is enabled and validate whether the table has more than 999 columns
<a name="CHAP_Tasks.AssessmentReport.Oracle.batchapplylob"></a>

**API key:** `oracle-batch-apply-lob-999`

DMS uses the `2 * columns_in_original_table + columns_in_primary_key` formula to determine the effective number of columns in a table. Based on this formula, this assessment identifies tables with more than 999 columns. Exceeding the limit causes the batch process to fail and switch to one-by-one mode.

For more information, see [Limitations on Oracle as a target for AWS Database Migration Service](CHAP_Target.Oracle.md#CHAP_Target.Oracle.Limitations).

## Check Transformation Rule for Digits Randomize
<a name="CHAP_Tasks.AssessmentReport.Oracle.digits.randomize"></a>

**API key**: `oracle-datamasking-digits-randomize`

This assessment validates whether columns used in table mappings are compatible with the Digits Randomize transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, because the Digits Randomize transformation doesn't guarantee uniqueness.

## Check Transformation Rule for Digits mask
<a name="CHAP_Tasks.AssessmentReport.Oracle.digits.mask"></a>

**API key**: `oracle-datamasking-digits-mask`

This assessment validates whether any columns used in the table mapping are not supported by the Digits Mask transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying Digits Mask transformations to such columns could cause DMS task failures since uniqueness cannot be guaranteed.

## Check Transformation Rule for Hashing mask
<a name="CHAP_Tasks.AssessmentReport.Oracle.hash.mask"></a>

**API key**: `oracle-datamasking-hash-mask`

This assessment validates whether any of the columns used in the table mapping are not supported by the Hashing Mask transformation rule. It also checks if the length of the source column exceeds 64 characters. Ideally, the target column length should be greater than 64 characters to support hash masking. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, because applying Hashing Mask transformations to such columns doesn't guarantee uniqueness.

## Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Oracle.all.digit.random"></a>

**API key**: `all-to-all-validation-with-datamasking-digits-randomize`

This premigration assessment verifies that Data Validation setting and Data Masking Digit randomization are not simultaneously enabled, as these features are incompatible.

## Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Oracle.all.hash.mask"></a>

**API key**: `all-to-all-validation-with-datamasking-hash-mask`

This premigration assessment verifies that Data Validation setting and Data Masking Hashing mask are not simultaneously enabled, as these features are incompatible.

## Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Oracle.all.digit.mask"></a>

**API key**: `all-to-all-validation-with-digit-mask`

This premigration assessment verifies that Data Validation setting and Data Masking Digit mask are not simultaneously enabled, as these features are incompatible.

## Validate that replication to a streaming target does not include LOBs or extended data type columns
<a name="CHAP_Tasks.AssessmentReport.Oracle.streaming-target"></a>

**API key**: `oracle-validate-lob-to-streaming-target`

This assessment identifies potential data loss when migrating LOB or extended data types to streaming target endpoints (such as Amazon S3, Amazon Kinesis, or Apache Kafka). The Oracle database does not track changes to these data types in its redo log files, causing AWS DMS to write `NULL` values to the streaming target. To prevent data loss, you can implement a `BEFORE` trigger on the source database that forces Oracle to log these changes.

## Validate that CDC-only task is configured to use the `OpenTransactionWindow` endpoint setting
<a name="CHAP_Tasks.AssessmentReport.Oracle.open.tx.window"></a>

**API key**: `oracle-check-cdc-open-tx-window`

For CDC-only tasks, use `OpenTransactionWindow` to avoid missing data. For more information, see [Creating tasks for ongoing replication using AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html).
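As a sketch, assuming the Oracle source endpoint exposes `OpenTransactionWindow` in its Oracle settings as a retrospective window (in minutes) to scan for open transactions when the CDC-only task starts, the setting might be supplied like this:

```json
{
  "OracleSettings": {
    "OpenTransactionWindow": 15
  }
}
```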

## Validate that at least one selected object exists in the source database
<a name="CHAP_Tasks.AssessmentReport.Oracle.all.check.source.selection.rules"></a>

**API key**: `all-check-source-selection-rules`

This premigration assessment verifies that at least one object specified in the selection rules exists in the source database, including pattern matching for wildcard-based rules.
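For reference, a wildcard-based selection rule that this assessment would match against the source catalog might look like the following fragment (the schema name `HR` is a placeholder; `%` matches all table names):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-hr-tables",
      "object-locator": {
        "schema-name": "HR",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```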

## Validate that target foreign key constraints are disabled for migration
<a name="CHAP_Tasks.AssessmentReport.Oracle.target.foreign.key.constraints.check"></a>

**API key**: `oracle-target-foreign-key-constraints-check`

This premigration assessment detects active foreign key constraints on the target database that can cause migration failures (ORA-02291).

## Validate that the Oracle database and AWS DMS versions are compatible
<a name="CHAP_Tasks.AssessmentReport.Oracle.dms.compatibility.version.check"></a>

**API key**: `oracle-dms-compatibility-version-check`

This premigration assessment detects if your Oracle database version is incompatible with your AWS DMS version. This mismatch can cause task failures due to unsupported Oracle Redo compatibility settings.

## Validate that secondary constraints and indexes (non-primary) are present in the source database
<a name="CHAP_Tasks.AssessmentReport.Oracle.all.check.secondary.constraints"></a>

**API key**: `all-check-secondary-constraints`

This premigration assessment verifies that secondary constraints and indexes (foreign keys, check constraints, non-clustered indexes) are present in the source database.

## Validate that session timeout settings (`IDLE_TIME`) are set to `UNLIMITED`
<a name="CHAP_Tasks.AssessmentReport.Oracle.check.idle.time"></a>

**API key**: `oracle-check-idle-time`

This premigration assessment verifies that the Oracle database `IDLE_TIME` parameter is set to `UNLIMITED` for the AWS DMS user. Limited session timeout can cause migration task failures due to connection timeouts.

## Validate that the AWS DMS user has all required permissions on the source database
<a name="CHAP_Tasks.AssessmentReport.Oracle.validate.permissions.on.source"></a>

**API key**: `oracle-validate-permissions-on-source`

This premigration assessment verifies that the AWS DMS user has been configured with all required permissions on the source database.

## Validate that `XMLTYPE` or LOB columns exist when Oracle LogMiner is used
<a name="CHAP_Tasks.AssessmentReport.Oracle.update.lob.columns"></a>

**API key**: `oracle-update-lob-columns`

This premigration assessment warns that `XMLTYPE` or LOB columns exist in the source database when Oracle LogMiner is used.

## Validate that the target endpoint is not a read replica
<a name="CHAP_Tasks.AssessmentReport.Oracle.read.replica"></a>

**API key**: `all-check-target-read-replica`

This premigration assessment verifies that the target endpoint is not configured as a read replica. AWS DMS requires write access to the target database and cannot replicate to read-only replicas.

## Validate that the Oracle target does not have CONTEXT indexes when using direct path load
<a name="CHAP_Tasks.AssessmentReport.Oracle.directpath.index"></a>

**API key**: `oracle-check-direct-path-context-indexes`

This premigration assessment validates whether target Oracle tables contain CONTEXT indexes. AWS DMS does not support CONTEXT indexes when using direct path in full-load mode. To avoid failure, disable direct path full-load mode or remove CONTEXT indexes before the load.

For more information, see [Limitations on using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Limitations).

## Validate that `FailTasksOnLobTruncation` is enabled when using limited LOB mode with existing LOB columns
<a name="CHAP_Tasks.AssessmentReport.Oracle.FailTasksOnLobTruncation"></a>

**API key**: `oracle-pg-lobs-with-failtasksonlobtruncation`

This premigration assessment validates whether the `FailTasksOnLobTruncation` extra connection attribute is set to true when LOB columns are present in selected tables and limited LOB mode is specified. This setting fails the task if any LOB data is truncated during migration, preventing silent data loss.

For more information, see [Endpoint settings when using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.ConnectionAttrib).
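As an illustrative fragment, assuming the Oracle endpoint accepts `FailTasksOnLobTruncation` as a Boolean setting, enabling it might look like this:

```json
{
  "OracleSettings": {
    "FailTasksOnLobTruncation": true
  }
}
```

With this setting enabled, the task fails rather than silently truncating LOB data that exceeds the limited LOB size.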

## Validate that `EnableHomogenousPartitionOps` endpoint setting is enabled
<a name="CHAP_Tasks.AssessmentReport.Oracle.Homogenous.partition"></a>

**API key**: `oracle-homogenous-partition-ops`

This premigration assessment validates that the `EnableHomogenousPartitionOps` endpoint setting is enabled for Oracle homogeneous migrations. This setting is required for AWS DMS to replicate Oracle partition and subpartition DDL operations.

For more information, see [Limitations on using Oracle as a source for AWS DMS](CHAP_Source.Oracle.md#CHAP_Source.Oracle.Limitations).
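As a sketch, assuming the Oracle endpoint accepts `EnableHomogenousPartitionOps` as a Boolean setting, enabling it might look like this:

```json
{
  "OracleSettings": {
    "EnableHomogenousPartitionOps": true
  }
}
```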

# SQL Server assessments
<a name="CHAP_Tasks.AssessmentReport.SqlServer"></a>

This section describes individual premigration assessments for migration tasks that use a Microsoft SQL Server source endpoint.

**Topics**
+ [Validate if secondary indexes are enabled on the target database during full-load](#CHAP_Tasks.AssessmentReport.SqlServer.SecondaryIndexesEnabled)
+ [Validate that limited LOB mode only is used when `BatchApplyEnabled` is set to true](#CHAP_Tasks.AssessmentReport.SqlServer.LimitedLOBMode)
+ [Validate if target database has any triggers enabled on tables in the scope of the task](#CHAP_Tasks.AssessmentReport.SqlServer.TargetDatabaseTriggersEnabled)
+ [Check if tables in task scope contain computed columns](#CHAP_Tasks.AssessmentReport.SqlServer.ComputedColumns)
+ [Check if tables in task scope have column store indexes](#CHAP_Tasks.AssessmentReport.SqlServer.ColumnstoreIndexes)
+ [Check if memory optimized tables are a part of the task scope](#CHAP_Tasks.AssessmentReport.SqlServer.MemoryOptimized)
+ [Check if temporal tables are a part of the task scope](#CHAP_Tasks.AssessmentReport.SqlServer.TemporalTables)
+ [Check if delayed durability is enabled at the database level](#CHAP_Tasks.AssessmentReport.SqlServer.DelayedDurability)
+ [Check if accelerated data recovery is enabled at the database level](#CHAP_Tasks.AssessmentReport.SqlServer.AcceleratedRecovery)
+ [Check if table mapping has more than 10K tables with primary keys](#CHAP_Tasks.AssessmentReport.SqlServer.TableMapping)
+ [Check if the source database has tables or schema names with special characters](#CHAP_Tasks.AssessmentReport.SqlServer.SpecialCharacters)
+ [Check if the source database has column names with masked data](#CHAP_Tasks.AssessmentReport.SqlServer.MaskedData)
+ [Check if the source database has encrypted backups](#CHAP_Tasks.AssessmentReport.SqlServer.EncryptedBackups)
+ [Check if the source database has backups stored at a URL or on Windows Azure](#CHAP_Tasks.AssessmentReport.SqlServer.RemoteBackups)
+ [Check if the source database has backups on multiple disks](#CHAP_Tasks.AssessmentReport.SqlServer.MultipleDisks)
+ [Check if the source database has at least one full backup](#CHAP_Tasks.AssessmentReport.SqlServer.FullBackup)
+ [Check if the source database has sparse columns and columnar structure compression](#CHAP_Tasks.AssessmentReport.SqlServer.SparseOrStructureCompression)
+ [Check if the source database instance has server level auditing for SQL Server 2008 or SQL Server 2008 R2](#CHAP_Tasks.AssessmentReport.SqlServer.Audit)
+ [Check if the source database has geometry columns for full LOB mode](#CHAP_Tasks.AssessmentReport.SqlServer.GeometryColumns)
+ [Check if the source database has columns with the Identity property](#CHAP_Tasks.AssessmentReport.SqlServer.Identity)
+ [Check if the DMS user has FULL LOAD permissions](#CHAP_Tasks.AssessmentReport.SqlServer.FullLoadPermissions)
+ [Check if the DMS user has FULL LOAD and CDC or CDC only permissions](#CHAP_Tasks.AssessmentReport.SqlServer.FullLoadCDCPermissions)
+ [Check whether MS-Replication is enabled for CDC on on-premises or EC2 databases](#CHAP_Tasks.AssessmentReport.SqlServer.IgnoreMsReplicationEnablement)
+ [Check if the DMS user has the VIEW DEFINITION permission](#CHAP_Tasks.AssessmentReport.SqlServer.ViewDefinition)
+ [Check if the DMS user has the VIEW DATABASE STATE permission on the MASTER database for users without the Sysadmin role](#CHAP_Tasks.AssessmentReport.SqlServer.ViewDatabaseState)
+ [Check if the DMS user has the VIEW SERVER STATE permission](#CHAP_Tasks.AssessmentReport.SqlServer.)
+ [Validate if text repl size parameter is not unlimited](#CHAP_Tasks.AssessmentReport.Sqlserver.replsizeparameter)
+ [Validate if Primary Key or Unique Index exist on target for Batch Apply](#CHAP_Tasks.AssessmentReport.Sqlserver.batchapply)
+ [Validate if both Primary Key and Unique index exist on target when batch apply enabled](#CHAP_Tasks.AssessmentReport.Sqlserver.batchapplysimultaneously)
+ [Validate if table has primary key or unique index when DMS validation is enabled](#CHAP_Tasks.AssessmentReport.Sqlserver.dmsvalidation)
+ [Validate if AWS DMS user has necessary privileges to the target](#CHAP_Tasks.AssessmentReport.Sqlserver.dmsprivileges)
+ [Recommendation on using MaxFullLoadSubTasks setting](#CHAP_Tasks.AssessmentReport.Sqlserver.maxfullloadsubtask)
+ [Check Transformation Rule for Digits Randomize](#CHAP_Tasks.AssessmentReport.Sqlserver.gigits.randomise)
+ [Check Transformation Rule for Digits mask](#CHAP_Tasks.AssessmentReport.Sqlserver.digits.mask)
+ [Check Transformation Rule for Hashing mask](#CHAP_Tasks.AssessmentReport.Sqlserver.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Sqlserver.all.digits.random)
+ [Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Sqlserver.all.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Sqlserver.all.digit.mask)
+ [Validate that at least one selected object exists in the source database](#CHAP_Tasks.AssessmentReport.Sqlserver.selection.rules)
+ [Validate that secondary constraints and indexes (non-primary) are present in the source database](#CHAP_Tasks.AssessmentReport.Sqlserver.secondary.constraints)
+ [Validate that target endpoint is not a read replica](#CHAP_Tasks.AssessmentReport.Sqlserver.target.replica)
+ [Validate backup chain](#CHAP_Tasks.AssessmentReport.Sqlserver.backup.chain)
+ [Check database user permissions for applying `EXCLUSIVE_AUTOMATIC_TRUNCATION` safeguard policy](#CHAP_Tasks.AssessmentReport.Sqlserver.safeguard.permission)
+ [Validate that secondary node connection and required safeguard attributes for AWS DMS source endpoint](#CHAP_Tasks.AssessmentReport.Sqlserver.node.safeguard.policy)
+ [Validate that endpoint has all required extra connection attributes (ECAs) when AWS DMS is connected to secondary node](#CHAP_Tasks.AssessmentReport.Sqlserver.node.without.eca)

## Validate if secondary indexes are enabled on the target database during full-load
<a name="CHAP_Tasks.AssessmentReport.SqlServer.SecondaryIndexesEnabled"></a>

**API key:** `sqlserver-check-secondary-indexes`

This premigration assessment validates whether secondary indexes are enabled during full-load on the target database. We recommend that you disable or remove secondary indexes.

For more information, see [Best practices for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html).

## Validate that limited LOB mode only is used when `BatchApplyEnabled` is set to true
<a name="CHAP_Tasks.AssessmentReport.SqlServer.LimitedLOBMode"></a>

**API key:** `sqlserver-batch-apply-lob-mode`

This premigration assessment validates whether the DMS task includes LOB columns. If LOB columns are included in the scope of the task, you must use `BatchApplyEnabled` together with limited LOB mode only. We recommend that you create separate tasks for such tables and use transactional apply mode instead.

For more information, see [How can I use the DMS batch apply feature to improve CDC replication performance?](https://repost.aws/knowledge-center/dms-batch-apply-cdc-replication).
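For context, the task-settings combination that this assessment checks looks like the following fragment, where `LimitedSizeLobMode` and `LobMaxSize` (in KB; the value 32 here is illustrative) enable limited LOB mode alongside batch apply:

```json
{
  "TargetMetadata": {
    "BatchApplyEnabled": true,
    "LimitedSizeLobMode": true,
    "LobMaxSize": 32
  }
}
```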

## Validate if target database has any triggers enabled on tables in the scope of the task
<a name="CHAP_Tasks.AssessmentReport.SqlServer.TargetDatabaseTriggersEnabled"></a>

**API key:** `sqlserver-check-for-triggers`

AWS DMS identified triggers in the target database that can impact the performance of the full-load DMS task and latency on target. Make sure that these triggers are disabled during a task run and enabled during the cut-over period.

## Check if tables in task scope contain computed columns
<a name="CHAP_Tasks.AssessmentReport.SqlServer.ComputedColumns"></a>

**API key:** `sqlserver-check-for-computed-fields`

This premigration assessment checks for the presence of computed columns. AWS DMS doesn't support replicating changes from SQL Server computed columns.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if tables in task scope have column store indexes
<a name="CHAP_Tasks.AssessmentReport.SqlServer.ColumnstoreIndexes"></a>

**API key:** `sqlserver-check-for-columnstore-indexes`

This premigration assessment checks for the presence of tables with columnstore indexes. AWS DMS doesn't support replicating changes from SQL Server tables with columnstore indexes.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if memory optimized tables are a part of the task scope
<a name="CHAP_Tasks.AssessmentReport.SqlServer.MemoryOptimized"></a>

**API key:** `sqlserver-check-for-memory-optimized-tables`

This premigration assessment checks for the presence of memory-optimized tables. AWS DMS doesn't support replicating changes from memory-optimized tables.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if temporal tables are a part of the task scope
<a name="CHAP_Tasks.AssessmentReport.SqlServer.TemporalTables"></a>

**API key:** `sqlserver-check-for-temporal-tables`

This premigration assessment checks for the presence of temporal tables. AWS DMS doesn't support replicating changes from temporal tables.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if delayed durability is enabled at the database level
<a name="CHAP_Tasks.AssessmentReport.SqlServer.DelayedDurability"></a>

**API key:** `sqlserver-check-for-delayed-durability`

This premigration assessment checks for the presence of delayed durability. AWS DMS doesn't support replicating changes from transactions that use delayed durability.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if accelerated data recovery is enabled at the database level
<a name="CHAP_Tasks.AssessmentReport.SqlServer.AcceleratedRecovery"></a>

**API key:** `sqlserver-check-for-accelerated-data-recovery`

This premigration assessment checks for the presence of accelerated data recovery. AWS DMS doesn't support replicating changes from databases with accelerated data recovery.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if table mapping has more than 10K tables with primary keys
<a name="CHAP_Tasks.AssessmentReport.SqlServer.TableMapping"></a>

**API key:** `sqlserver-large-number-of-tables`

This premigration assessment checks for the presence of more than 10,000 tables with primary keys. Databases configured with MS-Replication can experience task failures if there are too many tables with primary keys.

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration.

For more information about configuring MS-Replication, see [Capturing data changes for ongoing replication from SQL Server](CHAP_Source.SQLServer.CDC.md).

## Check if the source database has tables or schema names with special characters.
<a name="CHAP_Tasks.AssessmentReport.SqlServer.SpecialCharacters"></a>

**API key:** `sqlserver-check-for-special-characters`

This premigration assessment verifies whether the source database has table or schema names that include a character from the following set:

```
\\ -- \n \" \b \r ' \t ;
```

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the source database has column names with masked data
<a name="CHAP_Tasks.AssessmentReport.SqlServer.MaskedData"></a>

**API key:** `sqlserver-check-for-masked-data`

This premigration assessment verifies whether the source database has masked data. AWS DMS migrates masked data without masking.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the source database has encrypted backups
<a name="CHAP_Tasks.AssessmentReport.SqlServer.EncryptedBackups"></a>

**API key:** `sqlserver-check-for-encrypted-backups`

This premigration assessment verifies whether the source database has encrypted backups.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the source database has backups stored at a URL or on Windows Azure.
<a name="CHAP_Tasks.AssessmentReport.SqlServer.RemoteBackups"></a>

**API key:** `sqlserver-check-for-backup-url`

This premigration assessment verifies whether the source database has backups stored at a URL or on Windows Azure.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the source database has backups on multiple disks
<a name="CHAP_Tasks.AssessmentReport.SqlServer.MultipleDisks"></a>

**API key:** `sqlserver-check-for-backup-multiple-stripes`

This premigration assessment verifies whether the source database has backups on multiple disks.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the source database has at least one full backup
<a name="CHAP_Tasks.AssessmentReport.SqlServer.FullBackup"></a>

**API key:** `sqlserver-check-for-full-backup`

This premigration assessment verifies whether the source database has at least one full backup. SQL Server must be configured for full backup, and you must run a backup before replicating data.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the source database has sparse columns and columnar structure compression.
<a name="CHAP_Tasks.AssessmentReport.SqlServer.SparseOrStructureCompression"></a>

**API key:** `sqlserver-check-for-sparse-columns`

This premigration assessment verifies whether the source database has sparse columns and columnar structure compression. DMS doesn't support sparse columns and columnar structure compression.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the source database instance has server level auditing for SQL Server 2008 or SQL Server 2008 R2
<a name="CHAP_Tasks.AssessmentReport.SqlServer.Audit"></a>

**API key:** `sqlserver-check-for-audit-2008`

This premigration assessment verifies whether the source database has enabled server-level auditing for SQL Server 2008 or SQL Server 2008 R2. DMS has a related known issue with SQL Server 2008 and 2008 R2.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the source database has geometry columns for full LOB mode
<a name="CHAP_Tasks.AssessmentReport.SqlServer.GeometryColumns"></a>

**API key:** `sqlserver-check-for-geometry-columns`

This premigration assessment verifies whether the source database has geometry columns for full Large Object (LOB) mode when using SQL Server as a source. We recommend using limited LOB mode or setting the `InlineLobMaxSize` task setting to use inline LOB mode when your database includes geometry columns.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).
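As a sketch of the recommended alternative, `InlineLobMaxSize` takes effect when full LOB mode is enabled; LOBs smaller than the specified size (in KB; 64 here is illustrative) are then migrated inline:

```json
{
  "TargetMetadata": {
    "FullLobMode": true,
    "InlineLobMaxSize": 64
  }
}
```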

## Check if the source database has columns with the Identity property.
<a name="CHAP_Tasks.AssessmentReport.SqlServer.Identity"></a>

**API key:** `sqlserver-check-for-identity-columns`

This premigration assessment verifies whether the source database has a column with the `IDENTITY` property. DMS doesn't migrate this property to the corresponding target database column.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the DMS user has FULL LOAD permissions
<a name="CHAP_Tasks.AssessmentReport.SqlServer.FullLoadPermissions"></a>

**API key:** `sqlserver-check-user-permission-for-full-load-only`

This premigration assessment verifies whether the DMS task's user has permissions to run the task in FULL LOAD mode.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the DMS user has FULL LOAD and CDC or CDC only permissions
<a name="CHAP_Tasks.AssessmentReport.SqlServer.FullLoadCDCPermissions"></a>

**API key:** `sqlserver-check-user-permission-for-cdc`

This premigration assessment verifies whether the DMS User has permissions to run the task in `FULL LOAD and CDC` or `CDC only` modes.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check whether MS-Replication is enabled for CDC on on-premises or EC2 databases.
<a name="CHAP_Tasks.AssessmentReport.SqlServer.IgnoreMsReplicationEnablement"></a>

**API key:** `sqlserver-check-attribute-for-enable-ms-cdc-onprem`

This premigration assessment checks whether MS-Replication is enabled for CDC on on-premises or Amazon EC2 databases.

For more information about configuring MS-Replication, see [Capturing data changes for self-managed SQL Server on-premises or on Amazon EC2](CHAP_Source.SQLServer.CDC.md#CHAP_Source.SQLServer.CDC.Selfmanaged).

## Check if the DMS user has the VIEW DEFINITION permission.
<a name="CHAP_Tasks.AssessmentReport.SqlServer.ViewDefinition"></a>

**API key:** `sqlserver-check-user-permission-on-view-definition`

This premigration assessment verifies whether the user specified in the endpoint settings has the `VIEW DEFINITION` permission. DMS requires the `VIEW DEFINITION` permission to view object definitions.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the DMS user has the VIEW DATABASE STATE permission on the MASTER database for users without the Sysadmin role.
<a name="CHAP_Tasks.AssessmentReport.SqlServer.ViewDatabaseState"></a>

**API key:** `sqlserver-check-user-permission-on-view-database-state`

This premigration assessment verifies whether the user specified in the endpoint settings has the `VIEW DATABASE STATE` permission on the MASTER database. For users without sysadmin privileges, AWS DMS requires this permission to access database objects in the MASTER database, to create functions, certificates, and logins, and to grant credentials.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Check if the DMS user has the VIEW SERVER STATE permission.
<a name="CHAP_Tasks.AssessmentReport.SqlServer."></a>

**API key:** `sqlserver-check-user-permission-on-view-server-state`

This premigration assessment checks if the user specified in the extra connection attributes (ECA) has the `VIEW SERVER STATE` permission. `VIEW SERVER STATE` is a server-level permission that allows a user to view server-wide information and state. It provides access to the dynamic management views (DMVs) and dynamic management functions (DMFs) that expose information about the SQL Server instance. The DMS user needs this permission to access CDC resources and to run a DMS task in `FULL LOAD and CDC` or `CDC only` mode.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Validate if text repl size parameter is not unlimited
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.replsizeparameter"></a>

**API key:** `sqlserver-check-for-max-text-repl-size`

Setting the `max text repl size` parameter to a limited value on the database can cause data migration errors for LOB columns. We strongly recommend setting it to -1 (unlimited).

For more information, see [Troubleshooting issues with Microsoft SQL Server](CHAP_Troubleshooting.md#CHAP_Troubleshooting.SQLServer).

## Validate if Primary Key or Unique Index exist on target for Batch Apply
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.batchapply"></a>

**API key:** `sqlserver-check-batch-apply-target-pk-ui-absence`

Batch apply is only supported on tables with Primary Keys or Unique Indexes on the target table. Tables without Primary Keys or Unique Indexes cause the batch to fail, and changes are processed one by one. We recommend moving such tables to their own tasks and using transactional apply mode instead. Alternatively, you can create a unique key on the target table.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Validate if both Primary Key and Unique index exist on target when batch apply enabled
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.batchapplysimultaneously"></a>

**API key:** `sqlserver-check-batch-apply-target-pk-ui-simultaneously`

Batch apply is only supported on tables with Primary Keys or Unique Indexes on the target table. Tables that have both a Primary Key and a Unique Index simultaneously cause the batch to fail, and changes are processed one by one. We recommend moving such tables to their own tasks and using transactional apply mode instead. Alternatively, you can drop the unique keys or primary key on the target table and rebuild them after migrating.

For more information, see [Limitations on using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.Limitations).

## Validate if table has primary key or unique index when DMS validation is enabled
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.dmsvalidation"></a>

**API key:** `sqlserver-check-pk-validity`

Data validation requires that the table has a primary key or unique index on both source and target. 

For more information, see [AWS DMS data validation](CHAP_Validating.md).

## Validate if AWS DMS user has necessary privileges to the target
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.dmsprivileges"></a>

**API key:** `sqlserver-check-target-privileges`

The AWS DMS user must have at least the `db_owner` role on the target database.

For more information, see [Security requirements when using SQL Server as a target for AWS Database Migration Service](CHAP_Target.SQLServer.md#CHAP_Target.SQLServer.Security).

## Recommendation on using MaxFullLoadSubTasks setting
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.maxfullloadsubtask"></a>

**API key:** `sqlserver-tblnum-for-max-fullload-subtasks`

This assessment checks the number of tables included in the task and recommends increasing the `MaxFullLoadSubTasks` parameter for optimal performance during the full load process. By default, AWS DMS migrates 8 tables simultaneously. Changing the `MaxFullLoadSubTasks` parameter to a higher value can improve full-load performance.

For more information, see [Full-load task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.md).
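For example, raising the limit from the default of 8 to 16 parallel table loads is a task-settings fragment like the following (the value 16 is illustrative; tune it to your replication instance's resources):

```json
{
  "FullLoadSettings": {
    "MaxFullLoadSubTasks": 16
  }
}
```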

## Check Transformation Rule for Digits Randomize
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.gigits.randomise"></a>

**API key**: `sqlserver-datamasking-digits-randomize`

This assessment validates whether columns used in table mappings are compatible with the Digits Randomize transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying digits randomize transformations does not guarantee any uniqueness.

## Check Transformation Rule for Digits mask
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.digits.mask"></a>

**API key**: `sqlserver-datamasking-digits-mask`

This assessment validates whether any columns used in the table mapping are not supported by the Digits Mask transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying Digits Mask transformations to such columns could cause DMS task failures since uniqueness cannot be guaranteed.

## Check Transformation Rule for Hashing mask
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.hash.mask"></a>

**API key**: `sqlserver-datamasking-hash-mask`

This assessment validates whether any of the columns used in the table mapping are not supported by the Hashing Mask transformation rule. It also checks if the length of the source column exceeds 64 characters. Ideally, the target column length should be greater than 64 characters to support hash masking. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying hashing mask transformations does not guarantee uniqueness.
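
The 64-character requirement follows from the digest length. Assuming a SHA-256-style hash (the exact algorithm DMS uses is an assumption here), the masked value is always a 64-character hex string regardless of the source value's length:

```python
import hashlib


def hash_mask(value: str) -> str:
    """Illustrative hashing mask: replace the value with a fixed-length hex digest."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()


# Every masked value is exactly 64 characters, so the target
# column must be able to hold at least 64 characters.
```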

## Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.all.digits.random"></a>

**API key**: `all-to-all-validation-with-datamasking-digits-randomize`

This premigration assessment verifies that Data Validation setting and Data Masking Digit randomization are not simultaneously enabled, as these features are incompatible.

## Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.all.hash.mask"></a>

**API key**: `all-to-all-validation-with-datamasking-hash-mask`

This premigration assessment verifies that Data Validation setting and Data Masking Hashing mask are not simultaneously enabled, as these features are incompatible.

## Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.all.digit.mask"></a>

**API key**: `all-to-all-validation-with-digit-mask`

This premigration assessment verifies that Data Validation setting and Data Masking Digit mask are not simultaneously enabled, as these features are incompatible.

## Validate that at least one selected object exists in the source database
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.selection.rules"></a>

**API key**: `all-check-source-selection-rules`

This premigration assessment verifies that at least one object specified in the selection rules exists in the source database, including pattern matching for wildcard-based rules.

## Validate that secondary constraints and indexes (non-primary) are present in the source database
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.secondary.constraints"></a>

**API key**: `all-check-secondary-constraints`

This premigration assessment verifies that secondary constraints and indexes (foreign keys, check constraints, non-clustered indexes) are present in the source database.

## Validate that target endpoint is not a read replica
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.target.replica"></a>

**API key**: `all-check-target-read-replica`

This premigration assessment verifies that the target endpoint is not configured as a read replica. AWS DMS requires write access to the target database and cannot replicate to read-only replicas.

## Validate backup chain
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.backup.chain"></a>

**API key**: `sqlserver-check-for-backup-broken-chain`

This premigration assessment verifies that the source database backup chain is not broken. A broken backup chain can prevent AWS DMS from accessing transaction logs required for CDC replication.

## Check database user permissions for applying `EXCLUSIVE_AUTOMATIC_TRUNCATION` safeguard policy
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.safeguard.permission"></a>

**API key**: `sqlserver-safeguard-permissions`

This premigration assessment verifies whether the database user has the required permissions to use the `EXCLUSIVE_AUTOMATIC_TRUNCATION` safeguard policy. SELECT permissions on the `dbo.syscategories` and `dbo.sysjobs` system objects must be granted to the AWS DMS user.

For more information, see [Endpoint settings when using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.ConnectionAttrib).

## Validate secondary node connection and required safeguard attributes for the AWS DMS source endpoint
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.node.safeguard.policy"></a>

**API key**: `sqlserver-check-sec-node-sg-policy`

This premigration assessment verifies that the source endpoint has the required extra connection attributes (ECAs) configured when connecting to a secondary node with safeguards enabled.

For more information, see [Endpoint settings when using SQL Server as a source for AWS DMS](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.ConnectionAttrib).

## Validate that endpoint has all required extra connection attributes (ECAs) when AWS DMS is connected to secondary node
<a name="CHAP_Tasks.AssessmentReport.Sqlserver.node.without.eca"></a>

**API key**: `sqlserver-check-sec-node-without-eca`

This premigration assessment verifies that all required extra connection attributes (ECAs) are configured when the source endpoint connects to a secondary node.

For more information, see [Working with self-managed SQL Server AlwaysOn availability groups](CHAP_Source.SQLServer.md#CHAP_Source.SQLServer.AlwaysOn).

# MySQL assessments
<a name="CHAP_Tasks.AssessmentReport.MySQL"></a>

This section describes individual premigration assessments for migration tasks that use a MySQL, Aurora MySQL-Compatible Edition, or Aurora MySQL-Compatible Edition Serverless source endpoint.

**Topics**
+ [Validate if Binary Log transaction compression is disabled](#CHAP_Tasks.AssessmentReport.MySQL.BinaryLogTransaction)
+ [Validate if DMS user has REPLICATION CLIENT and REPLICATION SLAVE permissions for the source database](#CHAP_Tasks.AssessmentReport.MySQL.ReplicationClientPermissions)
+ [Validate if DMS user has SELECT permissions for the source database tables](#CHAP_Tasks.AssessmentReport.MySQL.DMSUserSelectPermissions)
+ [Validate if the `server_id` is set to 1 or greater in the source database](#CHAP_Tasks.AssessmentReport.MySQL.ServerID)
+ [Validate if DMS user has necessary permissions for the MySQL database as a target](#CHAP_Tasks.AssessmentReport.MySQL.UserNecessaryPermissions)
+ [Validate if automatic removal of binary logs is set for the source database](#CHAP_Tasks.AssessmentReport.MySQL.BinaryLogAutomaticRemoval)
+ [Validate that limited LOB mode only is used when `BatchApplyEnabled` is set to true](#CHAP_Tasks.AssessmentReport.MySQL.LimitedLOBMode)
+ [Validate if a table uses a storage engine other than InnoDB](#CHAP_Tasks.AssessmentReport.MySQL.Innodb)
+ [Validate if auto-increment is enabled on any tables used for migration](#CHAP_Tasks.AssessmentReport.MySQL.AutoIncrement)
+ [Validate if the database binlog image is set to `FULL` to support DMS CDC](#CHAP_Tasks.AssessmentReport.MySQL.BinlogImage)
+ [Validate if the source database is a MySQL Read-Replica](#CHAP_Tasks.AssessmentReport.MySQL.ReadReplica)
+ [Validate if a table has partitions, and recommend `target_table_prep_mode` for full-load task settings](#CHAP_Tasks.AssessmentReport.MySQL.FullLoadTaskSettings)
+ [Validate if DMS supports the database version](#CHAP_Tasks.AssessmentReport.MySQL.DatabaseVersion)
+ [Validate if the target database is configured to set `local_infile` to 1](#CHAP_Tasks.AssessmentReport.MySQL.LocalInfile)
+ [Validate if target database has tables with foreign keys](#CHAP_Tasks.AssessmentReport.MySQL.ForeignKeys)
+ [Validate if source tables in the task scope have cascade constraints](#CHAP_Tasks.AssessmentReport.MySQL.Cascade)
+ [Validate if the timeout values are appropriate for a MySQL source or target](#CHAP_Tasks.AssessmentReport.MySQL.Timeout)
+ [Validate `max_statement_time` database parameter](#CHAP_Tasks.AssessmentReport.MySQL.max_statement_time)
+ [Validate if Primary Key or Unique Index exist on target for Batch Apply](#CHAP_Tasks.AssessmentReport.MySQL.batchapply_absence)
+ [Validate if both Primary Key and Unique index exist on target for Batch Apply](#CHAP_Tasks.AssessmentReport.MySQL.batchapply_simul)
+ [Validate if secondary indexes are enabled during full load on the target database](#CHAP_Tasks.AssessmentReport.MySQL.secondaryindexes)
+ [Validate if table has primary key or unique index when DMS validation is enabled](#CHAP_Tasks.AssessmentReport.MySQL.pk_validity)
+ [Recommendation on using `MaxFullLoadSubTasks` setting](#CHAP_Tasks.AssessmentReport.MySQL.fullload_subtasks)
+ [Check Transformation Rule for Digits Randomize](#CHAP_Tasks.AssessmentReport.MySQL.digits.randomise)
+ [Check Transformation Rule for Digits mask](#CHAP_Tasks.AssessmentReport.MySQL.digits.mask)
+ [Check Transformation Rule for Hashing mask](#CHAP_Tasks.AssessmentReport.MYSQL.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.MYSQL.all.digits.random)
+ [Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.MYSQL.all.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.MYSQL.all.digit.mask)
+ [Check if source Amazon Aurora MySQL instance is not a read replica](#CHAP_Tasks.AssessmentReport.MYSQL.read.only)
+ [Check if binary log retention time is set properly](#CHAP_Tasks.AssessmentReport.MYSQL.retention.time)
+ [Check if source tables do not have invisible columns](#CHAP_Tasks.AssessmentReport.MYSQL.invisible.columns)
+ [Validate if the database binlog format is set to ROW to support DMS CDC](#CHAP_Tasks.AssessmentReport.MYSQL.binlog.format)
+ [Validate that at least one selected object exists in the source database](#CHAP_Tasks.AssessmentReport.MYSQL.selection.rules)
+ [Validate that tables with generated columns exist in the source database](#CHAP_Tasks.AssessmentReport.MYSQL.generated.columns)
+ [Validate that skipTableSuspensionForPartitionDdl is enabled for partitioned tables](#CHAP_Tasks.AssessmentReport.MYSQL.tablepartition.ddl)
+ [Validate that `max_allowed_packet` size can handle source LOB columns](#CHAP_Tasks.AssessmentReport.MYSQL.maxallowed.packetlob)
+ [Validate that secondary constraints and indexes (non-primary) are present in the source database](#CHAP_Tasks.AssessmentReport.MYSQL.secondary.constraints)

## Validate if Binary Log transaction compression is disabled
<a name="CHAP_Tasks.AssessmentReport.MySQL.BinaryLogTransaction"></a>

**API key:** `mysql-check-binlog-compression`

This premigration assessment validates whether binary log transaction compression is disabled. AWS DMS doesn't support binary log transaction compression.

For more information, see [ Limitations on using a MySQL database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Limitations).

## Validate if DMS user has REPLICATION CLIENT and REPLICATION SLAVE permissions for the source database
<a name="CHAP_Tasks.AssessmentReport.MySQL.ReplicationClientPermissions"></a>

**API key:** `mysql-check-replication-privileges`

This premigration assessment validates whether the DMS user specified in the source endpoint connection settings has `REPLICATION CLIENT` and `REPLICATION SLAVE` permissions for the source database if the DMS task migration type is CDC or full-load and CDC.

For more information, see [ Using any MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Prerequisites).

## Validate if DMS user has SELECT permissions for the source database tables
<a name="CHAP_Tasks.AssessmentReport.MySQL.DMSUserSelectPermissions"></a>

**API key:** `mysql-check-select-privileges`

This premigration assessment validates whether the DMS user specified in the source endpoint connection settings has SELECT permissions for the source database tables.

For more information, see [ Using any MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Prerequisites).

## Validate if the `server_id` is set to 1 or greater in the source database
<a name="CHAP_Tasks.AssessmentReport.MySQL.ServerID"></a>

**API key:** `mysql-check-server-id`

This premigration assessment validates whether the `server_id` server variable is set to 1 or greater in the source database for CDC migration type.

For more information about sources for AWS DMS, see [ Using a self-managed MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.CustomerManaged).

## Validate if DMS user has necessary permissions for the MySQL database as a target
<a name="CHAP_Tasks.AssessmentReport.MySQL.UserNecessaryPermissions"></a>

**API key:** `mysql-check-target-privileges`

This premigration assessment validates whether the DMS user specified in the target endpoint connection settings has the necessary permissions for the MySQL database as a target.

For more information about MySQL source endpoint prerequisites, see [ Using any MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Prerequisites).

## Validate if automatic removal of binary logs is set for the source database
<a name="CHAP_Tasks.AssessmentReport.MySQL.BinaryLogAutomaticRemoval"></a>

**API key:** `mysql-check-expire-logs-days`

This premigration assessment validates whether your database is configured to automatically remove binary logs. The values of either `EXPIRE_LOGS_DAYS` or `BINLOG_EXPIRE_LOGS_SECONDS` global system variables should be greater than zero to prevent overuse of disk space during migration.
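
A minimal sketch of this check, assuming the global variables have been fetched into a dict (for example from `SHOW GLOBAL VARIABLES`); `EXPIRE_LOGS_DAYS` applies to older MySQL versions and `BINLOG_EXPIRE_LOGS_SECONDS` to MySQL 8.0 and later:

```python
def binlog_auto_removal_ok(variables: dict) -> bool:
    """True if the source purges binary logs automatically (either variable > 0)."""
    expire_days = int(variables.get("expire_logs_days", 0))
    expire_seconds = int(variables.get("binlog_expire_logs_seconds", 0))
    return expire_days > 0 or expire_seconds > 0
```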

For more information about sources for AWS DMS, see [ Using a self-managed MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.CustomerManaged).

## Validate that limited LOB mode only is used when `BatchApplyEnabled` is set to true
<a name="CHAP_Tasks.AssessmentReport.MySQL.LimitedLOBMode"></a>

**API key:** `mysql-batch-apply-lob-mode`

This premigration assessment validates whether the DMS task includes LOB columns. If LOB columns are included in the scope of the task, you must use `BatchApplyEnabled` together with limited LOB mode only.

For more information about the `BatchApplyEnabled` setting, see [ How can I use the DMS batch apply feature to improve CDC replication performance?](https://repost.aws/knowledge-center/dms-batch-apply-cdc-replication).

## Validate if a table uses a storage engine other than InnoDB
<a name="CHAP_Tasks.AssessmentReport.MySQL.Innodb"></a>

**API key:** `mysql-check-table-storage-engine`

This premigration assessment validates whether the storage engine used for any table in the source MySQL database is an engine other than InnoDB. DMS creates target tables with the InnoDB storage engine by default. If you need to use a storage engine other than InnoDB, you must manually create the table on the target database and configure your DMS task to use `TRUNCATE_BEFORE_LOAD` or `DO_NOTHING` as the full-load task setting. For more information about full-load task settings, see [Full-load task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.md).

**Note**  
This premigration assessment is not available for Aurora MySQL-Compatible Edition or Aurora MySQL-Compatible Edition Serverless.

For more information about MySQL endpoint limitations, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations).

## Validate if auto-increment is enabled on any tables used for migration
<a name="CHAP_Tasks.AssessmentReport.MySQL.AutoIncrement"></a>

**API key:** `mysql-check-auto-increment`

This premigration assessment validates whether the source tables that are used in the task have auto-increment enabled. DMS doesn't migrate the `AUTO_INCREMENT` attribute on a column to a target database. 

For more information about MySQL endpoint limitations, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations). For information about handling identity columns in MySQL, see [ Handle IDENTITY columns in AWS DMS: Part 2](https://aws.amazon.com/blogs/database/handle-identity-columns-in-aws-dms-part-2/).

## Validate if the database binlog image is set to `FULL` to support DMS CDC
<a name="CHAP_Tasks.AssessmentReport.MySQL.BinlogImage"></a>

**API key:** `mysql-check-binlog-image`

This premigration assessment checks whether the source database's binlog image is set to `FULL`. In MySQL, the `binlog_row_image` variable determines how a binary log event is written when using the `ROW` format. To ensure compatibility with DMS and support CDC, set the `binlog_row_image` variable to `FULL`. This setting ensures that DMS receives sufficient information to construct the full Data Manipulation Language (DML) for the target database during migration.

To set the binlog image to `FULL`, do the following:
+ For Amazon RDS, this value is `FULL` by default.
+ For databases hosted on-premises or on Amazon EC2, set the `binlog_row_image` value in `my.ini` (Microsoft Windows) or `my.cnf` (UNIX).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration. 
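
A minimal sketch of the check, combined with the related `binlog_format = ROW` requirement for CDC, assuming the server variables have been fetched into a dict via `SHOW VARIABLES`:

```python
def cdc_binlog_settings_ok(variables: dict) -> bool:
    """CDC needs binlog_format = ROW and binlog_row_image = FULL on the source."""
    return (
        variables.get("binlog_format", "").upper() == "ROW"
        and variables.get("binlog_row_image", "").upper() == "FULL"
    )
```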

## Validate if the source database is a MySQL Read-Replica
<a name="CHAP_Tasks.AssessmentReport.MySQL.ReadReplica"></a>

**API key:** `mysql-check-database-role`

This premigration assessment verifies whether the source database is a read replica. To enable CDC support for DMS when connected to a read replica, set the `log_slave_updates` parameter to `True`. For more information about using a self-managed MySQL database, see [Using a self-managed MySQL-compatible database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.CustomerManaged).

To set the `log_slave_updates` value to `True`, do the following:
+ For Amazon RDS, use the database's parameter group. For information about using RDS database parameter groups, see [ Working with parameter groups ](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) in the *Amazon RDS User Guide*.
+ For databases hosted on-premises or on Amazon EC2, set the `log_slave_updates` value in `my.ini` (Microsoft Windows) or `my.cnf` (UNIX).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration. 

## Validate if a table has partitions, and recommend `target_table_prep_mode` for full-load task settings
<a name="CHAP_Tasks.AssessmentReport.MySQL.FullLoadTaskSettings"></a>

**API key:** `mysql-check-table-partition`

This premigration assessment checks for the presence of tables with partitions in the source database. DMS creates tables without partitions on the MySQL target. To migrate partitioned tables to a partitioned table on the target, you must do the following:
+ Pre-create the partitioned tables in the target MySQL database.
+ Configure your DMS task to use `TRUNCATE_BEFORE_LOAD` or `DO_NOTHING` as the full-load task setting.

For more information about MySQL endpoint limitations, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations).

## Validate if DMS supports the database version
<a name="CHAP_Tasks.AssessmentReport.MySQL.DatabaseVersion"></a>

**API key:** `mysql-check-supported-version`

This premigration assessment verifies whether the source database version is compatible with DMS. For more information about supported MySQL versions, see [Source endpoints for data migration](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.DataMigration).

## Validate if the target database is configured to set `local_infile` to 1
<a name="CHAP_Tasks.AssessmentReport.MySQL.LocalInfile"></a>

**API key:** `mysql-check-target-localinfile-set`

This premigration assessment checks whether the `local_infile` parameter in the target database is set to 1. DMS requires the `local_infile` parameter to be set to 1 during full load in your target database. For more information, see [Migrating from MySQL to MySQL using AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Homogeneous). 

This assessment is only valid for a full-load or full-load and CDC task.

## Validate if target database has tables with foreign keys
<a name="CHAP_Tasks.AssessmentReport.MySQL.ForeignKeys"></a>

**API key:** `mysql-check-fk-target`

This premigration assessment checks whether a full-load or full-load and CDC task migrating to a MySQL database has tables with foreign keys. By default, DMS loads tables in alphabetical order. Tables with foreign keys and referential integrity constraints can cause the load to fail, because the parent and child tables might not be loaded at the same time.

For more information about referential integrity in DMS, see **Working with indexes, triggers, and referential integrity constraints** in the [Improving the performance of an AWS DMS migration](CHAP_BestPractices.md#CHAP_BestPractices.Performance) topic.

## Validate if source tables in the task scope have cascade constraints
<a name="CHAP_Tasks.AssessmentReport.MySQL.Cascade"></a>

**API key:** `mysql-check-cascade-constraints`

This premigration assessment checks if any of the MySQL source tables have cascade constraints. Cascade constraints are not migrated or replicated by DMS tasks, because MySQL doesn't record the changes for these events in the binlog. While AWS DMS doesn't support these constraints, you can use workarounds for relational database targets.

For information about supporting cascade constraints and other constraints, see [Indexes, Foreign Keys, or Cascade Updates or Deletes Not Migrated](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.FKsAndIndexes) in the **Troubleshooting migration tasks in AWS DMS** topic.

## Validate if the timeout values are appropriate for a MySQL source or target
<a name="CHAP_Tasks.AssessmentReport.MySQL.Timeout"></a>

**API key:** `mysql-check-target-network-parameter`

This premigration assessment checks whether a task’s MySQL endpoint has the `net_read_timeout`, `net_write_timeout` and `wait_timeout` settings set to at least 300 seconds. This is needed to prevent disconnects during the migration.
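
The threshold can be expressed as a simple check over the fetched endpoint variables (a sketch; values from `SHOW VARIABLES` arrive as strings):

```python
MIN_TIMEOUT_SECONDS = 300  # recommended minimum to avoid mid-migration disconnects


def timeouts_below_minimum(variables: dict) -> list[str]:
    """Return the timeout parameters that are set below the recommended minimum."""
    checked = ("net_read_timeout", "net_write_timeout", "wait_timeout")
    return [name for name in checked if int(variables.get(name, 0)) < MIN_TIMEOUT_SECONDS]
```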

For more information, see [Connections to a target MySQL instance are disconnected during a task](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.ConnectionDisconnect).

## Validate `max_statement_time` database parameter
<a name="CHAP_Tasks.AssessmentReport.MySQL.max_statement_time"></a>

**API key:** `mysql-check-max-statement-time`

This assessment checks the `max_statement_time` source parameter for MySQL-based sources. If any tables contain more than 1 billion rows, validate the `max_statement_time` value and consider setting it to a higher value to avoid potential data loss.

## Validate if Primary Key or Unique Index exist on target for Batch Apply
<a name="CHAP_Tasks.AssessmentReport.MySQL.batchapply_absence"></a>

**API key:** `mysql-check-batch-apply-target-pk-ui-absence`

Batch apply is only supported on tables with primary keys or unique indexes on the target table. Tables without primary keys or unique indexes cause the batch to fail, and changes are processed one by one. It is advisable to move such tables to their own tasks and use transactional apply mode instead. Alternatively, you can create a unique key on the target table.

For more information, see [Using a MySQL-compatible database as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html).

## Validate if both Primary Key and Unique index exist on target for Batch Apply
<a name="CHAP_Tasks.AssessmentReport.MySQL.batchapply_simul"></a>

**API key:** `mysql-check-batch-apply-target-pk-ui-simultaneously`

Batch apply is only supported on tables with primary keys or unique indexes on the target table. Tables that have both a primary key and a unique index cause the batch to fail, and changes are processed one by one. It is advisable to move such tables to their own tasks and use transactional apply mode instead. Alternatively, you can drop the unique key(s) or the primary key on the target table and rebuild them after the migration.

For more information, see [Using a MySQL-compatible database as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html).

## Validate if secondary indexes are enabled during full load on the target database
<a name="CHAP_Tasks.AssessmentReport.MySQL.secondaryindexes"></a>

**API key:** `mysql-check-secondary-indexes`

Consider disabling or removing the secondary indexes from the target database. Secondary indexes can affect your migration performance during full load. It is advisable to re-enable secondary indexes before applying the cached changes.

For more information, see [Best practices for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html).

## Validate if table has primary key or unique index when DMS validation is enabled
<a name="CHAP_Tasks.AssessmentReport.MySQL.pk_validity"></a>

**API key:** `mysql-check-pk-validity`

Data validation requires that the table has a primary key or unique index.

For more information, see [AWS DMS data validation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html).

## Recommendation on using `MaxFullLoadSubTasks` setting
<a name="CHAP_Tasks.AssessmentReport.MySQL.fullload_subtasks"></a>

**API key:** `mysql-tblnum-for-max-fullload-subtasks`

This assessment checks the number of tables included in the task and recommends increasing the `MaxFullLoadSubTasks` parameter for optimal performance during the full load process. By default, AWS DMS migrates 8 tables simultaneously. Setting `MaxFullLoadSubTasks` to a higher value can improve full-load performance.

For more information, see [Full-load task settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.html).

## Check Transformation Rule for Digits Randomize
<a name="CHAP_Tasks.AssessmentReport.MySQL.digits.randomise"></a>

**API key**: `mysql-datamasking-digits-randomize`

This assessment validates whether columns used in table mappings are compatible with the Digits Randomize transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying digits randomize transformations does not guarantee any uniqueness.

## Check Transformation Rule for Digits mask
<a name="CHAP_Tasks.AssessmentReport.MySQL.digits.mask"></a>

**API key**: `mysql-datamasking-digits-mask`

This assessment validates whether any columns used in the table mapping are not supported by the Digits Mask transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying Digits Mask transformations to such columns could cause DMS task failures since uniqueness cannot be guaranteed.

## Check Transformation Rule for Hashing mask
<a name="CHAP_Tasks.AssessmentReport.MYSQL.hash.mask"></a>

**API key**: `mysql-datamasking-hash-mask`

This assessment validates whether any of the columns used in the table mapping are not supported by the Hashing Mask transformation rule. It also checks if the length of the source column exceeds 64 characters. Ideally, the target column length should be greater than 64 characters to support hash masking. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying hashing mask transformations does not guarantee uniqueness.

## Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.MYSQL.all.digits.random"></a>

**API key**: `all-to-all-validation-with-datamasking-digits-randomize`

This premigration assessment verifies that Data Validation setting and Data Masking Digit randomization are not simultaneously enabled, as these features are incompatible.

## Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.MYSQL.all.hash.mask"></a>

**API key**: `all-to-all-validation-with-datamasking-hash-mask`

This premigration assessment verifies that Data Validation setting and Data Masking Hashing mask are not simultaneously enabled, as these features are incompatible.

## Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.MYSQL.all.digit.mask"></a>

**API key**: `all-to-all-validation-with-digit-mask`

This premigration assessment verifies that Data Validation setting and Data Masking Digit mask are not simultaneously enabled, as these features are incompatible.

## Check if source Amazon Aurora MySQL instance is not a read replica
<a name="CHAP_Tasks.AssessmentReport.MYSQL.read.only"></a>

**API key**: `mysql-check-aurora-read-only`

This premigration assessment validates that, when migrating between two Amazon Aurora MySQL clusters, the source endpoint is a read/write instance, not a replica instance.

## Check if binary log retention time is set properly
<a name="CHAP_Tasks.AssessmentReport.MYSQL.retention.time"></a>

**API key**: `mysql-check-binlog-retention-time`

This premigration assessment validates whether the value of `binlog retention hours` is greater than 24 hours.

## Check if source tables do not have invisible columns
<a name="CHAP_Tasks.AssessmentReport.MYSQL.invisible.columns"></a>

**API key**: `mysql-check-invisible-columns`

This premigration assessment validates that source tables don't have invisible columns. AWS DMS doesn't migrate data from invisible columns in your source database.

## Validate if the database binlog format is set to ROW to support DMS CDC
<a name="CHAP_Tasks.AssessmentReport.MYSQL.binlog.format"></a>

**API key**: `mysql-check-binlog-format`

This premigration assessment validates whether the source database binlog format is configured to `ROW` to support Change Data Capture (CDC). To set the binlog format to `ROW`, do the following:
+ For Amazon RDS, use the database's parameter group. For information, see [Configuring MySQL binary logging for Single-AZ databases](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQL.BinaryFormat.html) in the *Amazon RDS User Guide*. 
+ For databases hosted on-premises or on Amazon EC2, set the `binlog_format` value in `my.ini` (Microsoft Windows) or `my.cnf` (UNIX).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration. For more information about self-hosted MySQL servers, see [Using a self-managed MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.CustomerManaged).
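For a self-managed server, the relevant configuration is a few lines in `my.cnf` (or `my.ini`). The following is an illustrative sketch; the log base name and `server-id` value are arbitrary examples:

```ini
[mysqld]
# Required by DMS CDC: log changes at the row level
binlog_format = ROW
# Binary logging must be enabled for CDC
log-bin = mysql-bin
server-id = 1
```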

## Validate that at least one selected object exists in the source database
<a name="CHAP_Tasks.AssessmentReport.MYSQL.selection.rules"></a>

**API key**: `all-check-source-selection-rules`

This premigration assessment verifies that at least one object specified in the selection rules exists in the source database, including pattern matching for wildcard-based rules.

## Validate that tables with generated columns exist in the source database
<a name="CHAP_Tasks.AssessmentReport.MYSQL.generated.columns"></a>

**API key**: `mysql-check-generated-columns`

This premigration assessment checks whether any of the MySQL source tables have generated columns. AWS DMS tasks don't migrate or replicate generated columns. For information about how to migrate generated columns, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations).

## Validate that `skipTableSuspensionForPartitionDdl` is enabled for partitioned tables
<a name="CHAP_Tasks.AssessmentReport.MYSQL.tablepartition.ddl"></a>

**API key**: `mysql-check-skip-table-suspension-partition-ddl`

This premigration assessment detects partitioned tables in the source database and verifies the `skipTableSuspensionForPartitionDdl` parameter setting. Failure to set this parameter may result in unnecessary table suspensions during migration. For more information, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations).

## Validate that `max_allowed_packet` size can handle source LOB columns
<a name="CHAP_Tasks.AssessmentReport.MYSQL.maxallowed.packetlob"></a>

**API key**: `mysql-check-max-allowed-packet-lob`

AWS DMS detects LOB columns in source tables that exceed your current `max_allowed_packet` setting. This mismatch can cause replication failures during data migration. For more information, see [Troubleshooting issues with MySQL](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL).
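On a self-managed source, you can inspect and raise `max_allowed_packet` directly; on Amazon RDS, use the parameter group instead. A sketch, with 1 GiB as an example value (not a recommendation for every workload):

```sql
-- Check the current setting (value is in bytes)
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Example: raise the limit to 1 GiB for new connections
SET GLOBAL max_allowed_packet = 1073741824;
```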

## Validate that secondary constraints and indexes (non-primary) are present in the source database
<a name="CHAP_Tasks.AssessmentReport.MYSQL.secondary.constraints"></a>

**API key**: `all-check-secondary-constraints`

This premigration assessment verifies that secondary constraints and indexes (foreign keys, check constraints, non-clustered indexes) are present in the source database.

# MariaDB assessments
<a name="CHAP_Tasks.AssessmentReport.MariaDB"></a>

This section describes individual premigration assessments for migration tasks that use a MariaDB source endpoint.

To create an individual premigration assessment using the AWS DMS API, use the listed API key for the `Include` parameter of the [StartReplicationTaskAssessmentRun](https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTaskAssessmentRun.html) action.

**Topics**
+ [Validate if the `server_id` is set to 1 or greater in the source database](#CHAP_Tasks.AssessmentReport.MariaDB.ServerID)
+ [Validate if automatic removal of binary logs is set for the source database](#CHAP_Tasks.AssessmentReport.MariaDB.AutomaticRemovalBinaryLogs)
+ [Validate that limited LOB mode only is used when BatchApplyEnabled is set to true](#CHAP_Tasks.AssessmentReport.MariaDB.LimitedLOBMode)
+ [Validate if Binary Log transaction compression is disabled](#CHAP_Tasks.AssessmentReport.MariaDB.BinaryLogTransactionCompression)
+ [Validate if DMS user has REPLICATION CLIENT and REPLICATION SLAVE privileges for the source database](#CHAP_Tasks.AssessmentReport.MariaDB.ReplicationClientSlavePrivileges)
+ [Validate if DMS user has SELECT permissions for the source database tables](#CHAP_Tasks.AssessmentReport.MariaDB.DMSUserSELECTPermissions)
+ [Validate if DMS user has necessary privileges for the MySQL-compatible database as a target](#CHAP_Tasks.AssessmentReport.MariaDB.DMSUserNecessaryPermissions)
+ [Validate if a table uses a storage engine other than InnoDB](#CHAP_Tasks.AssessmentReport.MariaDB.Innodb)
+ [Validate if auto-increment is enabled on any tables used for migration](#CHAP_Tasks.AssessmentReport.MariaDB.AutoIncrement)
+ [Validate if the database binlog format is set to `ROW` to support DMS CDC](#CHAP_Tasks.AssessmentReport.MariaDB.BinlogFormat)
+ [Validate if the database binlog image is set to `FULL` to support DMS CDC](#CHAP_Tasks.AssessmentReport.MariaDB.BinlogImage)
+ [Validate if the source database is a MariaDB Read-Replica](#CHAP_Tasks.AssessmentReport.MariaDB.ReadReplica)
+ [Validate if a table has partitions, and recommend `TRUNCATE_BEFORE_LOAD` or `DO_NOTHING` for full-load task settings](#CHAP_Tasks.AssessmentReport.MariaDB.FullLoadTaskSettings)
+ [Validate if DMS supports the database version](#CHAP_Tasks.AssessmentReport.MariaDB.DatabaseVersion)
+ [Validate if the target database is configured to set `local_infile` to 1](#CHAP_Tasks.AssessmentReport.MariaDB.LocalInfile)
+ [Validate if target database has tables with foreign keys](#CHAP_Tasks.AssessmentReport.MariaDB.ForeignKeys)
+ [Validate if source tables in the task scope have cascade constraints](#CHAP_Tasks.AssessmentReport.MariaDB.Cascade)
+ [Validate if source tables in the task scope have generated columns](#CHAP_Tasks.AssessmentReport.MariaDB.GeneratedColumns)
+ [Validate if the timeout values are appropriate for a MariaDB source](#CHAP_Tasks.AssessmentReport.MariaDB.Timeout.Source)
+ [Validate if the timeout values are appropriate for a MariaDB target](#CHAP_Tasks.AssessmentReport.MariaDB.Timeout.Target)
+ [Validate `max_statement_time` database parameter](#CHAP_Tasks.AssessmentReport.MariaDB.database.parameter)
+ [Validate if Primary Key or Unique Index exist on target for Batch Apply](#CHAP_Tasks.AssessmentReport.MariaDB.batchapply)
+ [Validate if both Primary Key and Unique index exist on target for Batch Apply](#CHAP_Tasks.AssessmentReport.MariaDB.batchapply.simultaneous)
+ [Validate if secondary indexes are enabled during full load on the target database](#CHAP_Tasks.AssessmentReport.MariaDB.secondary.indexes)
+ [Validate if table has primary key or unique index when DMS validation is enabled](#CHAP_Tasks.AssessmentReport.MariaDB.dmsvalidation)
+ [Recommendation on using `MaxFullLoadSubTasks` setting](#CHAP_Tasks.AssessmentReport.MariaDB.maxfullload)
+ [Check Transformation Rule for Digits Randomize](#CHAP_Tasks.AssessmentReport.MariaDB.digits.randomize)
+ [Check Transformation Rule for Digits mask](#CHAP_Tasks.AssessmentReport.MariaDB.digits.mask)
+ [Check Transformation Rule for Hashing mask](#CHAP_Tasks.AssessmentReport.MariaDB.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.MariaDB.all.digits.random)
+ [Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.MariaDB.all.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.MariaDB.all.digit.mask)
+ [Check if binary log retention time is set properly](#CHAP_Tasks.AssessmentReport.MariaDB.retention.time)
+ [Check if source tables do not have invisible columns](#CHAP_Tasks.AssessmentReport.MariaDB.invisible.columns)
+ [Validate that at least one selected object exists in the source database](#CHAP_Tasks.AssessmentReport.MariaDB.selection.rules)
+ [Validate that `skipTableSuspensionForPartitionDdl` is enabled for partitioned tables](#CHAP_Tasks.AssessmentReport.MariaDB.suspension.ddl)
+ [Validate that secondary constraints and indexes (non-primary) are present in the source database](#CHAP_Tasks.AssessmentReport.MariaDB.secondary.constraints)

## Validate if the `server_id` is set to 1 or greater in the source database
<a name="CHAP_Tasks.AssessmentReport.MariaDB.ServerID"></a>

**API key:** `mariadb-check-server-id`

This premigration assessment validates whether the `server_id` server variable is set to 1 or greater in the source database for CDC migration type.

For more information about MariaDB endpoint limitations, see [Using a self-managed MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.CustomerManaged).

## Validate if automatic removal of binary logs is set for the source database
<a name="CHAP_Tasks.AssessmentReport.MariaDB.AutomaticRemovalBinaryLogs"></a>

**API key:** `mariadb-check-expire-logs-days`

This premigration assessment validates whether your database is configured to automatically remove binary logs. The value of either the `EXPIRE_LOGS_DAYS` or `BINLOG_EXPIRE_LOGS_SECONDS` global system variable should be greater than zero to prevent overuse of disk space during migration.
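On a self-managed MariaDB server, you can check and set the expiry directly; 7 days (604,800 seconds) is shown below purely as an example, and on older MariaDB versions `expire_logs_days` applies instead:

```sql
-- Check the current binary log expiry setting
SHOW GLOBAL VARIABLES LIKE 'binlog_expire_logs_seconds';

-- Example: expire binary logs after 7 days
SET GLOBAL binlog_expire_logs_seconds = 604800;
```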

For more information about MariaDB endpoint limitations, see [Using a self-managed MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.CustomerManaged).

## Validate that limited LOB mode only is used when BatchApplyEnabled is set to true
<a name="CHAP_Tasks.AssessmentReport.MariaDB.LimitedLOBMode"></a>

**API key:** `mariadb-batch-apply-lob-mode`

When LOB columns are included in the replication, you can use `BatchApplyEnabled` in limited LOB mode only. Using other options of the LOB mode will cause the batch to fail, and AWS DMS will process the changes one by one. We recommend that you move these tables to their own tasks and use transactional apply mode instead.

For more information about the `BatchApplyEnabled` setting, see [How can I use the DMS batch apply feature to improve CDC replication performance?](https://repost.aws/knowledge-center/dms-batch-apply-cdc-replication).

## Validate if Binary Log transaction compression is disabled
<a name="CHAP_Tasks.AssessmentReport.MariaDB.BinaryLogTransactionCompression"></a>

**API key:** `mariadb-check-binlog-compression`

This premigration assessment validates whether Binary Log transaction compression is disabled. AWS DMS doesn't support binary log transaction compression.

For more information, see [Limitations on using a MySQL database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Limitations).

## Validate if DMS user has REPLICATION CLIENT and REPLICATION SLAVE privileges for the source database
<a name="CHAP_Tasks.AssessmentReport.MariaDB.ReplicationClientSlavePrivileges"></a>

**API key:** `mariadb-check-replication-privileges`

This premigration assessment validates whether the DMS user specified in the source endpoint connection settings has `REPLICATION CLIENT` and `REPLICATION SLAVE` privileges for the source database, if the DMS task migration type is CDC or full-load and CDC.

For more information, see [Using any MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Prerequisites).

## Validate if DMS user has SELECT permissions for the source database tables
<a name="CHAP_Tasks.AssessmentReport.MariaDB.DMSUserSELECTPermissions"></a>

**API key:** `mariadb-check-select-privileges`

This premigration assessment validates whether the DMS user specified in the source endpoint connection settings has `SELECT` permissions for the source database tables.

For more information, see [Using any MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Prerequisites).

## Validate if DMS user has necessary privileges for the MySQL-compatible database as a target
<a name="CHAP_Tasks.AssessmentReport.MariaDB.DMSUserNecessaryPermissions"></a>

**API key:** `mariadb-check-target-privileges`

This premigration assessment validates whether the DMS user specified in the target endpoint connection settings has necessary privileges for the MySQL-compatible database as a target.

For more information, see [Using any MySQL-compatible database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html#CHAP_Source.MySQL.Prerequisites).

## Validate if a table uses a storage engine other than InnoDB
<a name="CHAP_Tasks.AssessmentReport.MariaDB.Innodb"></a>

**API key:** `mariadb-check-table-storage-engine`

This premigration assessment validates whether any table in the source MariaDB database uses a storage engine other than InnoDB. DMS creates target tables with the InnoDB storage engine by default. If you need to use a storage engine other than InnoDB, manually create the table on the target database and configure your DMS task to use `TRUNCATE_BEFORE_LOAD` or `DO_NOTHING` as the full-load task setting. For more information about full-load task settings, see [Full-load task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.md).

For more information about MariaDB endpoint limitations, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations).

## Validate if auto-increment is enabled on any tables used for migration
<a name="CHAP_Tasks.AssessmentReport.MariaDB.AutoIncrement"></a>

**API key:** `mariadb-check-auto-increment`

This premigration assessment validates whether the source tables that are used in the task have auto-increment enabled. DMS doesn't migrate the `AUTO_INCREMENT` attribute on a column to a target database.

For more information about MariaDB endpoint limitations, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations). For information about handling identity columns in MariaDB, see [Handle IDENTITY columns in AWS DMS: Part 2](https://aws.amazon.com/blogs/database/handle-identity-columns-in-aws-dms-part-2/).

## Validate if the database binlog format is set to `ROW` to support DMS CDC
<a name="CHAP_Tasks.AssessmentReport.MariaDB.BinlogFormat"></a>

**API key:** `mariadb-check-binlog-format`

This premigration assessment validates whether the source database binlog format is set to `ROW` to support DMS Change Data Capture (CDC). 

To set the binlog format to `ROW`, do the following:
+ For Amazon RDS, use the database's parameter group. For information about using an RDS parameter group, see [Configuring MySQL binary logging](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQL.BinaryFormat.html) in the *Amazon RDS User Guide*.
+ For databases hosted on-premises or on Amazon EC2, set the `binlog_format` value in `my.ini` (Microsoft Windows) or `my.cnf` (UNIX).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration. 

For more information about self-hosted MariaDB servers, see [Using a self-managed MySQL-compatible database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.CustomerManaged).

## Validate if the database binlog image is set to `FULL` to support DMS CDC
<a name="CHAP_Tasks.AssessmentReport.MariaDB.BinlogImage"></a>

**API key:** `mariadb-check-binlog-image`

This premigration assessment checks whether the source database's binlog image is set to `FULL`. In MariaDB, the `binlog_row_image` variable determines how a binary log event is written when using the `ROW` format. To ensure compatibility with DMS and support CDC, set the `binlog_row_image` variable to `FULL`. This setting ensures that DMS receives sufficient information to construct the full Data Manipulation Language (DML) for the target database during migration.

To set the binlog image to `FULL`, do the following:
+ For Amazon RDS, this value is `FULL` by default.
+ For databases hosted on-premises or on Amazon EC2, set the `binlog_row_image` value in `my.ini` (Microsoft Windows) or `my.cnf` (UNIX).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration. 

For more information about self-hosted MariaDB servers, see [Using a self-managed MySQL-compatible database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.CustomerManaged).
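On a self-managed server, this is a one-line configuration entry; a minimal `my.cnf` sketch:

```ini
[mysqld]
# Required by DMS CDC: write full before and after images of each row
binlog_row_image = FULL
```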

## Validate if the source database is a MariaDB Read-Replica
<a name="CHAP_Tasks.AssessmentReport.MariaDB.ReadReplica"></a>

**API key:** `mariadb-check-database-role`

This premigration assessment verifies whether the source database is a read replica. To enable CDC support for DMS when connected to a read replica, set the `log_slave_updates` parameter to `True`. For more information about using a self-managed MySQL database, see [Using a self-managed MySQL-compatible database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.CustomerManaged).

To set the `log_slave_updates` value to `True`, do the following:
+ For Amazon RDS, use the database's parameter group. For information about using RDS database parameter groups, see [Working with parameter groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html) in the *Amazon RDS User Guide*.
+ For databases hosted on-premises or on Amazon EC2, set the `log_slave_updates` value in `my.ini` (Microsoft Windows) or `my.cnf` (UNIX).

This assessment is only valid for a full-load and CDC migration, or a CDC-only migration. This assessment is not valid for a full-load only migration. 

## Validate if a table has partitions, and recommend `TRUNCATE_BEFORE_LOAD` or `DO_NOTHING` for full-load task settings
<a name="CHAP_Tasks.AssessmentReport.MariaDB.FullLoadTaskSettings"></a>

**API key:** `mariadb-check-table-partition`

This premigration assessment checks for the presence of tables with partitions in the source database. DMS creates tables without partitions on the MariaDB target. To migrate partitioned tables to a partitioned table on the target, you must do the following:
+ Pre-create the partitioned tables in the target MariaDB database.
+ Configure your DMS task to use `TRUNCATE_BEFORE_LOAD` or `DO_NOTHING` as the full-load task setting.

For more information about MariaDB endpoint limitations, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations).

## Validate if DMS supports the database version
<a name="CHAP_Tasks.AssessmentReport.MariaDB.DatabaseVersion"></a>

**API key:** `mariadb-check-supported-version`

This premigration assessment verifies whether the source database version is compatible with DMS. CDC is not supported with Amazon RDS MariaDB versions 10.4 or lower, or MariaDB versions greater than 10.11. For more information about supported MariaDB versions, see [Source endpoints for data migration](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.DataMigration).

## Validate if the target database is configured to set `local_infile` to 1
<a name="CHAP_Tasks.AssessmentReport.MariaDB.LocalInfile"></a>

**API key:** `mariadb-check-target-localinfile-set`

This premigration assessment checks whether the `local_infile` parameter in the target database is set to 1. DMS requires the `local_infile` parameter to be set to 1 in your target database during full load. For more information, see [Migrating from MySQL to MySQL using AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Homogeneous).

This assessment is only valid for a full-load task.
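On Amazon RDS or Aurora targets, set `local_infile` through the parameter group; on a self-managed target you can set it directly. An illustrative sketch:

```sql
-- Check whether LOAD DATA LOCAL INFILE is allowed on the target
SHOW GLOBAL VARIABLES LIKE 'local_infile';

-- Self-managed target: enable it for the full-load phase
SET GLOBAL local_infile = 1;
```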

## Validate if target database has tables with foreign keys
<a name="CHAP_Tasks.AssessmentReport.MariaDB.ForeignKeys"></a>

**API key:** `mariadb-check-fk-target`

This premigration assessment checks whether a full-load, or full-load and CDC, task migrating to a MariaDB database has tables with foreign keys. By default, DMS loads tables in alphabetical order. Tables with foreign keys and referential integrity constraints can cause the load to fail, because the parent and child tables might not be loaded at the same time.

For more information about referential integrity in DMS, see **Working with indexes, triggers, and referential integrity constraints** in the [Improving the performance of an AWS DMS migration](CHAP_BestPractices.md#CHAP_BestPractices.Performance) topic.

## Validate if source tables in the task scope have cascade constraints
<a name="CHAP_Tasks.AssessmentReport.MariaDB.Cascade"></a>

**API key:** `mariadb-check-cascade-constraints`

This premigration assessment checks whether any of the MariaDB source tables have cascade constraints. DMS tasks don't migrate or replicate cascade constraints, because MariaDB doesn't record the changes for these events in the binlog. While AWS DMS doesn't support these constraints, you can use workarounds for relational database targets.

For information about supporting cascade constraints and other constraints, see [Indexes, Foreign Keys, or Cascade Updates or Deletes Not Migrated](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.FKsAndIndexes) in the **Troubleshooting migration tasks in AWS DMS** topic.

## Validate if source tables in the task scope have generated columns
<a name="CHAP_Tasks.AssessmentReport.MariaDB.GeneratedColumns"></a>

**API key:** `mariadb-check-generated-columns`

This premigration assessment checks whether any of the MariaDB source tables have generated columns. DMS tasks don't migrate or replicate generated columns.

For information about how to migrate generated columns, see [Connections to a target MySQL instance are disconnected during a task](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.ConnectionDisconnect).

## Validate if the timeout values are appropriate for a MariaDB source
<a name="CHAP_Tasks.AssessmentReport.MariaDB.Timeout.Source"></a>

**API key:** `mariadb-check-source-network-parameter`

This premigration assessment checks whether a task’s MariaDB source endpoint has the `net_read_timeout`, `net_write_timeout`, and `wait_timeout` settings set to at least 300 seconds. This helps prevent disconnects during the migration.

For more information, see [Connections to a target MySQL instance are disconnected during a task](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.ConnectionDisconnect).

## Validate if the timeout values are appropriate for a MariaDB target
<a name="CHAP_Tasks.AssessmentReport.MariaDB.Timeout.Target"></a>

**API key:** `mariadb-check-target-network-parameter`

This premigration assessment checks whether a task’s MariaDB target endpoint has the `net_read_timeout`, `net_write_timeout`, and `wait_timeout` settings set to at least 300 seconds. This helps prevent disconnects during the migration.

For more information, see [Connections to a target MySQL instance are disconnected during a task](CHAP_Troubleshooting.md#CHAP_Troubleshooting.MySQL.ConnectionDisconnect).
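On a self-managed MariaDB endpoint, source or target, you can inspect and raise these timeouts directly; on RDS, use the parameter group instead. A sketch using the 300-second minimum described above:

```sql
-- Check the relevant timeout values (in seconds)
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('net_read_timeout', 'net_write_timeout', 'wait_timeout');

-- Example: raise each to the 300-second minimum for new connections
SET GLOBAL net_read_timeout  = 300;
SET GLOBAL net_write_timeout = 300;
SET GLOBAL wait_timeout      = 300;
```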

## Validate `max_statement_time` database parameter
<a name="CHAP_Tasks.AssessmentReport.MariaDB.database.parameter"></a>

**API key:** `mariadb-check-max-statement-time`

AWS DMS validates whether the source database parameter `max_statement_time` is set to a value other than 0. Set this parameter to 0 to accommodate the DMS full-load process. You can consider restoring the parameter value after the full load is complete, because a value other than 0 during migration may result in loss of data.

## Validate if Primary Key or Unique Index exist on target for Batch Apply
<a name="CHAP_Tasks.AssessmentReport.MariaDB.batchapply"></a>

**API key:** `mariadb-check-batch-apply-target-pk-ui-absence`

Batch apply is supported only on tables that have a primary key or a unique index on the target table. Tables without a primary key or a unique index cause the batch to fail, and AWS DMS processes the changes one by one. We recommend that you move such tables to their own tasks and use transactional apply mode instead. Alternatively, you can create a unique key on the target table.

For more information, see [Using a MySQL-compatible database as a target for AWS Database Migration Service](CHAP_Target.MySQL.md).
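If you choose to add a unique key on the target instead, the statement is standard DDL. The table and column names below are hypothetical examples, not names from your schema:

```sql
-- Hypothetical example: give the target table a unique index
-- so that batch apply can match rows
CREATE UNIQUE INDEX idx_orders_order_id
    ON orders (order_id);
```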

## Validate if both Primary Key and Unique index exist on target for Batch Apply
<a name="CHAP_Tasks.AssessmentReport.MariaDB.batchapply.simultaneous"></a>

**API key:** `mariadb-check-batch-apply-target-pk-ui-simultaneously`

Batch apply is supported only on tables that have a primary key or a unique index on the target table. Tables that have both a primary key and a unique index cause the batch to fail, and AWS DMS processes the changes one by one. We recommend that you move such tables to their own tasks and use transactional apply mode instead. Alternatively, you can drop the unique keys or the primary key on the target table and rebuild them after the migration completes.

For more information, see [Using a MySQL-compatible database as a target for AWS Database Migration Service](CHAP_Target.MySQL.md).

## Validate if secondary indexes are enabled during full load on the target database
<a name="CHAP_Tasks.AssessmentReport.MariaDB.secondary.indexes"></a>

**API key:** `mariadb-check-secondary-indexes`

Consider disabling or removing the secondary indexes on the target database, because secondary indexes can affect your migration performance during full load. We recommend re-enabling the secondary indexes before applying the cached changes.

For more information, see [Best practices for AWS Database Migration Service](CHAP_BestPractices.md).

## Validate if table has primary key or unique index when DMS validation is enabled
<a name="CHAP_Tasks.AssessmentReport.MariaDB.dmsvalidation"></a>

**API key:** `mariadb-check-pk-validity`

Data validation requires that the table has a primary key or unique index on both source and target.

For more information, see [AWS DMS data validation](CHAP_Validating.md).

## Recommendation on using `MaxFullLoadSubTasks` setting
<a name="CHAP_Tasks.AssessmentReport.MariaDB.maxfullload"></a>

This assessment checks the number of tables included in the task and recommends increasing the `MaxFullLoadSubTasks` parameter for optimal performance during the full-load process. By default, AWS DMS migrates eight tables simultaneously. Setting the `MaxFullLoadSubTasks` parameter to a higher value can improve full-load performance.

For more information, see [Full-load task settings](CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.md).
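In the task settings JSON, this parameter lives under `FullLoadSettings`; the value 16 below is an example, not a recommendation for every workload:

```json
{
  "FullLoadSettings": {
    "MaxFullLoadSubTasks": 16
  }
}
```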

## Check Transformation Rule for Digits Randomize
<a name="CHAP_Tasks.AssessmentReport.MariaDB.digits.randomize"></a>

**API key**: `mariadb-datamasking-digits-randomize`

This assessment validates whether columns used in table mappings are compatible with the Digits Randomize transformation rule. Additionally, the assessment checks whether any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, because the Digits Randomize transformation doesn't guarantee uniqueness.

## Check Transformation Rule for Digits mask
<a name="CHAP_Tasks.AssessmentReport.MariaDB.digits.mask"></a>

**API key**: `mariadb-datamasking-digits-mask`

This assessment validates whether any columns used in the table mapping are not supported by the Digits Mask transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying Digits Mask transformations to such columns could cause DMS task failures since uniqueness cannot be guaranteed.

## Check Transformation Rule for Hashing mask
<a name="CHAP_Tasks.AssessmentReport.MariaDB.hash.mask"></a>

**API key**: `mariadb-datamasking-hash-mask`

This assessment validates whether any of the columns used in the table mapping are not supported by the Hashing Mask transformation rule. It also checks whether the length of the source column exceeds 64 characters; ideally, the target column length should be greater than 64 characters to support hash masking. Additionally, the assessment checks whether any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, because the Hashing Mask transformation doesn't guarantee uniqueness.

## Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.MariaDB.all.digits.random"></a>

**API key**: `all-to-all-validation-with-datamasking-digits-randomize`

This premigration assessment verifies that the Data Validation task setting and the Data Masking Digit randomization transformation are not enabled simultaneously, because these features are incompatible.

## Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.MariaDB.all.hash.mask"></a>

**API key**: `all-to-all-validation-with-datamasking-hash-mask`

This premigration assessment verifies that the Data Validation task setting and the Data Masking Hashing mask transformation are not enabled simultaneously, because these features are incompatible.

## Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.MariaDB.all.digit.mask"></a>

**API key**: `all-to-all-validation-with-digit-mask`

This premigration assessment verifies that the Data Validation task setting and the Data Masking Digit mask transformation are not enabled simultaneously, because these features are incompatible.

## Check if binary log retention time is set properly
<a name="CHAP_Tasks.AssessmentReport.MariaDB.retention.time"></a>

**API key**: `mariadb-check-binlog-retention-time`

This premigration assessment validates whether the value of the `binlog retention hours` setting is greater than 24 hours.

## Check if source tables do not have invisible columns
<a name="CHAP_Tasks.AssessmentReport.MariaDB.invisible.columns"></a>

**API key**: `mariadb-check-invisible-columns`

This premigration assessment verifies that source tables don't have invisible columns. AWS DMS doesn't migrate data from invisible columns in your source database.

## Validate that at least one selected object exists in the source database
<a name="CHAP_Tasks.AssessmentReport.MariaDB.selection.rules"></a>

**API key**: `all-check-source-selection-rules`

This premigration assessment verifies that at least one object specified in the selection rules exists in the source database, including pattern matching for wildcard-based rules.

## Validate that `skipTableSuspensionForPartitionDdl` is enabled for partitioned tables
<a name="CHAP_Tasks.AssessmentReport.MariaDB.suspension.ddl"></a>

**API key**: `mariadb-check-skip-table-suspension-partition-ddl`

This premigration assessment detects partitioned tables in the source database and verifies the `skipTableSuspensionForPartitionDdl` parameter setting. Failure to set this parameter may result in unnecessary table suspensions during migration. 

For more information, see [Limitations on using a MySQL database as a source for AWS DMS](CHAP_Source.MySQL.md#CHAP_Source.MySQL.Limitations).

## Validate that secondary constraints and indexes (non-primary) are present in the source database
<a name="CHAP_Tasks.AssessmentReport.MariaDB.secondary.constraints"></a>

**API key**: `all-check-secondary-constraints`

This premigration assessment verifies that secondary constraints and indexes (foreign keys, check constraints, non-clustered indexes) are present in the source database.

# PostgreSQL assessments
<a name="CHAP_Tasks.AssessmentReport.PG"></a>

This section describes individual premigration assessments for migration tasks that use a PostgreSQL source endpoint.

**Topics**
+ [Validate if DDL event trigger is set to ENABLE ALWAYS](#CHAP_Tasks.AssessmentReport.PG.DDLEventTrigger)
+ [Validate if PostGIS columns exist in the source database](#CHAP_Tasks.AssessmentReport.PG.PostGISColumns)
+ [Validate if foreign key constraint is disabled on the target tables during the full-load process](#CHAP_Tasks.AssessmentReport.PG.ForeignKeyConstraintDisabled)
+ [Validate if tables with similar names exist](#CHAP_Tasks.AssessmentReport.PG.ValidateSimilarNames)
+ [Validate if there are tables with ARRAY data type without a primary key](#CHAP_Tasks.AssessmentReport.PG.ValidateArrayWithoutPrimaryKey)
+ [Validate if primary keys or unique indexes exist on the target tables when BatchApplyEnabled is enabled](#CHAP_Tasks.AssessmentReport.PG.PrimaryKeysUniqueIndexes)
+ [Validate if any table of the target database has secondary indexes for the full-load migration task](#CHAP_Tasks.AssessmentReport.PG.TargetDatabaseSecondaryIndexes)
+ [Validate that limited LOB mode only is used when BatchApplyEnabled is set to true](#CHAP_Tasks.AssessmentReport.PG.LimitedLOBMode)
+ [Validate if source database version is supported by DMS for migration](#CHAP_Tasks.AssessmentReport.PG.SourceVersion)
+ [Validate the `logical_decoding_work_mem` parameter on the source database](#CHAP_Tasks.AssessmentReport.PG.LogicalDecoding)
+ [Validate whether the source database has any long-running transactions](#CHAP_Tasks.AssessmentReport.PG.LongRunning)
+ [Validate the source database parameter `max_slot_wal_keep_size`](#CHAP_Tasks.AssessmentReport.PG.)
+ [Check if the source database parameter `max_wal_senders` is set to support CDC](#CHAP_Tasks.AssessmentReport.PG.MaxWalSenders)
+ [Check if the source database is configured for `PGLOGICAL`](#CHAP_Tasks.AssessmentReport.PG.pglogical)
+ [Validate if the source table primary key is of LOB Datatype](#CHAP_Tasks.AssessmentReport.PG.pklob)
+ [Validate if the source table has a primary key](#CHAP_Tasks.AssessmentReport.PG.pk)
+ [Validate if prepared transactions are present on the source database](#CHAP_Tasks.AssessmentReport.PG.preparedtransactions)
+ [Validate if `wal_sender_timeout` is set to a minimum required value to support DMS CDC](#CHAP_Tasks.AssessmentReport.PG.waltime)
+ [Validate if `wal_level` is set to logical on the source database](#CHAP_Tasks.AssessmentReport.PG.wallevel)
+ [Validate if both Primary Key and Unique index exist on target for Batch Apply](#CHAP_Tasks.AssessmentReport.PG.batchapply)
+ [Recommend Max LOB setting when LOB objects are found](#CHAP_Tasks.AssessmentReport.PG.lobsize)
+ [Validate that the table has a valid primary key or unique index when DMS validation is enabled](#CHAP_Tasks.AssessmentReport.PG.pkvalidity)
+ [Validate if AWS DMS user has necessary privileges to the target](#CHAP_Tasks.AssessmentReport.PG.targetprivileges)
+ [Validate availability of free replication slots for CDC](#CHAP_Tasks.AssessmentReport.PG.slotscount)
+ [Verify DMS User Full Load Permissions](#CHAP_Tasks.AssessmentReport.PG.object.privileges)
+ [Check Transformation Rule for Digits Randomize](#CHAP_Tasks.AssessmentReport.PG.digits.randomize)
+ [Check Transformation Rule for Digits mask](#CHAP_Tasks.AssessmentReport.PG.digits.mask)
+ [Check Transformation Rule for Hashing mask](#CHAP_Tasks.AssessmentReport.PG.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.PG.all.digit.random)
+ [Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.PG.all.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.PG.all.digit.mask)
+ [Validate that at least one selected object exists in the source database](#CHAP_Tasks.AssessmentReport.PG.selection.rules)
+ [Validate that target PostgreSQL database contains generated columns](#CHAP_Tasks.AssessmentReport.PG.target.generatedcol)
+ [Validate that materialized views exist in homogeneous PostgreSQL migrations](#CHAP_Tasks.AssessmentReport.PG.mat.views)
+ [Validate that REPLICA IDENTITY FULL conflicts with pglogical plugin usage](#CHAP_Tasks.AssessmentReport.PG.repl.identity.full)
+ [Validate that secondary constraints and indexes (non-primary) are present in the source database](#CHAP_Tasks.AssessmentReport.PG.secondary.constraints)
+ [Validate CHAR/VARCHAR columns compatibility for migration to Oracle](#CHAP_Tasks.AssessmentReport.PG.varchar.columns)
+ [Validate that `idle_in_transaction_session_timeout` setting is configured on the source database](#CHAP_Tasks.AssessmentReport.PG.transaction.session)
+ [Validate that AWS DMS user has required roles for AWS-managed PostgreSQL databases](#CHAP_Tasks.AssessmentReport.PG.rds.roles)
+ [Validate that the target endpoint is not a read replica](#CHAP_Tasks.AssessmentReport.PG.read.replica)
+ [Verify source Aurora PostgreSQL read replica version](#CHAP_Tasks.AssessmentReport.PG.Aurorasource.replica.version)
+ [Verify source PostgreSQL read replica version](#CHAP_Tasks.AssessmentReport.PG.source.replica.version)

## Validate if DDL event trigger is set to ENABLE ALWAYS
<a name="CHAP_Tasks.AssessmentReport.PG.DDLEventTrigger"></a>

 **API key:** `postgres-check-ddl-event-trigger` 

This premigration assessment validates whether the DDL event trigger is set to `ENABLE ALWAYS`. When your source database is also a target for another third-party replication system, DDL changes might not migrate during CDC. This situation can prevent DMS from triggering the `awsdms_intercept_ddl` event. To work around the situation, modify the trigger on your source database as in the following example: 

```
alter event trigger awsdms_intercept_ddl enable always;
```

For more information, see [ Limitations on using a PostgreSQL database as a DMS source](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Limitations).

## Validate if PostGIS columns exist in the source database
<a name="CHAP_Tasks.AssessmentReport.PG.PostGISColumns"></a>

 **API key:** `postgres-check-postgis-data-type` 

This premigration assessment validates whether columns of the PostGIS data type exist when the source and target engines are different. AWS DMS supports the PostGIS data type only for homogeneous (like-to-like) migrations. 

For more information, see [ Limitations on using a PostgreSQL database as a DMS source](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Limitations).

## Validate if foreign key constraint is disabled on the target tables during the full-load process
<a name="CHAP_Tasks.AssessmentReport.PG.ForeignKeyConstraintDisabled"></a>

 **API key:** `postgres-check-session-replication-role` 

This premigration assessment validates whether the `session_replication_role` parameter is set to `REPLICA` on the target to disable foreign key constraints during the full-load phase. For full-load migration tasks, disable foreign key constraints on the target. 

For more information about PostgreSQL endpoint limitations, see [ Using a PostgreSQL database as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html).
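
As a sketch, on a self-managed PostgreSQL target you can disable foreign key triggers either per session or for the migration user (`dms_user` below is a placeholder, not a name from this guide); on Amazon RDS targets, you would typically set the parameter in a parameter group instead:

```
-- Disable foreign key and other user triggers for the current session
SET session_replication_role = replica;

-- Or persist the setting for the user that AWS DMS connects as
-- (dms_user is a placeholder for your actual migration user)
ALTER ROLE dms_user SET session_replication_role = 'replica';
```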

## Validate if tables with similar names exist
<a name="CHAP_Tasks.AssessmentReport.PG.ValidateSimilarNames"></a>

 **API key:** `postgres-check-similar-table-name` 

This premigration assessment validates whether there are tables with similar names on the source. Multiple tables whose names differ only in letter case can cause unpredictable behavior during replication. 

For more information about PostgreSQL endpoint limitations, see [ Limitations on using a PostgreSQL database as a DMS source](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Limitations).
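
One way to find such tables yourself is to group table names case-insensitively; a sketch:

```
-- Find table names that differ only in letter case
SELECT lower(schemaname) AS schema_name,
       lower(tablename)  AS table_name,
       count(*)          AS variants
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
GROUP BY lower(schemaname), lower(tablename)
HAVING count(*) > 1;
```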

## Validate if there are tables with ARRAY data type without a primary key
<a name="CHAP_Tasks.AssessmentReport.PG.ValidateArrayWithoutPrimaryKey"></a>

 **API key:** `postgres-check-table-with-array` 

This premigration assessment validates whether there are tables with an `ARRAY` data type and no primary key. A table with an `ARRAY` data type that lacks a primary key is ignored during the full load. 

For more information about PostgreSQL endpoint limitations, see [ Limitations on using a PostgreSQL database as a DMS source](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Limitations).

## Validate if primary keys or unique indexes exist on the target tables when BatchApplyEnabled is enabled
<a name="CHAP_Tasks.AssessmentReport.PG.PrimaryKeysUniqueIndexes"></a>

 **API key:** `postgres-check-batch-apply-target-pk-ui-absence` 

 Batch apply is only supported on tables with primary keys or unique indexes on the target table. Tables without primary keys or unique indexes will cause the batch to fail, and AWS DMS will process the changes one by one. We recommend that you create separate tasks for such tables and use transactional apply mode instead. Alternatively, you can create a unique key on the target table. 

For more information, see [ Using a PostgreSQL database as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html).

## Validate if any table of the target database has secondary indexes for the full-load migration task
<a name="CHAP_Tasks.AssessmentReport.PG.TargetDatabaseSecondaryIndexes"></a>

 **API key:** `postgres-check-target-secondary-indexes` 

 This premigration assessment validates whether there are tables with secondary indexes in the scope of the full-load migration task. We recommend that you drop the secondary indexes for the duration of the full-load task. 

For more information, see [ Using a PostgreSQL database as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html).

## Validate that limited LOB mode only is used when BatchApplyEnabled is set to true
<a name="CHAP_Tasks.AssessmentReport.PG.LimitedLOBMode"></a>

 **API key:** `postgres-batch-apply-lob-mode` 

 When LOB columns are included in the replication, you can use `BatchApplyEnabled` in limited LOB mode only. Using other options of the LOB mode will cause the batch to fail, and AWS DMS will process changes one by one. We recommend that you move these tables to their own tasks and use transactional apply mode instead. 

For more information about the `BatchApplyEnabled` setting, see [ How can I use the DMS batch apply feature to improve CDC replication performance?](https://repost.aws/knowledge-center/dms-batch-apply-cdc-replication).

## Validate if source database version is supported by DMS for migration
<a name="CHAP_Tasks.AssessmentReport.PG.SourceVersion"></a>

**API key:** `postgres-check-dbversion`

This premigration assessment verifies whether the source database version is compatible with AWS DMS.

## Validate the `logical_decoding_work_mem` parameter on the source database
<a name="CHAP_Tasks.AssessmentReport.PG.LogicalDecoding"></a>

**API key:** `postgres-check-for-logical-decoding-work-mem` 

This premigration assessment recommends tuning the `logical_decoding_work_mem` parameter on the source database. On a highly transactional database with long-running transactions or many subtransactions, logical decoding memory consumption can increase and spill to disk, which causes high DMS source latency during replication. In such scenarios, tune `logical_decoding_work_mem`. This parameter is supported in PostgreSQL versions 13 and later.
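
A sketch of checking and raising the parameter on a self-managed source (`256MB` is an example value; size it to your workload):

```
-- Check the current value (PostgreSQL 13 and later)
SHOW logical_decoding_work_mem;

-- Raise the limit and reload the configuration
ALTER SYSTEM SET logical_decoding_work_mem = '256MB';
SELECT pg_reload_conf();
```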

## Validate whether the source database has any long-running transactions
<a name="CHAP_Tasks.AssessmentReport.PG.LongRunning"></a>

**API key:** `postgres-check-longrunningtxn` 

This premigration assessment verifies whether the source database has any long-running transactions that have lasted more than 10 minutes. Starting the task might fail because, by default, DMS checks for open transactions when starting the task.
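
You can look for such transactions before starting the task; a sketch:

```
-- List transactions that have been open longer than 10 minutes
SELECT pid, usename, state, xact_start,
       now() - xact_start AS duration
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '10 minutes'
ORDER BY xact_start;
```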

## Validate the source database parameter `max_slot_wal_keep_size`
<a name="CHAP_Tasks.AssessmentReport.PG."></a>

**API key:** `postgres-check-maxslot-wal-keep-size` 

This premigration assessment verifies the value configured for `max_slot_wal_keep_size`. When `max_slot_wal_keep_size` is set to a non-default value, the DMS task may fail due to the removal of required WAL files.
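
To check the current value on the source (the parameter is available in PostgreSQL 13 and later):

```
-- -1 (the default) means replication slots can retain an unlimited amount of WAL
SHOW max_slot_wal_keep_size;
```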

## Check if the source database parameter `max_wal_senders` is set to support CDC
<a name="CHAP_Tasks.AssessmentReport.PG.MaxWalSenders"></a>

**API key:** `postgres-check-maxwalsenders` 

This premigration assessment verifies the value configured for `max_wal_senders` on the source database. DMS requires `max_wal_senders` to be set greater than 1 to support Change Data Capture (CDC).
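
A sketch of checking and raising the value on a self-managed source (changing `max_wal_senders` requires a server restart; 10 is an example value):

```
-- Check how many WAL sender processes are allowed
SHOW max_wal_senders;

-- Allow more senders; takes effect after a restart
ALTER SYSTEM SET max_wal_senders = 10;
```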

## Check if the source database is configured for `PGLOGICAL`
<a name="CHAP_Tasks.AssessmentReport.PG.pglogical"></a>

**API key:** `postgres-check-pglogical` 

This premigration assessment verifies whether the `shared_preload_libraries` parameter includes `pglogical` to support `PGLOGICAL` for CDC. You can ignore this assessment if you plan to use test_decoding for logical replication.
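
A sketch of checking the loaded libraries, and the `postgresql.conf` entry that loads the plugin on a self-managed server (a restart is required; on Amazon RDS and Aurora, use the equivalent parameter group setting):

```
-- pglogical must appear in the list for PGLOGICAL-based CDC
SHOW shared_preload_libraries;

-- In postgresql.conf on a self-managed server:
--   shared_preload_libraries = 'pglogical'
```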

## Validate if the source table primary key is of LOB Datatype
<a name="CHAP_Tasks.AssessmentReport.PG.pklob"></a>

**API key:** `postgres-check-pk-lob` 

This premigration assessment verifies if a table's primary key is of Large Object (LOB) datatype. DMS does not support replication if the source table has an LOB column as a primary key. 

## Validate if the source table has a primary key
<a name="CHAP_Tasks.AssessmentReport.PG.pk"></a>

**API key:** `postgres-check-pk` 

This premigration assessment verifies if primary keys exist for the tables used in the task scope. DMS doesn’t support replication for tables without primary keys, unless the replica identity is set to `full` on the source table. 
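
One way to find in-scope tables without a primary key is to query the information schema; a sketch:

```
-- List user tables that have no primary key constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
  AND c.constraint_name IS NULL;
```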

## Validate if prepared transactions are present on the source database
<a name="CHAP_Tasks.AssessmentReport.PG.preparedtransactions"></a>

**API key:** `postgres-check-preparedtxn` 

This premigration assessment verifies if there are any prepared transactions present on the source database. Replication slot creation might stop responding if there are any prepared transactions on the source database.
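
You can list pending prepared transactions on the source; a sketch:

```
-- Prepared (two-phase) transactions waiting for COMMIT PREPARED or ROLLBACK PREPARED
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts;
```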

## Validate if `wal_sender_timeout` is set to a minimum required value to support DMS CDC
<a name="CHAP_Tasks.AssessmentReport.PG.waltime"></a>

**API key:** `postgres-check-walsenderstimeout` 

This premigration assessment verifies if `wal_sender_timeout` is set to a minimum of 10000 milliseconds (10 seconds). A DMS task with CDC requires a minimum of 10000 milliseconds (10 seconds), and fails if the value is less than 10000. 
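
A sketch of checking and setting the timeout on a self-managed source:

```
-- Check the current timeout
SHOW wal_sender_timeout;

-- Set it to the 10-second minimum that DMS CDC requires, then reload
ALTER SYSTEM SET wal_sender_timeout = 10000;
SELECT pg_reload_conf();
```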

## Validate if `wal_level` is set to logical on the source database
<a name="CHAP_Tasks.AssessmentReport.PG.wallevel"></a>

**API key:** `postgres-check-wallevel`

 This premigration assessment verifies if `wal_level` is set to logical. For DMS CDC to work, this parameter needs to be enabled on the source database.
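
A sketch of checking and setting the level on a self-managed source (the change requires a restart; on Amazon RDS and Aurora, set the `rds.logical_replication` parameter to 1 in the parameter group instead):

```
-- Must report 'logical' for DMS CDC
SHOW wal_level;

-- On a self-managed server; takes effect after a restart
ALTER SYSTEM SET wal_level = logical;
```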

## Validate if both Primary Key and Unique index exist on target for Batch Apply
<a name="CHAP_Tasks.AssessmentReport.PG.batchapply"></a>

**API key:** `postgres-check-batch-apply-target-pk-ui-simultaneously`

Batch apply is supported only on target tables that have a primary key or a unique index, but not both. Tables that have both a primary key and a unique index cause the batch to fail, and changes are processed one by one. We recommend that you move such tables to their own tasks and use transactional apply mode instead. Alternatively, you can drop the unique key or the primary key on the target table and rebuild it after the migration.

For more information, see [Enabling CDC using a self-managed PostgreSQL database as a AWS DMS source](CHAP_Source.PostgreSQL.md#CHAP_Source.PostgreSQL.Prerequisites.CDC).

## Recommend Max LOB setting when LOB objects are found
<a name="CHAP_Tasks.AssessmentReport.PG.lobsize"></a>

**API key:** `postgres-check-limited-lob-size`

The LOB size calculation for PostgreSQL differs from other engines. Make sure that you set the correct maximum LOB size in your task settings to avoid data truncation.

For more information, see [AWS DMS data validation](CHAP_Validating.md).

## Validate that the table has a valid primary key or unique index when DMS validation is enabled
<a name="CHAP_Tasks.AssessmentReport.PG.pkvalidity"></a>

**API key:** `postgres-check-pk-validity`

Data validation requires that the table has a primary key or unique index.

For more information, see [AWS DMS data validation](CHAP_Validating.md).

## Validate if AWS DMS user has necessary privileges to the target
<a name="CHAP_Tasks.AssessmentReport.PG.targetprivileges"></a>

**API key:** `postgres-check-target-privileges`

The AWS DMS user must have at least the `db_owner` user role on the target database.

For more information, see [Security requirements when using a PostgreSQL database as a target for AWS Database Migration Service](CHAP_Target.PostgreSQL.md#CHAP_Target.PostgreSQL.Security).

## Validate availability of free replication slots for CDC
<a name="CHAP_Tasks.AssessmentReport.PG.slotscount"></a>

**API key**: `postgres-check-replication-slots-count`

This assessment validates whether replication slots are available for CDC to replicate changes.

## Verify DMS User Full Load Permissions
<a name="CHAP_Tasks.AssessmentReport.PG.object.privileges"></a>

**API key**: `postgres-check-select-object-privileges`

This assessment validates whether the DMS user has the necessary SELECT privileges on tables required for Full Load operations.

## Check Transformation Rule for Digits Randomize
<a name="CHAP_Tasks.AssessmentReport.PG.digits.randomize"></a>

**API key**: `postgres-datamasking-digits-randomize`

This assessment validates whether columns used in table mappings are compatible with the Digits Randomize transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying digits randomize transformations does not guarantee any uniqueness.

## Check Transformation Rule for Digits mask
<a name="CHAP_Tasks.AssessmentReport.PG.digits.mask"></a>

**API key**: `postgres-datamasking-digits-mask`

This assessment validates whether any columns used in the table mapping are not supported by the Digits Mask transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying Digits Mask transformations to such columns could cause DMS task failures since uniqueness cannot be guaranteed.

## Check Transformation Rule for Hashing mask
<a name="CHAP_Tasks.AssessmentReport.PG.hash.mask"></a>

**API key**: `postgres-datamasking-hash-mask`

This assessment validates whether any of the columns used in the table mapping are not supported by the Hashing Mask transformation rule. It also checks if the length of the source column exceeds 64 characters. Ideally, the target column length should be greater than 64 characters to support hash masking. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying Hashing Mask transformations does not guarantee any uniqueness.

## Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.PG.all.digit.random"></a>

**API key**: `all-to-all-validation-with-datamasking-digits-randomize`

This premigration assessment verifies that Data Validation setting and Data Masking Digit randomization are not simultaneously enabled, as these features are incompatible.

## Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.PG.all.hash.mask"></a>

**API key**: `all-to-all-validation-with-datamasking-hash-mask`

This premigration assessment verifies that Data Validation setting and Data Masking Hashing mask are not simultaneously enabled, as these features are incompatible.

## Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.PG.all.digit.mask"></a>

**API key**: `all-to-all-validation-with-digit-mask`

This premigration assessment verifies that Data Validation setting and Data Masking Digit mask are not simultaneously enabled, as these features are incompatible.

## Validate that at least one selected object exists in the source database
<a name="CHAP_Tasks.AssessmentReport.PG.selection.rules"></a>

**API key**: `all-check-source-selection-rules`

This premigration assessment verifies that at least one object specified in the selection rules exists in the source database, including pattern matching for wildcard-based rules.

## Validate that target PostgreSQL database contains generated columns
<a name="CHAP_Tasks.AssessmentReport.PG.target.generatedcol"></a>

**API key**: `postgres-check-target-generated-cols`

This premigration assessment validates whether the target PostgreSQL database contains any generated columns (including both STORED and VIRTUAL types) that may require special handling during migration. Generated columns, which compute their values from other columns, need specific verification to ensure compatibility with the target PostgreSQL version and proper data consistency after migration. 

## Validate that materialized views exist in homogeneous PostgreSQL migrations
<a name="CHAP_Tasks.AssessmentReport.PG.mat.views"></a>

**API key**: `postgres-check-materialized-views`

When migrating between PostgreSQL databases, AWS DMS cannot migrate materialized views. Materialized views must be manually created on your target database after migration.

For more information, see [Limitations on using a PostgreSQL database as a DMS source](CHAP_Source.PostgreSQL.md#CHAP_Source.PostgreSQL.Limitations).

## Validate that REPLICA IDENTITY FULL conflicts with pglogical plugin usage
<a name="CHAP_Tasks.AssessmentReport.PG.repl.identity.full"></a>

**API key**: `postgres-check-pglogical-replica-identity-full`

This premigration assessment detects tables using REPLICA IDENTITY FULL. While REPLICA IDENTITY FULL is supported with the test_decoding plugin, using it with pglogical prevents updates from being replicated correctly. Either change the REPLICA IDENTITY setting to DEFAULT or INDEX, or switch to the test_decoding plugin to keep REPLICA IDENTITY FULL.

For more information, see [Enabling change data capture (CDC) using logical replication](CHAP_Source.PostgreSQL.md#CHAP_Source.PostgreSQL.Security).

## Validate that secondary constraints and indexes (non-primary) are present in the source database
<a name="CHAP_Tasks.AssessmentReport.PG.secondary.constraints"></a>

**API key**: `all-check-secondary-constraints`

This premigration assessment verifies that secondary constraints and indexes (foreign keys, check constraints, non-clustered indexes) are present in the source database.

## Validate CHAR/VARCHAR columns compatibility for migration to Oracle
<a name="CHAP_Tasks.AssessmentReport.PG.varchar.columns"></a>

**API key**: `postgres-to-oracle-check-varchar-columns`

This premigration assessment verifies that NCHAR/NVARCHAR2 data type columns used in the target database are compatible with CHAR/VARCHAR columns in the source database.

## Validate that `idle_in_transaction_session_timeout` setting is configured on the source database
<a name="CHAP_Tasks.AssessmentReport.PG.transaction.session"></a>

**API key**: `postgres-check-idle-in-transaction-session-timeout`

This premigration assessment verifies that the `idle_in_transaction_session_timeout` parameter is not set to 0 on the source database.
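
To check the current value on the source (`0` disables the timeout, which is what the assessment flags):

```
SHOW idle_in_transaction_session_timeout;
```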

## Validate that AWS DMS user has required roles for AWS-managed PostgreSQL databases
<a name="CHAP_Tasks.AssessmentReport.PG.rds.roles"></a>

**API key**: `postgres-check-rds-roles`

This premigration assessment verifies that the AWS DMS user has been configured with all required roles for AWS-managed PostgreSQL databases. Insufficient roles can cause migration task failures.

## Validate that the target endpoint is not a read replica
<a name="CHAP_Tasks.AssessmentReport.PG.read.replica"></a>

**API key**: `all-check-target-read-replica`

This premigration assessment verifies that the target endpoint is not configured as a read replica. AWS DMS requires write access to the target database and cannot replicate to read-only replicas.

## Verify source Aurora PostgreSQL read replica version
<a name="CHAP_Tasks.AssessmentReport.PG.Aurorasource.replica.version"></a>

**API key**: `postgres-aurora-check-source-replica-role-cdc`

This premigration assessment verifies that the source endpoint uses an Aurora PostgreSQL read replica running version 16 or later. CDC operations require replication slots, which Aurora PostgreSQL does not support on read-only nodes in versions earlier than 16.

For more information, see [Read replica as a source for PostgreSQL](CHAP_Source.PostgreSQL.md#CHAP_Source.PostgreSQL.ReadReplica).

## Verify source PostgreSQL read replica version
<a name="CHAP_Tasks.AssessmentReport.PG.source.replica.version"></a>

**API key**: `postgres-check-source-replica-role-cdc`

This premigration assessment verifies that the source endpoint uses a PostgreSQL read replica running version 16 or later. CDC operations require replication slots, which PostgreSQL does not support on read-only nodes in versions earlier than 16.

For more information, see [Read replica as a source for PostgreSQL](CHAP_Source.PostgreSQL.md#CHAP_Source.PostgreSQL.ReadReplica).

# Db2 LUW Assessments
<a name="CHAP_Tasks.AssessmentReport.Db2"></a>

This section describes individual premigration assessments for migration tasks that use a Db2 LUW source endpoint.

**Topics**
+ [Validate if the IBM Db2 LUW database is configured to be recoverable](#CHAP_Tasks.AssessmentReport.Db2.config.param)
+ [Validate if the DMS user has the required permissions on the source database to perform a full-load](#CHAP_Tasks.AssessmentReport.Db2.load.privileges)
+ [Validate if the DMS user has the required permissions on the source database to perform CDC](#CHAP_Tasks.AssessmentReport.Db2.cdc.privileges)
+ [Validate if the source IBM Db2 LUW table has the Db2 XML data type](#CHAP_Tasks.AssessmentReport.Db2.xml.data.type)
+ [Validate if the source IBM Db2 LUW version is supported by AWS DMS](#CHAP_Tasks.AssessmentReport.Db2.supported.version.source)
+ [Validate if the target IBM Db2 LUW version is supported by AWS DMS](#CHAP_Tasks.AssessmentReport.Db2.supported.version.target)
+ [Check Transformation Rule for Digits Randomize](#CHAP_Tasks.AssessmentReport.Db2.digits.randomise)
+ [Check Transformation Rule for Digits mask](#CHAP_Tasks.AssessmentReport.Db2.digits.mask)
+ [Check Transformation Rule for Hashing mask](#CHAP_Tasks.AssessmentReport.Db2.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Db2.all.digits.random)
+ [Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Db2.all.hash.mask)
+ [Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously](#CHAP_Tasks.AssessmentReport.Db2.all.digit.mask)
+ [Verify that target tables have the correct index configuration (Primary Key or Unique Index, not both) for Batch Apply compatibility](#CHAP_Tasks.AssessmentReport.Db2.pk.absence)
+ [Validate that only ‘Limited LOB mode’ is used when `BatchApplyEnabled` is set to true](#CHAP_Tasks.AssessmentReport.Db2.lob.mode)
+ [Validate if secondary indexes are disabled on the target database during full-load](#CHAP_Tasks.AssessmentReport.secondary.indexes)
+ [Validate that at least one selected object exists in the source database](#CHAP_Tasks.AssessmentReport.Db2.selection.rules)
+ [Validate that secondary constraints and indexes (non-primary) are present in the source database](#CHAP_Tasks.AssessmentReport.Db2.secondary.constraints)

## Validate if the IBM Db2 LUW database is configured to be recoverable
<a name="CHAP_Tasks.AssessmentReport.Db2.config.param"></a>

**API key**: `db2-check-archive-config-param`

This premigration assessment validates whether the Db2 LUW database has either or both of the database configuration parameters `LOGARCHMETH1` and `LOGARCHMETH2` set to **ON**.

## Validate if the DMS user has the required permissions on the source database to perform a full-load
<a name="CHAP_Tasks.AssessmentReport.Db2.load.privileges"></a>

**API key**: `db2-check-full-load-privileges`

This premigration assessment validates whether the DMS user has all the required permissions on the source database for full-load operations.

## Validate if the DMS user has the required permissions on the source database to perform CDC
<a name="CHAP_Tasks.AssessmentReport.Db2.cdc.privileges"></a>

**API key**: `db2-check-cdc-privileges`

This premigration assessment validates whether the DMS user has all the required permissions on the source database for CDC operations.

## Validate if the source IBM Db2 LUW table has the Db2 XML data type
<a name="CHAP_Tasks.AssessmentReport.Db2.xml.data.type"></a>

**API key**: `db2-check-xml-data-type`

This premigration assessment validates if the source IBM Db2 LUW table has the Db2 XML data type.

## Validate if the source IBM Db2 LUW version is supported by AWS DMS
<a name="CHAP_Tasks.AssessmentReport.Db2.supported.version.source"></a>

**API key**: `db2-validate-supported-versions-source`

This premigration assessment validates if the source IBM Db2 LUW version is supported by AWS DMS.

## Validate if the target IBM Db2 LUW version is supported by AWS DMS
<a name="CHAP_Tasks.AssessmentReport.Db2.supported.version.target"></a>

**API key**: `db2-validate-supported-versions-target`

This premigration assessment validates if the target IBM Db2 LUW version is supported by AWS DMS.

## Check Transformation Rule for Digits Randomize
<a name="CHAP_Tasks.AssessmentReport.Db2.digits.randomise"></a>

**API key**: `db2-datamasking-digits-randomize`

This assessment validates whether columns used in table mappings are compatible with the Digits Randomize transformation rule. Additionally, the assessment checks if any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, as applying digits randomize transformations does not guarantee any uniqueness.

## Check Transformation Rule for Digits mask
<a name="CHAP_Tasks.AssessmentReport.Db2.digits.mask"></a>

**API key**: `db2-datamasking-digits-mask`

This assessment validates whether any columns used in the table mapping are unsupported by the Digits Mask transformation rule. Additionally, the assessment checks whether any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, because applying Digits Mask transformations to such columns can cause DMS task failures, given that uniqueness can't be guaranteed.

## Check Transformation Rule for Hashing mask
<a name="CHAP_Tasks.AssessmentReport.Db2.hash.mask"></a>

**API key**: `db2-datamasking-hash-mask`

This assessment validates whether any of the columns used in the table mapping are unsupported by the Hashing Mask transformation rule. It also checks whether the length of the source column exceeds 64 characters. Ideally, the target column length should be greater than 64 characters to support hash masking. Additionally, the assessment checks whether any columns selected for transformation are part of primary keys, unique constraints, or foreign keys, because applying Hashing Mask transformations doesn't guarantee uniqueness.
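The 64-character figure above is consistent with a hex-encoded SHA-256 digest, which is always 64 characters long regardless of input size. The specific hash function DMS uses isn't stated here, so treat the following Python check as an illustration only:

```python
import hashlib

# A SHA-256 digest rendered as hexadecimal is always 64 characters,
# regardless of the input length -- which is why a target column
# shorter than 64 characters can't hold a hex-encoded hash value.
for value in (b"", b"short", b"a much longer source column value"):
    digest = hashlib.sha256(value).hexdigest()
    print(len(digest))  # 64 every time
```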

## Verify that Data Validation task settings and Data Masking Digit randomization are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Db2.all.digits.random"></a>

**API key**: `all-to-all-validation-with-datamasking-digits-randomize`

This premigration assessment verifies that the Data Validation setting and Data Masking Digit randomization are not enabled simultaneously, because these features are incompatible.

## Verify that Data Validation task settings and Data Masking Hashing mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Db2.all.hash.mask"></a>

**API key**: `all-to-all-validation-with-datamasking-hash-mask`

This premigration assessment verifies that the Data Validation setting and the Data Masking Hashing mask are not enabled simultaneously, because these features are incompatible.

## Verify that Data Validation task settings and Data Masking Digit mask are not enabled simultaneously
<a name="CHAP_Tasks.AssessmentReport.Db2.all.digit.mask"></a>

**API key**: `all-to-all-validation-with-digit-mask`

This premigration assessment verifies that the Data Validation setting and the Data Masking Digit mask are not enabled simultaneously, because these features are incompatible.

## Verify that target tables have the correct index configuration (Primary Key or Unique Index, not both) for Batch Apply compatibility
<a name="CHAP_Tasks.AssessmentReport.Db2.pk.absence"></a>

**API key**: `db2-check-batch-apply-target-pk-ui-absence`

Batch Apply requires that target tables have either a primary key or a unique key, but not both. If a table contains both a primary key and a unique key, the apply mode changes from batch to transactional.
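The decision rule described above can be sketched as follows; the function and its parameters are hypothetical, for illustration only:

```python
def apply_mode(has_primary_key: bool, has_unique_key: bool) -> str:
    """Sketch of the Batch Apply compatibility rule: exactly one of a
    primary key or a unique key keeps batch apply; having both (or
    neither) forces the transactional apply mode."""
    if has_primary_key != has_unique_key:  # exactly one of the two
        return "batch"
    return "transactional"

print(apply_mode(has_primary_key=True, has_unique_key=False))  # batch
print(apply_mode(has_primary_key=True, has_unique_key=True))   # transactional
```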

## Validate that only ‘Limited LOB mode’ is used when `BatchApplyEnabled` is set to true
<a name="CHAP_Tasks.AssessmentReport.Db2.lob.mode"></a>

**API key**: `db2-check-for-batch-apply-lob-mode`

This premigration assessment validates whether the DMS task includes LOB columns. If LOB columns are in the scope of the task, you must use ‘Limited LOB mode’ to be able to use `BatchApplyEnabled=true`.
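For example, a task settings fragment that satisfies this check might look like the following. The `LobMaxSize` value (in KB) is illustrative; choose one that fits your largest LOBs:

```
{
    "TargetMetadata": {
        "BatchApplyEnabled": true,
        "FullLobMode": false,
        "LimitedSizeLobMode": true,
        "LobMaxSize": 32
    }
}
```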

## Validate if secondary indexes are disabled on the target database during full-load
<a name="CHAP_Tasks.AssessmentReport.secondary.indexes"></a>

**API key**: `db2-check-secondary-indexes`

This premigration assessment validates whether secondary indexes on the target database are disabled during full load. Disable or remove secondary indexes for the duration of the full load.

## Validate that at least one selected object exists in the source database
<a name="CHAP_Tasks.AssessmentReport.Db2.selection.rules"></a>

**API key**: `all-check-source-selection-rules`

This premigration assessment verifies that at least one object specified in the selection rules exists in the source database, including pattern matching for wildcard-based rules.
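For example, a wildcard-based selection rule like the following passes the check only if at least one matching object exists on the source. The schema and table names here (`DB2INST1`, `EMP%`) are placeholders:

```
{
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-emp-tables",
            "object-locator": {
                "schema-name": "DB2INST1",
                "table-name": "EMP%"
            },
            "rule-action": "include"
        }
    ]
}
```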

## Validate that secondary constraints and indexes (non-primary) are present in the source database
<a name="CHAP_Tasks.AssessmentReport.Db2.secondary.constraints"></a>

**API key**: `all-check-secondary-constraints`

This premigration assessment verifies that secondary constraints and indexes (foreign keys, check constraints, non-clustered indexes) are present in the source database.

# Starting and viewing data type assessments (Legacy)
<a name="CHAP_Tasks.DataTypeAssessments"></a>

**Note**  
This section describes legacy content. We recommend that you use premigration assessment runs, described previously in [Specifying, starting, and viewing premigration assessment runs](CHAP_Tasks.PremigrationAssessmentRuns.md).  
Data type assessments are not available in the console. You can only run data type assessments using the API or CLI, and you can only view the results of a data type assessment in the task's S3 bucket.  
The premigration assessment runs automatically under the following conditions:  
During a start task operation, if you haven't manually run the assessment during task creation.  
During a resume task operation, if no completed assessment exists within the past 7 days. 

A data type assessment identifies data types in a source database that might not be migrated correctly because the target doesn't support them. During this assessment, AWS DMS reads the source database schemas for a migration task and creates a list of the column data types. It then compares this list to a predefined list of data types that AWS DMS supports. If your migration task includes unsupported data types, AWS DMS creates a report that lists them. If the task has no unsupported data types, AWS DMS doesn't create a report.

AWS DMS supports creating data type assessment reports for the following relational databases:
+ Oracle
+ SQL Server 
+ PostgreSQL
+ MySQL
+ MariaDB
+ Amazon Aurora

You can start and view a data type assessment report using the CLI and SDKs to access the AWS DMS API:
+ The CLI uses the [start-replication-task-assessment](https://docs.aws.amazon.com/cli/latest/reference/dms/start-replication-task-assessment) command to start a data type assessment, and the [describe-replication-task-assessment-results](https://docs.aws.amazon.com/cli/latest/reference/dms/describe-replication-task-assessment-results) command to view the latest data type assessment report in JSON format.
+ The AWS DMS API uses the [StartReplicationTaskAssessment](https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTaskAssessment.html) operation to start a data type assessment, and the [DescribeReplicationTaskAssessmentResults](https://docs.aws.amazon.com/dms/latest/APIReference/API_DescribeReplicationTaskAssessmentResults.html) operation to view the latest data type assessment report in JSON format.

The data type assessment report is a single JSON file that includes a summary that lists the unsupported data types and the column count for each one. It includes a list of data structures for each unsupported data type including the schemas, tables, and columns that have the unsupported data type. You can use the report to modify the source data types and improve the migration success.

There are two levels of unsupported data types. Data types that the report lists as not supported can't be migrated. Data types that the report lists as partially supported might be converted to another data type, but might not migrate as you expect.

The following example shows a sample data type assessment report that you might view.

```
{
    "summary": {
        "task-name": "test15",
        "not-supported": {
            "data-type": [
                "sql-variant"
            ],
            "column-count": 3
        },
        "partially-supported": {
            "data-type": [
                "float8",
                "jsonb"
            ],
            "column-count": 2
        }
    },
    "types": [
        {
            "data-type": "float8",
            "support-level": "partially-supported",
            "schemas": [
                {
                    "schema-name": "schema1",
                    "tables": [
                        {
                            "table-name": "table1",
                            "columns": [
                                "column1",
                                "column2"
                            ]
                        },
                        {
                            "table-name": "table2",
                            "columns": [
                                "column3",
                                "column4"
                            ]
                        }
                    ]
                },
                {
                    "schema-name": "schema2",
                    "tables": [
                        {
                            "table-name": "table3",
                            "columns": [
                                "column5",
                                "column6"
                            ]
                        },
                        {
                            "table-name": "table4",
                            "columns": [
                                "column7",
                                "column8"
                            ]
                        }
                    ]
                }
            ]
        },
        {
            "data-type": "jsonb",
            "support-level": "partially-supported",
            "schemas": [
                {
                    "schema-name": "schema1",
                    "tables": [
                        {
                            "table-name": "table1",
                            "columns": [
                                "column9",
                                "column10"
                            ]
                        },
                        {
                            "table-name": "table2",
                            "columns": [
                                "column11",
                                "column12"
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
```
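To triage a report like this, you can flatten it into a list of affected columns. The following sketch assumes the report structure shown above; a trimmed-down report with hypothetical values is embedded for illustration:

```python
import json

# Trimmed-down report in the format shown above (hypothetical values).
report = json.loads("""
{
    "summary": {
        "task-name": "test15",
        "not-supported": {"data-type": ["sql-variant"], "column-count": 3},
        "partially-supported": {"data-type": ["float8"], "column-count": 2}
    },
    "types": [
        {
            "data-type": "float8",
            "support-level": "partially-supported",
            "schemas": [
                {
                    "schema-name": "schema1",
                    "tables": [
                        {"table-name": "table1",
                         "columns": ["column1", "column2"]}
                    ]
                }
            ]
        }
    ]
}
""")

# Flatten into (schema, table, column, data type, support level) rows.
rows = [
    (schema["schema-name"], table["table-name"], column,
     entry["data-type"], entry["support-level"])
    for entry in report["types"]
    for schema in entry["schemas"]
    for table in schema["tables"]
    for column in table["columns"]
]
for row in rows:
    print(row)
```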

AWS DMS stores the latest and all previous data type assessments in an Amazon S3 bucket created by AWS DMS in your account. The Amazon S3 bucket name has the following format, where *customerId* is your customer ID and *customerDNS* is an internal identifier.

```
dms-customerId-customerDNS
```

**Note**  
By default, you can create up to 100 Amazon S3 buckets in each of your AWS accounts. Because AWS DMS creates a bucket in your account, make sure that it doesn't exceed your bucket limit. Otherwise, the data type assessment fails.

All data type assessment reports for a given migration task are stored in a bucket folder named with the task identifier. Each report's file name is the date of the data type assessment, in the format `yyyy-mm-dd-hh-mm`. You can view and compare previous data type assessment reports from the Amazon S3 console.
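Because the timestamp in each file name is zero-padded (`yyyy-mm-dd-hh-mm`), sorting the object keys lexicographically also sorts them chronologically, so the latest report is simply the maximum key. A sketch, with a hypothetical task identifier and dates:

```python
# Hypothetical object keys under the task-identifier folder.
keys = [
    "my-task-id/2023-11-05-09-30",
    "my-task-id/2024-01-12-17-45",
    "my-task-id/2023-12-31-23-59",
]

# Zero-padded yyyy-mm-dd-hh-mm sorts lexicographically == chronologically,
# so the lexicographic maximum is the most recent report.
latest = max(keys)
print(latest)  # my-task-id/2024-01-12-17-45
```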

AWS DMS also creates an AWS Identity and Access Management (IAM) role to allow access to the S3 bucket created for these reports. The role name is `dms-access-for-tasks`. The role uses the `AmazonDMSRedshiftS3Role` policy. If a **ResourceNotFoundFault** error occurs when you run `StartReplicationTaskAssessment`, see [ResourceNotFoundFault](CHAP_Tasks.AssessmentReport.Troubleshooting.md#CHAP_Tasks.AssessmentReport.Troubleshooting.ResourceNotFoundFault) in the Troubleshooting section for information about creating the `dms-access-for-tasks` role manually.

# Troubleshooting assessment runs
<a name="CHAP_Tasks.AssessmentReport.Troubleshooting"></a>

Following, you can find topics about troubleshooting issues with running assessment reports with AWS Database Migration Service. These topics can help you to resolve common issues.

**Topics**
+ [ResourceNotFoundFault when running StartReplicationTaskAssessment](#CHAP_Tasks.AssessmentReport.Troubleshooting.ResourceNotFoundFault)

## ResourceNotFoundFault when running StartReplicationTaskAssessment
<a name="CHAP_Tasks.AssessmentReport.Troubleshooting.ResourceNotFoundFault"></a>

You may encounter the following exception when running the [StartReplicationTaskAssessment](https://docs.aws.amazon.com/dms/latest/APIReference/API_StartReplicationTaskAssessment.html) action.

```
An error occurred (ResourceNotFoundFault) when calling the StartReplicationTaskAssessment operation: Task assessment has not been run or dms-access-for-tasks IAM Role not configured correctly
```

If you encounter this exception, create the **dms-access-for-tasks** role by doing the following:

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. On the **Select trusted entity** page, for **Trusted entity type**, choose **Custom trust policy**. 

1. Paste the following JSON in the editor, replacing the existing text.

------
#### [ JSON ]

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "1",
               "Effect": "Allow",
               "Principal": {
                   "Service": "dms.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

------

   The preceding policy grants the `sts:AssumeRole` permission to AWS DMS. After you add the **AmazonDMSRedshiftS3Role** policy, DMS can create the S3 bucket in your account and put the data type assessment results into this S3 bucket.

1. Choose **Next**.

1. On the **Add permissions** page, search for and add the **AmazonDMSRedshiftS3Role** policy. Choose **Next**.

1. On the **Name, review, and create** page, name the role **dms-access-for-tasks**. Choose **Create role**.

# Specifying supplemental data for task settings
<a name="CHAP_Tasks.TaskData"></a>

When you create or modify a replication task for some AWS DMS endpoints, the task might require additional information to perform the migration. You can specify this additional information using an option in the DMS console. Or you can specify it using the `TaskData` parameter of the `CreateReplicationTask` or `ModifyReplicationTask` API operation.

If your target endpoint is Amazon Neptune, you need to specify mapping data, supplemental to table mapping. This supplemental mapping data specifies how to convert source relational data into the target graph data that a Neptune database can consume. In this case, you can use one of two possible formats. For more information, see [Specifying graph-mapping rules using Gremlin and R2RML for Amazon Neptune as a target](CHAP_Target.Neptune.md#CHAP_Target.Neptune.GraphMapping).