

# Converting database schemas using DMS Schema Conversion
<a name="CHAP_SchemaConversion"></a>

**Note**  
The generative AI feature for DMS Schema Conversion is now available. For more information, see [Viewing your database migration assessment report for DMS Schema Conversion](assessment-reports-view.md) and [Converting database schemas in DMS Schema Conversion: step-by-step guide](schema-conversion-convert.md). 

DMS Schema Conversion in AWS Database Migration Service (AWS DMS) makes database migrations between different types of databases more predictable. Use DMS Schema Conversion to assess the complexity of your migration for your source data provider, and to convert database schemas and code objects. You can then apply the converted code to your target database.

DMS Schema Conversion automatically converts your source database schemas and most of the database code objects to a format compatible with the target database. This conversion includes tables, views, stored procedures, functions, data types, synonyms, and so on. Any objects that DMS Schema Conversion can't convert automatically are clearly marked. To complete the migration, you can convert these objects manually.

At a high level, [DMS Schema Conversion](https://aws.amazon.com/dms/schema-conversion-tool/) operates with the following three components: instance profiles, data providers, and migration projects. An *instance profile* specifies network and security settings. A *data provider* stores database connection credentials. A *migration project* contains data providers, an instance profile, and migration rules. AWS DMS uses data providers and an instance profile to design a process that converts database schemas and code objects.

For the list of supported source databases, see [Sources for DMS Schema Conversion](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.SchemaConversion).

For the list of supported target databases, see [Targets for DMS Schema Conversion](CHAP_Introduction.Targets.md#CHAP_Introduction.Targets.SchemaConversion).

The following diagram illustrates the DMS Schema Conversion process.

![\[An architecture diagram of the DMS Schema Conversion feature.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/dms-schema-conversion-diagram.png)


Use the following topics to better understand how to use DMS Schema Conversion.

**Topics**
+ [Supported AWS Regions](#schema-conversion-supported-regions)
+ [Schema conversion features](#schema-conversion-features)
+ [Schema conversion limitations](#schema-conversion-limitations)
+ [Getting started with DMS Schema Conversion](getting-started.md)
+ [Setting up a network for DMS Schema Conversion](instance-profiles-network.md)
+ [Creating source data providers in DMS Schema Conversion](data-providers-source.md)
+ [Creating and setting target data providers in DMS Schema Conversion](data-providers-target.md)
+ [Virtual data provider](virtual-data-provider.md)
+ [Managing migration projects in DMS Schema Conversion](sc-migration-projects.md)
+ [Creating database migration assessment reports with DMS Schema Conversion](assessment-reports.md)
+ [Using DMS Schema Conversion](schema-conversion.md)
+ [Using extension packs in DMS Schema Conversion](extension-pack.md)
+ [AWS IAM actions to API mapping for DMS Schema Conversion and Common Studio Framework (CSF)](schema-conversion-iam.mapping.md)

## Supported AWS Regions
<a name="schema-conversion-supported-regions"></a>

You can create a DMS Schema Conversion migration project in the following AWS Regions. In other Regions, you can use the AWS Schema Conversion Tool. For more information about AWS SCT, see the [AWS Schema Conversion Tool User Guide](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/).


| Region name | Region | 
| --- | --- | 
| Africa (Cape Town)  | af-south-1 | 
| Asia Pacific (Hong Kong) | ap-east-1 | 
| Asia Pacific (Mumbai) | ap-south-1 | 
| Asia Pacific (Hyderabad) | ap-south-2 | 
| Asia Pacific (Tokyo) | ap-northeast-1 | 
| Asia Pacific (Seoul) | ap-northeast-2 | 
| Asia Pacific (Singapore) | ap-southeast-1 | 
| Asia Pacific (Sydney) | ap-southeast-2 | 
| Asia Pacific (Jakarta) | ap-southeast-3 | 
| Asia Pacific (Melbourne) | ap-southeast-4 | 
| Canada (Central) | ca-central-1 | 
| Canada West (Calgary) | ca-west-1 | 
| Europe (Frankfurt) | eu-central-1 | 
| Europe (Zurich) | eu-central-2 | 
| Europe (Stockholm) | eu-north-1 | 
| Europe (Milan) | eu-south-1 | 
| Europe (Spain) | eu-south-2 | 
| Europe (Ireland) | eu-west-1 | 
| Europe (Paris) | eu-west-3 | 
| Israel (Tel Aviv) | il-central-1 | 
| Middle East (UAE) | me-central-1 | 
| South America (São Paulo) | sa-east-1 | 
| US East (N. Virginia) | us-east-1 | 
| US East (Ohio) | us-east-2 | 
| US West (N. California) | us-west-1 | 
| US West (Oregon) | us-west-2 | 

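For scripting scenarios, the Region list above can be captured as a simple lookup to decide whether DMS Schema Conversion is available or you need to fall back to AWS SCT. This is an illustrative sketch; the set and helper name are not part of any AWS SDK, and the set simply mirrors the table above:

```python
# Region codes from the table above where DMS Schema Conversion is available.
SCHEMA_CONVERSION_REGIONS = {
    "af-south-1", "ap-east-1", "ap-south-1", "ap-south-2",
    "ap-northeast-1", "ap-northeast-2", "ap-southeast-1", "ap-southeast-2",
    "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1",
    "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1",
    "eu-south-2", "eu-west-1", "eu-west-3", "il-central-1",
    "me-central-1", "sa-east-1", "us-east-1", "us-east-2",
    "us-west-1", "us-west-2",
}

def supports_schema_conversion(region: str) -> bool:
    """Return True if DMS Schema Conversion is available in the Region."""
    return region in SCHEMA_CONVERSION_REGIONS

print(supports_schema_conversion("us-east-1"))  # True
print(supports_schema_conversion("eu-west-2"))  # False: not in the table; use AWS SCT
```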
## Schema conversion features
<a name="schema-conversion-features"></a>

DMS Schema Conversion provides the following features:
+ DMS Schema Conversion automatically manages the AWS Cloud resources that are required for your database migration project. These resources include instance profiles, data providers, and AWS Secrets Manager secrets. They also include AWS Identity and Access Management (IAM) roles, Amazon S3 buckets, and migration projects.
+ You can use DMS Schema Conversion to connect to your source database, read the metadata, and create database migration assessment reports. You can then save the report to an Amazon S3 bucket. With these reports, you get a summary of your schema conversion tasks and the details for items that DMS Schema Conversion can't automatically convert to your target database. Database migration assessment reports help evaluate how much of your migration project DMS Schema Conversion can automate. These reports also help to estimate the amount of manual effort that is required to complete the conversion. For more information, see [Creating database migration assessment reports with DMS Schema Conversion](assessment-reports.md). 
+ After you connect to your source and target data providers, DMS Schema Conversion can convert your existing source database schemas to the target database engine. You can choose any schema item from your source database to convert. After you convert your database code in DMS Schema Conversion, you can review your source code and the converted code. You can save the converted SQL code to an Amazon S3 bucket.
+ Before you convert your source database schemas, you can set up transformation rules. You can use transformation rules to change the data type of columns, move objects from one schema to another, and change the names of objects. You can apply transformation rules to databases, schemas, tables, and columns. For more information, see [Setting up transformation rules](schema-conversion-transformation-rules.md).
+ You can change conversion settings to improve the performance of the converted code. These settings are specific for each conversion pair and depend on the features of the source database that you use in your code. For more information, see [Specifying schema conversion settings](schema-conversion-settings.md).
+ In some cases, DMS Schema Conversion can't convert source database features to equivalent Amazon RDS features. For these cases, DMS Schema Conversion creates an extension pack in your target database to emulate the features that weren't converted. For more information, see [Using extension packs](extension-pack.md).
+ You can apply the converted code and the extension pack schema to your target database. For more information, see [Applying your converted code](schema-conversion-save-apply.md#schema-conversion-apply).  
+ DMS Schema Conversion supports all of the features in the latest AWS SCT release. For more information, see [the latest release notes for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_ReleaseNotes.html).
+ You can edit converted SQL code before DMS migrates it to the target database. For more information, see [Editing and saving your converted SQL code](schema-conversion-convert.md#schema-conversion-convert-editsql).

## Schema conversion limitations
<a name="schema-conversion-limitations"></a>

DMS Schema Conversion is a web version of the AWS Schema Conversion Tool (AWS SCT). DMS Schema Conversion supports fewer database platforms and provides less functionality than the AWS SCT desktop application. To convert data warehouse schemas, big data frameworks, application SQL code, and ETL processes, use AWS SCT. For more information about AWS SCT, see the [AWS Schema Conversion Tool User Guide](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/).

The following limitations apply when you use DMS Schema Conversion for database schema conversion:
+ You can't save a migration project and use it in offline mode.
+ You can't edit SQL code for the source in a migration project for DMS Schema Conversion. To edit the SQL code of your source database, use your regular SQL editor. Choose **Refresh from database** to add the updated code in your migration project.
+ Migration rules in DMS Schema Conversion don't support changing column collation. You can't use migration rules to move objects to a new schema.
+ You can't apply filters to your source and target database trees to display only those database objects that meet the filter clause.
+ The DMS Schema Conversion extension pack doesn't include AWS Lambda functions that emulate email sending, job scheduling, and other features in your converted code.
+ DMS Schema Conversion doesn't use customer-managed KMS keys for access to any customer AWS resources. For example, DMS Schema Conversion doesn't support using a customer-managed KMS key to access customer data in Amazon S3.

# Getting started with DMS Schema Conversion
<a name="getting-started"></a>

To get started with DMS Schema Conversion, use the following tutorial. In it, you learn how to set up DMS Schema Conversion, create a migration project, and connect to your data providers. You then assess the complexity of your migration, convert your source database schema to a format compatible with your target database, and apply the converted code to your target database.

The following tutorial covers the prerequisite tasks and demonstrates the conversion of an Amazon RDS for SQL Server database to Amazon RDS for MySQL. You can use any of the supported source and target data providers. For more information, see [Source data providers for DMS Schema Conversion](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.SchemaConversion).

For more information about DMS Schema Conversion, read the step-by-step migration walkthroughs for [Oracle to PostgreSQL](https://docs.aws.amazon.com/dms/latest/sbs/schema-conversion-oracle-postgresql.html) and [SQL Server to MySQL](https://docs.aws.amazon.com/dms/latest/sbs/schema-conversion-sql-server-mysql.html) migrations.

The following video introduces the DMS Schema Conversion user interface and helps you get familiar with the core components of this service.

[![AWS Videos](http://img.youtube.com/vi/ki3r92Cfqwk/0.jpg)](http://www.youtube.com/watch?v=ki3r92Cfqwk)


**Topics**
+ [Complete prerequisites for working with DMS Schema Conversion](set-up.md)
+ [Create an instance profile for DMS Schema Conversion](getting-started-instance.md)
+ [Configure your data providers for DMS Schema Conversion](getting-started-data-providers.md)
+ [Create a migration project in DMS Schema Conversion](getting-started-project.md)
+ [Create an assessment report in DMS Schema Conversion](getting-started-assessment.md)
+ [Convert your source code in DMS Schema Conversion](getting-started-convert.md)
+ [Apply the converted code for DMS Schema Conversion](getting-started-apply.md)
+ [Clean up and troubleshoot to work with DMS Schema Conversion](getting-started-clean-up.md)

# Complete prerequisites for working with DMS Schema Conversion
<a name="set-up"></a>

To set up DMS Schema Conversion, complete the following tasks. Then you can set up an instance profile, add data providers, and create a migration project.

**Topics**
+ [Create a VPC based on Amazon VPC](#set-up-vpc)
+ [Create an Amazon S3 bucket](#set-up-s3-bucket)
+ [Store database credentials in AWS Secrets Manager](#set-up-secrets)
+ [Create IAM policies](#set-up-iam-policies)
+ [Create IAM roles](#set-up-iam-roles)

## Create a VPC based on Amazon VPC
<a name="set-up-vpc"></a>

In this step, you create a virtual private cloud (VPC) in your AWS account. This VPC is based on the Amazon Virtual Private Cloud (Amazon VPC) service and contains your AWS resources.

**To create a VPC for DMS Schema Conversion**

1. Sign in to the AWS Management Console and open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **Create VPC**.

1. On the **Create VPC** page, enter the following settings:
   + **Resources to create** – **VPC and more**
   + **Name tag auto-generation** – Choose **Auto-generate** and enter a name. For example, enter **sc-vpc**.
   + **IPv4 CIDR block** – **10.0.1.0/24**
   + **NAT gateways** – **In 1 AZ**
   + **VPC endpoints** – **None**

1. Keep the rest of the settings as they are, and then choose **Create VPC**.

1. Choose **Subnets**, and make a note of your public and private subnet IDs.

   To connect to your Amazon RDS databases, create a subnet group that includes public subnets.

   To connect to your on-premises databases, create a subnet group that includes private subnets. For more information, see [Create an instance profile for DMS Schema Conversion](getting-started-instance.md).

1. Choose **NAT gateways**. Choose your NAT gateway, and make a note of its **Elastic IP address**.

   Configure your network to make sure that AWS DMS can access your source on-premises database from this NAT gateway's public IP address. For more information, see [Using an internet connection to a VPC](instance-profiles-network.md#instance-profiles-network-internet).

Use this VPC when you create your instance profile and target databases on Amazon RDS.
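As an aside, the **10.0.1.0/24** CIDR block from the procedure above can be inspected with Python's standard `ipaddress` module. This sketch is illustrative only (the /26 split into four subnets is an assumption, for example to cover public and private subnets across two Availability Zones); it shows the address budget you are working with:

```python
import ipaddress

# The IPv4 CIDR block entered in the Create VPC procedure above.
vpc_cidr = ipaddress.ip_network("10.0.1.0/24")
print(vpc_cidr.num_addresses)  # 256 total addresses in the block

# Hypothetical split into four /26 subnets (64 addresses each), e.g. one
# public and one private subnet in each of two Availability Zones.
subnets = list(vpc_cidr.subnets(new_prefix=26))
print([str(s) for s in subnets])
# ['10.0.1.0/26', '10.0.1.64/26', '10.0.1.128/26', '10.0.1.192/26']
```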

## Create an Amazon S3 bucket
<a name="set-up-s3-bucket"></a>

To store information from your migration project, create an Amazon S3 bucket. DMS Schema Conversion uses this Amazon S3 bucket to save items such as assessment reports, converted SQL code, information about database schema objects, and so on.

**To create an Amazon S3 bucket for DMS Schema Conversion**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose **Create bucket**.

1. On the **Create bucket** page, enter a globally unique name for your S3 bucket. For example, enter **sc-s3-bucket**.

1. For **AWS Region**, choose your Region.

1. For **Bucket Versioning**, choose **Enable**.

1. Keep the rest of the settings as they are, and then choose **Create bucket**.

**Note**  
DMS Schema Conversion works only with S3 buckets that use server-side encryption with Amazon S3 managed keys (SSE-S3). Other encryption configurations, including server-side encryption with AWS KMS keys (SSE-KMS), customer-provided keys (SSE-C), and client-side encryption, aren't currently supported and prevent DMS Schema Conversion from accessing the S3 bucket. If your bucket uses a different encryption method, create a separate S3 bucket with SSE-S3 for use with DMS Schema Conversion.
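To check an existing bucket before using it with DMS Schema Conversion, you can inspect its encryption configuration. The following sketch assumes the response shape of the S3 `GetBucketEncryption` API (for example, as returned by `boto3`'s `get_bucket_encryption`); the helper name is illustrative, and SSE-S3 corresponds to the `AES256` algorithm value:

```python
def uses_sse_s3(encryption_config: dict) -> bool:
    """Return True if a GetBucketEncryption-shaped response indicates SSE-S3."""
    rules = encryption_config.get(
        "ServerSideEncryptionConfiguration", {}
    ).get("Rules", [])
    return any(
        rule.get("ApplyServerSideEncryptionByDefault", {}).get("SSEAlgorithm")
        == "AES256"
        for rule in rules
    )

# SSE-S3 bucket: supported by DMS Schema Conversion.
sse_s3 = {"ServerSideEncryptionConfiguration": {"Rules": [
    {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}}

# SSE-KMS bucket: not supported.
sse_kms = {"ServerSideEncryptionConfiguration": {"Rules": [
    {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]}}

print(uses_sse_s3(sse_s3))   # True
print(uses_sse_s3(sse_kms))  # False
```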

## Store database credentials in AWS Secrets Manager
<a name="set-up-secrets"></a>

Store your source and target database credentials in AWS Secrets Manager. Make sure that you replicate these secrets to your AWS Region. DMS Schema Conversion uses these secrets to connect to your databases in the migration project.

**To store your database credentials in AWS Secrets Manager**

1. Sign in to the AWS Management Console and open the AWS Secrets Manager console at [https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/).

1. Choose **Store a new secret**.

1. On the **Choose secret type** page, for **Secret type**, choose the type of database credentials to store:
   + **Credentials for Amazon RDS database** – Choose this option to store credentials for your Amazon RDS database. For **Credentials**, enter the credentials for your database. For **Database**, choose your database.
   + **Credentials for other database** – Choose this option to store credentials for your source Oracle or SQL Server databases. For **Credentials**, enter the credentials for your database.
   + **Other type of secret** – Choose this option to store only the user name and password to connect to your database. Choose **Add row** to add two key-value pairs. Make sure that you use **username** and **password** for key names. For values related to these keys, enter the credentials for your database.

1. For **Encryption key**, choose the AWS KMS key that Secrets Manager uses to encrypt the secret value. Choose **Next**.

1. On the **Configure secret** page, enter a descriptive **Secret name**. For example, enter **sc-source-secret** or **sc-target-secret**. 

1. Choose **Replicate secret** and then for **AWS Region** choose your Region. Choose **Next**.

1. On the **Configure rotation** page, choose **Next**.

1. On the **Review** page, review your secret details, and then choose **Store**.

To store credentials for your source and target databases, repeat these steps.
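For the **Other type of secret** option above, the secret value is a JSON object whose keys must be exactly `username` and `password`. A minimal sketch of building that secret string (the helper name and sample values are illustrative, not part of any AWS API):

```python
import json

def build_secret_string(username: str, password: str) -> str:
    """Build the JSON secret value with the key names DMS Schema Conversion expects."""
    return json.dumps({"username": username, "password": password})

secret = build_secret_string("admin", "example-password")
parsed = json.loads(secret)
print(sorted(parsed))  # ['password', 'username']
```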

## Create IAM policies
<a name="set-up-iam-policies"></a>

**To create an IAM policy for DMS Schema Conversion to access Amazon S3**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Select service** page, choose **Amazon S3** from the list.

1. Under **Actions allowed**, choose `PutObject`, `GetObject`, `GetObjectVersion`, `GetBucketVersioning`, `GetBucketLocation`, and `ListBucket`.

1. Under **Resources**, specify the ARN of the bucket that you created in the previous section. Choose **Next**.

1. On the **Review and create** page, enter a descriptive name. For example: `sc-s3-policy`. Then, choose **Create policy**.
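The console steps above produce a policy document along the lines of the following sketch. The builder function is illustrative, and the object-level `/*` resource entry is an assumption (object actions such as `GetObject` and `PutObject` apply to objects rather than to the bucket itself); `sc-s3-bucket` is the example bucket name from the prerequisites:

```python
import json

def build_s3_policy(bucket: str) -> dict:
    """Sketch of an IAM policy granting the six S3 actions on one bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject", "s3:GetObject", "s3:GetObjectVersion",
                    "s3:GetBucketVersioning", "s3:GetBucketLocation",
                    "s3:ListBucket",
                ],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",     # bucket-level actions
                    f"arn:aws:s3:::{bucket}/*",   # object-level actions
                ],
            }
        ],
    }

print(json.dumps(build_s3_policy("sc-s3-bucket"), indent=4))
```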

**To create an IAM policy for DMS Schema Conversion to access AWS Secrets Manager**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Select service** page, choose **Secrets Manager** from the list.

1. Choose **Next**. The **Add permissions** page opens.

1. Under **Actions allowed**, choose `GetSecretValue` and `DescribeSecret`.

1. On the **Review and create** page, enter a descriptive name. For example: `sc-secrets-manager-policy`. Then, choose **Create policy**.

## Create IAM roles
<a name="set-up-iam-roles"></a>

Create AWS Identity and Access Management (IAM) roles to use in your migration project. DMS Schema Conversion uses these IAM roles to access your Amazon S3 bucket and database credentials stored in AWS Secrets Manager.

**To create an IAM role that provides access to your Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. On the **Select trusted entity** page, choose **AWS service**. Choose **DMS**.

1. Choose **Next**. The **Add permissions** page opens.

1. For **Filter policies**, enter **S3**. Choose the **sc-s3-policy** policy that you created in the previous section.

1. Choose **Next**. The **Name, review, and create** page opens.

1. For **Role name**, enter a descriptive name. For example, enter **sc-s3-role**. Choose **Create role**.

1. On the **Roles** page, enter **sc-s3-role** for **Role name**. Choose **sc-s3-role**.

1. On the **sc-s3-role** page, choose the **Trust relationships** tab. Choose **Edit trust policy**. 

1. On the **Edit trust policy** page, edit the trust relationships for the role to use your AWS DMS regional service principal as the trusted entity. This service principal has the following format:

   ```
   dms.region-name.amazonaws.com
   ```

   Replace `region-name` with the name of your Region, such as `us-east-1`. The following example shows the principal for the `us-east-1` Region:

   ```
   dms.us-east-1.amazonaws.com
   ```

   The following code example shows a trust policy for accessing AWS DMS schema conversion:

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "dms.us-east-1.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```
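The trust policies in this section vary only in the service principal. A small sketch (the builder function is illustrative, not an AWS API) that produces the same document for any principal string:

```python
import json

def build_trust_policy(service_principal: str) -> dict:
    """Sketch of a trust policy allowing one service principal to assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": service_principal},
                "Action": "sts:AssumeRole",
            }
        ],
    }

# Regional principal string taken from the example above.
principal = "dms.us-east-1.amazonaws.com"
print(json.dumps(build_trust_policy(principal), indent=4))
```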

**To create an IAM role that provides access to AWS Secrets Manager**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. On the **Select trusted entity** page, choose **AWS service**. Choose **DMS**.

1. Choose **Next**. The **Add permissions** page opens.

1. For **Filter policies**, enter **secrets**. Choose the **sc-secrets-manager-policy** policy that you created in the previous section.

1. Choose **Next**. The **Name, review, and create** page opens.

1. For **Role name**, enter a descriptive name. For example, enter **sc-secrets-manager-role**. Choose **Create role**.

1. On the **Roles** page, enter **sc-secrets-manager-role** for **Role name**. Choose **sc-secrets-manager-role**.

1. On the **sc-secrets-manager-role** page, choose the **Trust relationships** tab. Choose **Edit trust policy**.

1. On the **Edit trust policy** page, edit the trust relationships for the role to use `schema-conversion.dms.amazonaws.com` and your AWS DMS regional service principal as the trusted entities. This AWS DMS regional service principal has the following format:

   ```
   dms.region-name.amazonaws.com
   ```

   Replace `region-name` with the name of your Region, such as `us-east-1`. The following example shows the principal for the `us-east-1` Region:

   ```
   dms.us-east-1.amazonaws.com
   ```

   The following code example shows a trust policy for accessing AWS DMS schema conversion:

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "dms.us-east-1.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

**To create the `dms-vpc-role` IAM role for use with the AWS CLI or AWS DMS API**

1. Create a JSON file with the following trust policy. Name the JSON file `dmsAssumeRolePolicyDocument.json`.

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "dms.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

   Then create the role by using the following AWS CLI command:

   ```
   aws iam create-role --role-name dms-vpc-role --assume-role-policy-document file://dmsAssumeRolePolicyDocument.json                    
   ```

1. Attach the `AmazonDMSVPCManagementRole` policy to `dms-vpc-role` using the following command:

   ```
   aws iam attach-role-policy --role-name dms-vpc-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole   
   ```

**To create the `dms-cloudwatch-logs-role` IAM role for use with the AWS CLI or AWS DMS API**

1. Create a JSON file with the following trust policy. Name the JSON file `dmsAssumeRolePolicyDocument2.json`.

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "dms.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

   Then create the role by using the following AWS CLI command:

   ```
   aws iam create-role --role-name dms-cloudwatch-logs-role --assume-role-policy-document file://dmsAssumeRolePolicyDocument2.json
   ```

1. Attach the `AmazonDMSCloudWatchLogsRole` policy to `dms-cloudwatch-logs-role` using the following command:

   ```
   aws iam attach-role-policy --role-name dms-cloudwatch-logs-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole
   ```

# Create an instance profile for DMS Schema Conversion
<a name="getting-started-instance"></a>

Before you create an instance profile, configure a subnet group for your instance profile. For more information about creating a subnet group for your AWS DMS migration project, see [Creating a subnet group](subnet-group.md).

You can create an instance profile as described in the following procedure. In this instance profile, you specify network and security settings for your DMS Schema Conversion project.

**To create an instance profile**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. In the navigation pane, choose **Instance profiles**, and then choose **Create instance profile**.

1. For **Name**, enter a unique name for your instance profile. For example, enter **sc-instance**.

1. For **Network type**, choose **IPv4** to create an instance profile that supports only IPv4 addressing. To create an instance profile that supports IPv4 and IPv6 addressing, choose **Dual-stack mode**.

1. For **Virtual private cloud (VPC)**, choose the VPC that you created in the prerequisites step.

1. For **Subnet group**, choose the subnet group for your instance profile. To connect to Amazon RDS databases, use a subnet group that includes public subnets. To connect to on-premises databases, use a subnet group that includes private subnets.

1. Choose **Create instance profile**.

To create a migration project, use this instance profile.

# Configure your data providers for DMS Schema Conversion
<a name="getting-started-data-providers"></a>

Next, you create data providers that describe your source and target databases. For each data provider, you specify a data store type and location information. You don't store your database credentials in a data provider.

**To create a data provider for an on-premises database**

1. Sign in to the AWS Management Console, and open the AWS DMS console.

1. In the navigation pane, choose **Data providers**, and then choose **Create data provider**.

1. For **Name**, enter a unique name for your source data provider. For example, enter `sc-dp`.

1. For **Purpose**, select **Schema Conversion**.

1. For **Engine type**, choose the type of database engine for your data provider.

1. Turn on **Virtual mode** if you want to use schema conversion without connecting to a target database. If **Virtual mode** is turned on, connection information is preset automatically using defaults. Note that a virtual data provider can be used only as a target in a migration project or in schema conversion. For more information, see [Virtual data provider](virtual-data-provider.md).

1. If **Virtual mode** is off, select **Engine configuration**, and then select **Enter manually**. Provide your connection information for the source database. The connection parameters depend on your source database engine. For more information, see [Creating data providers](data-providers-create.md).

1. For **Secure Socket Layer (SSL) mode**, choose the type of SSL enforcement.

1. Choose **Create data provider**.

**To create a data provider for an Amazon RDS database**

1. Sign in to the AWS Management Console, and open the AWS DMS console.

1. In the navigation pane, choose **Data providers**, and then choose **Create data provider**.

1. For **Name**, enter a unique name for your source data provider. For example, enter `sc-dp`.

1. For **Purpose**, select **Schema Conversion**.

1. For **Engine type**, choose the type of database engine for your data provider.

1. Turn on **Virtual mode** if you intend to use this data provider for schema conversion without connecting to a target database. For more information, see [Virtual data provider](virtual-data-provider.md).

1. For **Engine configuration**, select **Choose RDS database instance from list**.

1. For **Database from RDS**, choose **Browse**, and choose your database. DMS Schema Conversion automatically retrieves the information about the engine type, server name, and port.

1. For **Database name**, enter the name of your database.

1. For **Secure Socket Layer (SSL) mode**, choose the type of SSL enforcement.

1. Choose **Create data provider**.

# Create a migration project in DMS Schema Conversion
<a name="getting-started-project"></a>

Now you can create a migration project. In the migration project, you specify your source and target data providers, and your instance profile.

**To create a migration project**

1. Choose **Migration projects**, and then choose **Create migration project**.

1. For **Name**, enter a unique name for your migration project. For example, enter **sc-project**.

1. For **Instance profile**, choose **sc-instance**.

1. For **Source**, choose **Browse**, and then choose **sc-source**.

1. For **Secret ID**, choose **sc-source-secret**.

1. For **IAM role**, choose **sc-secrets-manager-role**.

1. For **Target**, choose **Browse**, and then choose **sc-target**.

1. For **Secret ID**, choose **sc-target-secret**.

1. For **IAM role**, choose **sc-secrets-manager-role**.

1. Choose **Create migration project**.

# Create an assessment report in DMS Schema Conversion
<a name="getting-started-assessment"></a>

To assess the complexity of the migration, create the database migration assessment report. This report includes the list of all database objects that DMS Schema Conversion can't convert automatically.

**To create an assessment report**

1. Choose **Migration projects**, and then choose **sc-project**.

1. Choose **Schema conversion**, and then choose **Launch schema conversion**.

1. In the source database pane, choose the database schema to assess. Also, select the check box for the name of this schema.

1. In the source database pane, choose **Assess** in the **Actions** menu. The **Assess** dialog box appears. 

1. Choose **Assess** in the dialog box to confirm your choice.

   The **Summary** tab shows the number of items that DMS Schema Conversion can automatically convert for database storage objects and database code objects.

1. Choose **Action items** to see the list of all database objects that DMS Schema Conversion can't convert automatically. Review the recommended actions for each item.

1. To save a copy of your assessment report, choose **Export results**. Next, choose one of the following formats: **CSV** or **PDF**. The **Export** dialog box appears.

1. Choose **Export** to confirm your choice.

1. Choose **S3 bucket**. The Amazon S3 console opens.

1. Choose **Download** to save your assessment report.

# Convert your source code in DMS Schema Conversion
<a name="getting-started-convert"></a>

You can convert your source database schema using the following procedure. Then you can save the converted code as SQL scripts in a text file.

**To convert your database schema**

1. In the source database pane, choose the database schema to convert. Also, select the check box for the name of this schema.

1. In the source database pane, choose **Convert** in the **Actions** menu. The **Convert** dialog box appears.

1. Choose **Convert** in the dialog box to confirm your choice.

1. Choose a database object in the source database pane. DMS Schema Conversion displays the source code and the converted code for this object. You can edit the converted SQL code for a database object using the Edit SQL feature. For more information, see [Editing and saving your converted SQL code](schema-conversion-convert.md#schema-conversion-convert-editsql).

1. In the target database pane, choose the converted database schema. Also, select the check box for the name of this schema.

1. For **Actions**, choose **Save as SQL**. The **Save** dialog box appears.

1. Choose **Save as SQL** to confirm your choice.

1. Choose **S3 bucket**. The Amazon S3 console opens. 

1. Choose **Download** to save your SQL scripts.

# Apply the converted code for DMS Schema Conversion
<a name="getting-started-apply"></a>

DMS Schema Conversion doesn't immediately apply the converted code to your target database. To update your target database, you can use the SQL scripts that you created in the previous step. Alternatively, use the following procedure to apply the converted code from DMS Schema Conversion.

**To apply the converted code**

1. In the target database pane, choose the converted database schema. Also, select the check box for the name of this schema.

1. For **Actions**, choose **Apply changes**. The **Apply changes** dialog box appears.

1. Choose **Apply** to confirm your choice.

# Clean up and troubleshoot to work with DMS Schema Conversion
<a name="getting-started-clean-up"></a>

You can use Amazon CloudWatch to review or share your DMS Schema Conversion logs.

**To review DMS Schema Conversion logs**

1. Sign in to the AWS Management Console and open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. Choose **Logs**, **Log groups**. 

   The name of your DMS Schema Conversion log group starts with `dms-tasks-sct`. You can sort the log groups by **Creation time** to find your DMS Schema Conversion log group.

   Also, the name of your log group includes the Amazon Resource Name (ARN) of your migration project. You can see the ARN of your project on the **Migration projects** page in DMS Schema Conversion. Make sure that you choose **ARN** in **Preferences**.

1. Choose the name of your log group, and then choose the name of your log stream.

1. For **Actions**, choose **Export results** to save your DMS Schema Conversion log.

After you've finished your schema conversion in DMS Schema Conversion, clean up your resources.

**To clean up your DMS Schema Conversion resources**

1. Sign in to the AWS Management Console and open the AWS DMS console.

1. In the navigation pane, choose **Migration projects**.

   1. Choose **sc-project**.

   1. Choose **Schema conversion**, and then choose **Close schema conversion**.

   1. Choose **Delete** and confirm your choice.

1. In the navigation pane, choose **Instance profiles**.

   1. Choose **sc-instance**.

   1. Choose **Delete** and confirm your choice.

1. In the navigation pane, choose **Data providers**.

   1. Select **sc-source** and **sc-target**.

   1. Choose **Delete** and confirm your choice.

Also, make sure that you clean up other AWS resources that you created, such as your Amazon S3 bucket, database secrets in AWS Secrets Manager, IAM roles, and virtual private cloud (VPC).

# Setting up a network for DMS Schema Conversion
<a name="instance-profiles-network"></a>

DMS Schema Conversion creates a schema conversion instance in a virtual private cloud (VPC) based on the Amazon VPC service. When you create your instance profile, you specify the VPC to use. You can use your default VPC for your account and AWS Region, or you can create a new VPC.

You can use different network configurations to set up interaction for your source and target databases with DMS Schema Conversion. These configurations depend on the location of your source data provider and your network settings. The following topics provide descriptions of common network configurations.

**Topics**
+ [Using a single VPC for source and target data providers](#instance-profiles-network-one-vpc)
+ [Using multiple VPCs for source and target data providers](#instance-profiles-network-multiple-vpc)
+ [Using Direct Connect or a VPN to configure a network to a VPC](#instance-profiles-network-vpn)
+ [Using an internet connection to a VPC](#instance-profiles-network-internet)

## Using a single VPC for source and target data providers
<a name="instance-profiles-network-one-vpc"></a>

The simplest network configuration for DMS Schema Conversion is a single VPC configuration. Here, your source data provider, instance profile, and the target data provider are all located in the same VPC. You can use this configuration to convert your source database on an Amazon EC2 instance.

To use this configuration, make sure that the VPC security group used by the instance profile has access to the data providers. For example, you can allow either a VPC Classless Inter-Domain Routing (CIDR) range or the Elastic IP address for your Network Address Translation (NAT) gateway.

## Using multiple VPCs for source and target data providers
<a name="instance-profiles-network-multiple-vpc"></a>

If your source and target data providers are in different VPCs, you can create your instance profile in one of the VPCs. You can then link these two VPCs by using VPC peering. You can use this configuration to convert your source database on an Amazon EC2 instance.

A *VPC peering connection* is a networking connection between two VPCs that makes it possible to route traffic between them using the private IP addresses of each VPC, as if they were in the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. For more information about VPC peering, see [VPC peering](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html) in the *Amazon VPC User Guide*.

To implement VPC peering, follow the instructions in [Work with VPC peering connections](https://docs.aws.amazon.com/vpc/latest/peering/working-with-vpc-peering.html) in the *Amazon VPC Peering Guide*. Make sure that the route table of each VPC contains the CIDR block of the other. For example, suppose that VPC A uses the CIDR block 10.0.0.0/16 and VPC B uses the CIDR block 172.31.0.0/16. In this case, the route table of VPC A must contain 172.31.0.0/16, and the route table of VPC B must contain 10.0.0.0/16. For more detailed information, see [Update your route tables for a VPC peering connection](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html) in the *Amazon VPC Peering Guide*. 

## Using Direct Connect or a VPN to configure a network to a VPC
<a name="instance-profiles-network-vpn"></a>

Remote networks can connect to a VPC using several options, such as Direct Connect or a software or hardware VPN connection. You can use these options to integrate existing on-site services by extending an internal network into the AWS Cloud. You might integrate on-site services such as monitoring, authentication, security, data, or other systems. By using this type of network extension, you can seamlessly connect on-site services to resources hosted by AWS, such as a VPC. You can use this configuration to convert your source on-premises database.

In this configuration, the VPC route table must include a routing rule that sends traffic destined for a VPC CIDR range or specific IP address to a host. This host must be able to bridge traffic from the VPC into the on-premises VPN. In this case, the NAT host includes its own security group settings. These settings must allow traffic from your VPC CIDR range or security group into the NAT instance. For more information, see [Create a Site-to-Site VPN connection](https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html#vpn-create-vpn-connection) in the *AWS Site-to-Site VPN User Guide*.

## Using an internet connection to a VPC
<a name="instance-profiles-network-internet"></a>

If you don't use a VPN or Direct Connect to connect to AWS resources, you can use an internet connection. This configuration involves a private subnet in a VPC with an internet gateway. The VPC contains the target data provider and the instance profile. You can use this configuration to convert your source on-premises database.

To add an internet gateway to your VPC, see [Attaching an internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html#Add_IGW_Attach_Gateway) in the *Amazon VPC User Guide*.

The VPC route table must include routing rules that send traffic *not* destined for the VPC to the internet gateway by default. In this configuration, the connection to the data provider appears to come from the public IP address of your NAT gateway. For more information, see [VPC Route Tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) in the *Amazon VPC User Guide*.

# Creating source data providers in DMS Schema Conversion
<a name="data-providers-source"></a>

You can use a Microsoft SQL Server, Oracle, PostgreSQL, MySQL, IBM Db2 for LUW, IBM Db2 for z/OS, or SAP ASE (Sybase ASE) database as a source data provider in migration projects for DMS Schema Conversion.

Make sure that you configure the network to permit interaction between your source data provider and DMS Schema Conversion. For more information, see [Setting up a network for DMS Schema Conversion](instance-profiles-network.md). 

**Topics**
+ [Using a Microsoft SQL Server database as a source in DMS Schema Conversion](data-providers-sql-server.md)
+ [Using an Oracle database as a source in DMS Schema Conversion](data-providers-oracle.md)
+ [Using an Oracle Data Warehouse database as a source in DMS Schema Conversion](data-providers-oracle-dw.md)
+ [Using a PostgreSQL database as a source in DMS Schema Conversion](sc-data-providers-postgresql.md)
+ [Using a MySQL database as a source in DMS Schema Conversion](sc-data-providers-mysql.md)
+ [Using an IBM Db2 for Linux, UNIX and Windows database as a source in DMS Schema Conversion](sc-data-providers-db2luw.md)
+ [Using an IBM Db2 for z/OS database as a source in DMS Schema Conversion](sc-data-providers-db2zos.md)
+ [Using a SAP ASE (Sybase ASE) database as a source in AWS DMS Schema Conversion](dm-data-providers-source-sybase-ASE.md)

# Using a Microsoft SQL Server database as a source in DMS Schema Conversion
<a name="data-providers-sql-server"></a>

You can use SQL Server databases as a migration source in DMS Schema Conversion.

You can use DMS Schema Conversion to convert database code objects from SQL Server to the following targets:
+ Aurora MySQL
+ Aurora PostgreSQL
+ RDS for MySQL
+ RDS for PostgreSQL

For information about the supported SQL Server database versions, see [Source data providers for DMS Schema Conversion](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.SchemaConversion).

For more information about using DMS Schema Conversion with a source SQL Server database, see the [SQL Server to MySQL migration step-by-step walkthrough](https://docs.aws.amazon.com/dms/latest/sbs/schema-conversion-sql-server-mysql.html).

## Privileges for Microsoft SQL Server as a source
<a name="data-providers-sql-server-permissions"></a>

View the following list of privileges required for Microsoft SQL Server as a source: 
+ VIEW DEFINITION
+ VIEW DATABASE STATE

The `VIEW DEFINITION` privilege lets users who have public access see object definitions. DMS Schema Conversion uses the `VIEW DATABASE STATE` privilege to check the features of the SQL Server Enterprise edition.

Repeat the grant for each database whose schema you are converting.

In addition, grant the following privileges on the `master` database:
+ VIEW SERVER STATE
+ VIEW ANY DEFINITION

DMS Schema Conversion uses the `VIEW SERVER STATE` privilege to collect server settings and configuration. Make sure that you grant the `VIEW ANY DEFINITION` privilege to view data providers.
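
For illustration, you can script the preceding grants as follows. The user name `sc_user` and the database name `your_database` are hypothetical placeholders; substitute your own names.

```
USE master;
GRANT VIEW SERVER STATE TO [sc_user];
GRANT VIEW ANY DEFINITION TO [sc_user];

-- Repeat for each database whose schema you are converting
USE your_database;
GRANT VIEW DEFINITION TO [sc_user];
GRANT VIEW DATABASE STATE TO [sc_user];
```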

To read information about Microsoft Analysis Services, run the following command on the `master` database.

```
EXEC master..sp_addsrvrolemember @loginame = N'<user_name>', @rolename = N'sysadmin'
```

In the preceding example, replace the `<user_name>` placeholder with the name of the user to whom you previously granted the required privileges.

To read information about SQL Server Agent, add your user to the SQLAgentUser role. Run the following command on the `msdb` database.

```
EXEC sp_addrolemember <SQLAgentRole>, <user_name>;
```

In the preceding example, replace the `<SQLAgentRole>` placeholder with the name of the SQL Server Agent role. Then replace the `<user_name>` placeholder with the name of the user to whom you previously granted the required privileges. For more information, see [Adding a user to the SQLAgentUser role](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.Agent.html#SQLServerAgent.AddUser) in the *Amazon RDS User Guide*.

To detect log shipping, grant the `SELECT on dbo.log_shipping_primary_databases` privilege on the `msdb` database.

To use the notification approach of the data definition language (DDL) replication, grant the `RECEIVE ON <schema_name>.<queue_name>` privilege on your source databases. In this example, replace the `<schema_name>` placeholder with the schema name of your database. Then, replace the `<queue_name>` placeholder with the name of a queue table.
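
The following sketch shows the log shipping and DDL notification grants together. Here, `sc_user`, `your_database`, `your_schema`, and `your_queue` are hypothetical placeholder names.

```
USE msdb;
GRANT SELECT ON dbo.log_shipping_primary_databases TO [sc_user];

-- On each source database, for DDL replication notifications
USE your_database;
GRANT RECEIVE ON your_schema.your_queue TO [sc_user];
```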

# Using an Oracle database as a source in DMS Schema Conversion
<a name="data-providers-oracle"></a>

You can use Oracle databases as a migration source in DMS Schema Conversion.

To connect to your Oracle database, use the Oracle System ID (SID). To find the Oracle SID, submit the following query to your Oracle database:

```
SELECT sys_context('userenv','instance_name') AS SID FROM dual;
```

You can use DMS Schema Conversion to convert database code objects from Oracle Database to the following targets:
+ Aurora MySQL
+ Aurora PostgreSQL
+ RDS for MySQL
+ RDS for PostgreSQL

For information about the supported Oracle database versions, see [Source data providers for DMS Schema Conversion](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.SchemaConversion).

For more information about using DMS Schema Conversion with a source Oracle database, see the [Oracle to PostgreSQL migration step-by-step walkthrough](https://docs.aws.amazon.com/dms/latest/sbs/schema-conversion-oracle-postgresql.html).

## Privileges for Oracle as a source
<a name="data-providers-oracle-permissions"></a>

The following privileges are required for Oracle as a source: 
+ CONNECT 
+ SELECT_CATALOG_ROLE 
+ SELECT ANY DICTIONARY 
+ SELECT ON SYS.ARGUMENT$
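
For example, you can create a user and grant these privileges as follows. The names `sc_user` and `your_password` are hypothetical placeholders.

```
CREATE USER sc_user IDENTIFIED BY your_password;
GRANT CONNECT TO sc_user;
GRANT SELECT_CATALOG_ROLE TO sc_user;
GRANT SELECT ANY DICTIONARY TO sc_user;
GRANT SELECT ON SYS.ARGUMENT$ TO sc_user;
```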

# Using an Oracle Data Warehouse database as a source in DMS Schema Conversion
<a name="data-providers-oracle-dw"></a>

You can use Oracle Data Warehouse databases as a migration source in DMS Schema Conversion to convert database code objects and application code to Amazon Redshift.

For information about supported Oracle database versions, see [Source data providers for DMS Schema Conversion](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.SchemaConversion). For more information about using DMS Schema Conversion with a source Oracle database, see the [Oracle to PostgreSQL migration step-by-step walkthrough](https://docs.aws.amazon.com/dms/latest/sbs/schema-conversion-oracle-postgresql.html). 

## Privileges for using an Oracle Data Warehouse database as a source
<a name="data-providers-oracle-dw-privileges"></a>

The following privileges are required for Oracle Data Warehouse as a source:
+ CONNECT
+ SELECT_CATALOG_ROLE
+ SELECT ANY DICTIONARY

## Oracle Data Warehouse to Amazon Redshift conversion settings
<a name="data-providers-oracle-dw-settings"></a>

For information about editing DMS Schema Conversion settings, see [Specifying schema conversion settings for migration projects](schema-conversion-settings.md). 

Oracle Data Warehouse to Amazon Redshift conversion settings include the following:
+ **Add comments in the converted code for the action items of selected severity and higher**: This setting limits the number of comments with action items in the converted code. DMS adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ **The maximum number of tables for the target Amazon Redshift cluster**: This setting sets the maximum number of tables that DMS can apply to your target Amazon Redshift cluster. Amazon Redshift has quotas that limit the number of tables for different cluster node types. This setting supports the following values:
  + **Auto**: DMS determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type.
  + **Set a value**: Set the number of tables manually.

  DMS converts all your source tables, even if the number of tables is more than your Amazon Redshift cluster can store. DMS stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for the tables when you apply the converted code, DMS displays a warning message. Also, DMS applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.

  For information about Amazon Redshift table quotas, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html).
+ **Use the UNION ALL view**: This setting lets you set the maximum number of target tables that DMS can create for a single source table.

  Amazon Redshift doesn't support table partitioning. To emulate table partitioning and make queries run faster, DMS can migrate each partition of your source table to a separate table in Amazon Redshift. Then, DMS creates a view that includes data from all of the target tables it creates.

  DMS automatically determines the number of partitions in your source table. Depending on the type of source table partitioning, this number can exceed the quota for the tables that you can apply to your Amazon Redshift cluster. To avoid reaching this quota, enter the maximum number of target tables that DMS can create for partitions of a single source table. The default option is 368 tables, which represents a partition for 366 days of a year, plus two tables for NO RANGE and UNKNOWN partitions.
+ **Datetime format elements that you use in the Oracle code are similar to datetime format strings in Amazon Redshift**: Use this setting to convert data type formatting functions such as `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` with datetime format elements that Amazon Redshift doesn't support. By default, DMS uses extension pack functions to emulate these unsupported format elements in the converted code.

  The datetime format model in Oracle includes more elements than the datetime format strings in Amazon Redshift. When your source code includes only datetime format elements that Amazon Redshift supports, set this value to avoid extension pack functions in the converted code. Avoiding the extension functions makes the converted code run faster.
+ **Numeric format elements that you use in the Oracle code are similar to numeric format strings in Amazon Redshift**: Use this setting to convert numeric data type formatting functions that Amazon Redshift doesn't support. By default, DMS uses extension pack functions to emulate these unsupported format elements in the converted code. 

  The numeric format model in Oracle includes more elements than the numeric format strings in Amazon Redshift. When your source code includes only numeric format elements that Amazon Redshift supports, set this value to avoid extension pack functions in the converted code. Avoiding the extension functions makes the converted code run faster.
+ **Use the NVL function to emulate the behavior of Oracle LEAD and LAG functions**: If your source code doesn't use the default values for offset in the `LEAD` and `LAG` functions, DMS can emulate these functions with the `NVL` function. By default, DMS raises an action item for each use of the `LEAD` and `LAG` functions. Emulating these functions using `NVL` makes the converted code run faster.
+ **Emulate the behavior of primary and unique keys**: Turn on this setting to have DMS emulate the behavior of primary and unique key constraints on the target Amazon Redshift cluster. Amazon Redshift doesn't enforce primary and unique key constraints, and uses them for informational purposes only. If your source code relies on primary or unique key constraints, turn on this setting so that DMS emulates their behavior.
+ **Use compression encoding**: Turn on this setting to apply compression encoding to Amazon Redshift table columns. DMS assigns compression encoding automatically using the default Amazon Redshift algorithm. For information about compression encoding, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  Amazon Redshift doesn't apply compression by default to columns that are defined as sort and distribution keys. To apply compression to these columns, set **Use compression encoding for KEY columns**. You can only select this option when you set **Use compression encoding**.

# Using a PostgreSQL database as a source in DMS Schema Conversion
<a name="sc-data-providers-postgresql"></a>

You can use PostgreSQL databases as a migration source in DMS Schema Conversion. 

You can use DMS Schema Conversion to convert database code objects from a PostgreSQL database to the following targets:
+ MySQL
+ Aurora MySQL

The privileges required for PostgreSQL as a source are as follows: 
+ CONNECT ON DATABASE <database_name>
+ USAGE ON SCHEMA <schema_name>
+ SELECT ON ALL TABLES IN SCHEMA <schema_name>
+ SELECT ON ALL SEQUENCES IN SCHEMA <schema_name>
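
For example, you can create a user and grant these privileges as follows. The names `sc_user`, `your_password`, `source_db`, and `source_schema` are hypothetical placeholders.

```
CREATE USER sc_user WITH PASSWORD 'your_password';
GRANT CONNECT ON DATABASE source_db TO sc_user;
GRANT USAGE ON SCHEMA source_schema TO sc_user;
GRANT SELECT ON ALL TABLES IN SCHEMA source_schema TO sc_user;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA source_schema TO sc_user;
```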

# Using a MySQL database as a source in DMS Schema Conversion
<a name="sc-data-providers-mysql"></a>

You can use MySQL databases as a migration source in DMS Schema Conversion.

You can use DMS Schema Conversion to convert database code objects from a MySQL database to the following targets: 
+ PostgreSQL
+ Aurora PostgreSQL

The privileges required for MySQL as a source are as follows:
+ `SELECT ON *.*`
+ `SHOW VIEW ON *.*`
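
For example, you can create a user and grant these privileges as follows. The names `sc_user` and `your_password` are hypothetical placeholders.

```
CREATE USER 'sc_user' IDENTIFIED BY 'your_password';
GRANT SELECT ON *.* TO 'sc_user';
GRANT SHOW VIEW ON *.* TO 'sc_user';
```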

## MySQL to PostgreSQL conversion settings
<a name="sc-data-providers-mysql-settings"></a>

For information about editing DMS Schema Conversion settings, see [Specifying schema conversion settings for migration projects](schema-conversion-settings.md). 

MySQL to PostgreSQL conversion settings include the following:
+ **Comments in converted SQL code**: Use this setting to add comments in the converted code for the action items of the selected severity and higher. 

  Valid values:
  + **Errors only**
  + **Errors and warnings**
  + **All messages**

# Using an IBM Db2 for Linux, UNIX and Windows database as a source in DMS Schema Conversion
<a name="sc-data-providers-db2luw"></a>

You can use IBM Db2 for Linux, UNIX and Windows databases as a migration source in DMS Schema Conversion.

You can use DMS Schema Conversion to convert database code objects from a Db2 for Linux, UNIX and Windows database to the following targets: 
+ Aurora PostgreSQL
+ Amazon RDS for PostgreSQL

For information about the supported IBM Db2 for Linux, UNIX and Windows database versions, see [Source data providers for DMS Schema Conversion](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.SchemaConversion).

## Prerequisites for IBM Db2 for Linux, UNIX and Windows as a source database
<a name="sc-data-providers-db2luw-prereq"></a>

You can use the following code example to check the version of your source IBM Db2 for Linux, UNIX and Windows database: 

```
SELECT SERVICE_LEVEL FROM SYSIBMADM.ENV_INST_INFO;
```

## Privileges for IBM Db2 for Linux, UNIX and Windows as a source database
<a name="sc-data-providers-db2luw-privileges"></a>

The privileges needed to connect to a Db2 for Linux, UNIX and Windows database and read system catalogs and tables are as follows:

```
EXECUTE ON PACKAGE NULLID.SYSSH200
EXECUTE ON PACKAGE NULLID.SQLC2O26 /* If Db2 LUW runs on a Linux operating system */
EXECUTE ON FUNCTION SYSPROC.ENV_GET_INST_INFO
EXECUTE ON FUNCTION SYSPROC.AUTH_LIST_AUTHORITIES_FOR_AUTHID
EXECUTE ON FUNCTION SYSPROC.AUTH_LIST_GROUPS_FOR_AUTHID
EXECUTE ON FUNCTION SYSPROC.AUTH_LIST_ROLES_FOR_AUTHID
SELECT ON SYSIBMADM.ENV_INST_INFO
SELECT ON SYSIBMADM.ENV_SYS_INFO
SELECT ON SYSIBMADM.PRIVILEGES
SELECT ON SYSCAT.ATTRIBUTES
SELECT ON SYSCAT.CHECKS
SELECT ON SYSCAT.COLIDENTATTRIBUTES
SELECT ON SYSCAT.COLUMNS
SELECT ON SYSCAT.DATAPARTITIONEXPRESSION
SELECT ON SYSCAT.DATAPARTITIONS
SELECT ON SYSCAT.DATATYPEDEP
SELECT ON SYSCAT.DATATYPES
SELECT ON SYSCAT.HIERARCHIES
SELECT ON SYSCAT.INDEXCOLUSE
SELECT ON SYSCAT.INDEXES
SELECT ON SYSCAT.INDEXPARTITIONS
SELECT ON SYSCAT.KEYCOLUSE
SELECT ON SYSCAT.MODULEOBJECTS
SELECT ON SYSCAT.MODULES
SELECT ON SYSCAT.NICKNAMES
SELECT ON SYSCAT.PERIODS
SELECT ON SYSCAT.REFERENCES
SELECT ON SYSCAT.ROUTINEPARMS
SELECT ON SYSCAT.ROUTINES
SELECT ON SYSCAT.ROWFIELDS
SELECT ON SYSCAT.SCHEMATA
SELECT ON SYSCAT.SEQUENCES
SELECT ON SYSCAT.TABCONST
SELECT ON SYSCAT.TABLES
SELECT ON SYSCAT.TRIGGERS
SELECT ON SYSCAT.VARIABLEDEP
SELECT ON SYSCAT.VARIABLES
SELECT ON SYSCAT.VIEWS
SELECT ON SYSIBM.SYSDUMMY1
```
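
In Db2 for Linux, UNIX and Windows, you typically grant these privileges with statements of the following form. The name `sc_user` is a hypothetical placeholder.

```
GRANT EXECUTE ON PACKAGE NULLID.SYSSH200 TO USER sc_user;
GRANT SELECT ON SYSIBMADM.ENV_INST_INFO TO USER sc_user;
GRANT SELECT ON SYSCAT.TABLES TO USER sc_user;
```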

# Using an IBM Db2 for z/OS database as a source in DMS Schema Conversion
<a name="sc-data-providers-db2zos"></a>

You can use IBM Db2 for z/OS databases as a migration source in DMS Schema Conversion.

You can use DMS Schema Conversion to convert database code objects from a Db2 for z/OS database to the following targets: 
+ Amazon RDS for Db2
+ Amazon RDS for PostgreSQL
+ Aurora PostgreSQL

For information about the supported IBM Db2 for z/OS database versions, see [Source data providers for DMS Schema Conversion](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Sources.html#CHAP_Introduction.Sources.SchemaConversion).

## Prerequisites for IBM Db2 for z/OS as a source database
<a name="sc-data-providers-db2zos-prereq"></a>

The IBM Db2 for z/OS version 12 function level 100 database version doesn't support most new capabilities of IBM Db2 for z/OS version 12. This database version provides support for fallback to Db2 version 11 and data sharing with Db2 version 11. To avoid the conversion of unsupported features of Db2 version 11, we recommend that you use an IBM Db2 for z/OS database with function level 500 or higher as a source for DMS Schema Conversion.

You can use the following code example to check the version of your source IBM Db2 for z/OS database:

```
SELECT GETVARIABLE('SYSIBM.VERSION') as version FROM SYSIBM.SYSDUMMY1;
```

Ensure that this code returns version `DSN12015` or higher.

You can use the following code example to check the value of the `APPLICATION COMPATIBILITY` special register in your source IBM Db2 for z/OS database:

```
SELECT CURRENT APPLICATION COMPATIBILITY as version FROM SYSIBM.SYSDUMMY1;
```

Ensure that this code returns version `V12R1M500` or higher.

## Privileges for IBM Db2 for z/OS as a source database
<a name="sc-data-providers-db2zos-privileges"></a>

The privileges needed to connect to a Db2 for z/OS database and read system catalogs and tables are as follows:

```
SELECT ON SYSIBM.LOCATIONS
SELECT ON SYSIBM.SYSCHECKS
SELECT ON SYSIBM.SYSCOLUMNS
SELECT ON SYSIBM.SYSDATABASE
SELECT ON SYSIBM.SYSDATATYPES
SELECT ON SYSIBM.SYSDUMMY1
SELECT ON SYSIBM.SYSFOREIGNKEYS
SELECT ON SYSIBM.SYSINDEXES
SELECT ON SYSIBM.SYSKEYCOLUSE
SELECT ON SYSIBM.SYSKEYS
SELECT ON SYSIBM.SYSKEYTARGETS
SELECT ON SYSIBM.SYSJAROBJECTS
SELECT ON SYSIBM.SYSPACKAGE
SELECT ON SYSIBM.SYSPARMS
SELECT ON SYSIBM.SYSRELS
SELECT ON SYSIBM.SYSROUTINES
SELECT ON SYSIBM.SYSSEQUENCES
SELECT ON SYSIBM.SYSSEQUENCESDEP
SELECT ON SYSIBM.SYSSYNONYMS
SELECT ON SYSIBM.SYSTABCONST
SELECT ON SYSIBM.SYSTABLES
SELECT ON SYSIBM.SYSTABLESPACE
SELECT ON SYSIBM.SYSTRIGGERS
SELECT ON SYSIBM.SYSVARIABLES
SELECT ON SYSIBM.SYSVIEWS
```

# Using a SAP ASE (Sybase ASE) database as a source in AWS DMS Schema Conversion
<a name="dm-data-providers-source-sybase-ASE"></a>

You can use SAP ASE (Sybase ASE) databases as a migration source in DMS Schema Conversion.

You can use DMS Schema Conversion to convert database code objects from a SAP ASE (Sybase ASE) database to the following targets:
+ Aurora PostgreSQL
+ RDS for PostgreSQL

For information about the supported SAP ASE (Sybase ASE) database versions, see [Source data providers for DMS Schema Conversion](CHAP_Introduction.Sources.md#CHAP_Introduction.Sources.SchemaConversion).

## Privileges for SAP ASE (Sybase ASE) as a source database
<a name="dm-data-providers-source-sybase-ASE.privileges"></a>

The following privileges are required when using SAP ASE (Sybase ASE) as a source database:
+ USE master
+ select on dbo.spt_values
+ select on asehostname

For each database to be migrated, grant the following privileges:
+ USE db_name *(Replace db_name with the name of the database being migrated)*
+ select on dbo.sysusers
+ select on dbo.sysobjects
+ select on dbo.sysindexes
+ select on dbo.syscolumns
+ select on dbo.sysreferences
+ select on dbo.syscomments
+ select on dbo.syspartitions
+ select on dbo.syspartitionkeys
+ select on dbo.sysconstraints
+ select on dbo.systypes
+ select on dbo.sysqueryplans
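
For example, in SAP ASE you can grant the server-level privileges as follows. The name `sc_user` is a hypothetical placeholder.

```
use master
go
grant select on dbo.spt_values to sc_user
go
grant select on asehostname to sc_user
go
```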

# Creating and setting target data providers in DMS Schema Conversion
<a name="data-providers-target"></a>

You can use MySQL, PostgreSQL, Amazon Redshift, and Amazon RDS for Db2 databases as target data providers in migration projects for DMS Schema Conversion.

**Topics**
+ [Using a MySQL database as a target in DMS Schema Conversion](data-providers-mysql.md)
+ [Using a PostgreSQL database as a target in DMS Schema Conversion](data-providers-postgresql.md)
+ [Using an Amazon Redshift cluster as a target in DMS Schema Conversion](data-providers-redshift.md)
+ [Using Amazon RDS for Db2 database as a target in DMS Schema Conversion](sc-data-providers-rds.md)

# Using a MySQL database as a target in DMS Schema Conversion
<a name="data-providers-mysql"></a>

You can use MySQL databases as a migration target in DMS Schema Conversion.

For information about supported target databases, see [Target data providers for DMS Schema Conversion](CHAP_Introduction.Targets.md#CHAP_Introduction.Targets.SchemaConversion).

## Privileges for MySQL as a target
<a name="data-providers-mysql-permissions"></a>

The following privileges are required for MySQL as a target:
+ `CREATE ON *.*`
+ `ALTER ON *.*`
+ `DROP ON *.*`
+ `INDEX ON *.*`
+ `REFERENCES ON *.*`
+ `SELECT ON *.*`
+ `CREATE VIEW ON *.*`
+ `SHOW VIEW ON *.*`
+ `TRIGGER ON *.*`
+ `CREATE ROUTINE ON *.*`
+ `ALTER ROUTINE ON *.*`
+ `EXECUTE ON *.*`
+ `CREATE TEMPORARY TABLES ON *.*`
+ `AWS_LAMBDA_ACCESS (Only valid for Aurora MySQL)`
+ `INSERT, UPDATE ON AWS_ORACLE_EXT.*`
+ `INSERT, UPDATE, DELETE ON AWS_ORACLE_EXT_DATA.*`
+ `INSERT, UPDATE ON AWS_SQLSERVER_EXT.*`
+ `INSERT, UPDATE, DELETE ON AWS_SQLSERVER_EXT_DATA.*`
+ `CREATE TEMPORARY TABLES ON AWS_SQLSERVER_EXT_DATA.*`

You can use the following code example to create a database user and grant the privileges.

```
CREATE USER 'user_name' IDENTIFIED BY 'your_password';
GRANT CREATE ON *.* TO 'user_name';
GRANT ALTER ON *.* TO 'user_name';
GRANT DROP ON *.* TO 'user_name';
GRANT INDEX ON *.* TO 'user_name';
GRANT REFERENCES ON *.* TO 'user_name';
GRANT SELECT ON *.* TO 'user_name';
GRANT CREATE VIEW ON *.* TO 'user_name';
GRANT SHOW VIEW ON *.* TO 'user_name';
GRANT TRIGGER ON *.* TO 'user_name';
GRANT CREATE ROUTINE ON *.* TO 'user_name';
GRANT ALTER ROUTINE ON *.* TO 'user_name';
GRANT EXECUTE ON *.* TO 'user_name';
GRANT CREATE TEMPORARY TABLES ON *.* TO 'user_name';
GRANT AWS_LAMBDA_ACCESS TO 'user_name'; -- Only valid for Aurora MySQL
GRANT INSERT, UPDATE ON AWS_ORACLE_EXT.* TO 'user_name';
GRANT INSERT, UPDATE, DELETE ON AWS_ORACLE_EXT_DATA.* TO 'user_name';
GRANT INSERT, UPDATE ON AWS_SQLSERVER_EXT.* TO 'user_name';
GRANT INSERT, UPDATE, DELETE ON AWS_SQLSERVER_EXT_DATA.* TO 'user_name';
GRANT CREATE TEMPORARY TABLES ON AWS_SQLSERVER_EXT_DATA.* TO 'user_name';
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *your\_password* with a secure password.

To use Amazon RDS for MySQL or Aurora MySQL as a target, set the `lower_case_table_names` parameter to `1`. With this value, the MySQL server treats the names of objects such as tables, indexes, triggers, and databases as case insensitive. If you have turned on binary logging in your target instance, then set the `log_bin_trust_function_creators` parameter to `1`. In this case, you don't need to use the `DETERMINISTIC`, `READS SQL DATA`, or `NO SQL` characteristics to create stored functions. To configure these parameters, create a new DB parameter group or modify an existing DB parameter group.
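After you apply the parameter group, you can verify both settings from any MySQL session on the target. This is an optional sanity check, not part of the required setup:

```
-- Both variables should show a value of 1 on a correctly configured target.
SHOW VARIABLES LIKE 'lower_case_table_names';
SHOW VARIABLES LIKE 'log_bin_trust_function_creators';
```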

# Using a PostgreSQL database as a target in DMS Schema Conversion
<a name="data-providers-postgresql"></a>

You can use PostgreSQL databases as a migration target in DMS Schema Conversion.

For information about supported target databases, see [Target data providers for DMS Schema Conversion](CHAP_Introduction.Targets.md#CHAP_Introduction.Targets.SchemaConversion).

## Privileges for PostgreSQL as a target
<a name="data-providers-postgresql-permissions"></a>

To use PostgreSQL as a target, DMS Schema Conversion requires the following minimum privileges. The target database must exist before you connect. For each database you want to use in your migration project, grant the following privileges to the user account used for the migration:

```
GRANT CONNECT ON DATABASE <db_name> TO <user>;
GRANT CREATE ON DATABASE <db_name> TO <user>;
```

To use Amazon RDS for PostgreSQL as a target, DMS Schema Conversion requires the `rds_superuser` role.
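If your migration user doesn't already have this role, an account that can manage roles (such as the RDS master user) can grant it. A minimal sketch, where `user_name` is a placeholder:

```
GRANT rds_superuser TO user_name;
```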

To use the converted public synonyms, change the database default search path using the following command.

```
ALTER DATABASE <db_name> SET SEARCH_PATH = "$user", public_synonyms, public;
```

In this example, replace the `<db_name>` placeholder with the name of your database.

In PostgreSQL, only the schema owner or a `superuser` can drop a schema. The owner can drop a schema and all objects that this schema includes, even if the owner of the schema doesn't own some of its objects.

When you use different users to convert and apply different schemas to your target database, you might encounter an error message when DMS Schema Conversion can't drop a schema. To avoid this error message, use the `superuser` role.
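If you'd rather not use a superuser, one possible workaround is to make the user that applies the converted code the owner of the target schema, so that this user can drop and re-create the schema. This is a hedged sketch; `schema_name` and `migration_user` are placeholders:

```
-- Make the migration user the owner of the schema.
ALTER SCHEMA schema_name OWNER TO migration_user;

-- As the owner, migration_user can now drop the schema and its contents:
-- DROP SCHEMA schema_name CASCADE;
```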

# Using an Amazon Redshift cluster as a target in DMS Schema Conversion
<a name="data-providers-redshift"></a>

You can use Amazon Redshift databases as a migration target in DMS Schema Conversion. For information about supported target databases, see [Target data providers for DMS Schema Conversion](CHAP_Introduction.Targets.md#CHAP_Introduction.Targets.SchemaConversion). 

## Privileges for Amazon Redshift as a target
<a name="data-providers-redshift-privileges"></a>

Using Amazon Redshift as a target for DMS Schema Conversion requires the following privileges:
+ **CREATE ON DATABASE**: Allows DMS to create new schemas in the database.
+ **CREATE ON SCHEMA**: Allows DMS to create objects in the database schema.
+ **GRANT USAGE ON LANGUAGE**: Allows DMS to create new functions and procedures in the database.
+ **GRANT SELECT ON ALL TABLES IN SCHEMA pg\_catalog**: Provides the user system information about the Amazon Redshift cluster.
+ **GRANT SELECT ON pg\_class\_info**: Provides the user information about the table distribution style.

You can use the following code example to create a database user and grant it permissions. Replace the example values with your values.

```
CREATE USER user_name PASSWORD your_password;
GRANT CREATE ON DATABASE db_name TO user_name;
GRANT CREATE ON SCHEMA schema_name TO user_name;
GRANT USAGE ON LANGUAGE plpythonu TO user_name;
GRANT USAGE ON LANGUAGE plpgsql TO user_name;
GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO user_name;
GRANT SELECT ON pg_class_info TO user_name;
GRANT SELECT ON sys_serverless_usage TO user_name;
GRANT SELECT ON pg_database_info TO user_name;
GRANT SELECT ON pg_statistic TO user_name;
```

Repeat the `GRANT CREATE ON SCHEMA` operation for each target schema where you will apply the converted code or migrate data.
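For example, if you plan to apply converted code to two target schemas, the grant is repeated once per schema. The schema names below are placeholders:

```
GRANT CREATE ON SCHEMA schema_name_1 TO user_name;
GRANT CREATE ON SCHEMA schema_name_2 TO user_name;
```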

You can apply an extension pack on your target Amazon Redshift database. An extension pack is an add-on module that emulates source database functions that are required when converting objects to Amazon Redshift. For more information, see [Using extension packs in DMS Schema Conversion](extension-pack.md).

# Using Amazon RDS for Db2 database as a target in DMS Schema Conversion
<a name="sc-data-providers-rds"></a>

You can use Amazon RDS for Db2 databases as a migration target in DMS Schema Conversion.

For more information regarding supported target databases, see [Target data providers for DMS Schema Conversion](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Targets.html#CHAP_Introduction.Targets.SchemaConversion).

## Privileges for Amazon RDS for Db2 as a target
<a name="sc-data-providers-rds-privileges"></a>

To use Amazon RDS for Db2 as a target, DMS Schema Conversion requires the `master_user_role` role. For more information, see [Amazon RDS for Db2 default roles](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/db2-default-roles.html) in the *Amazon Relational Database Service User Guide*.

# Virtual data provider
<a name="virtual-data-provider"></a>

AWS Database Migration Service (AWS DMS) offers Virtual Mode for data providers in schema conversion. This feature allows you to perform schema conversion without connecting to a target database, reducing infrastructure costs and providing flexibility for migration planning. With Virtual Mode, you can start conversion work immediately to plan an optimal migration strategy before committing resources. You can evaluate compatibility, convert and review schema code, and even test different target options. Then you can connect to a database when you are ready. Virtual Mode supports all target databases compatible with DMS Schema Conversion, including MySQL, PostgreSQL, Amazon Redshift, and Amazon RDS for Db2.

## Create virtual data provider
<a name="create-virtual-data-provider"></a>

To create a virtual data provider, enable Virtual Mode in the data provider creation form. For more information, see [Configure your data providers for DMS Schema Conversion](https://docs.aws.amazon.com/dms/latest/userguide/getting-started-data-providers.html).

## Virtual data provider usage
<a name="virtual-data-provider-usage"></a>

To use a virtual data provider, create a new migration project or modify an existing migration project. Then set the project's target data provider to the virtual data provider you've created.

Because a virtual data provider doesn't connect to a real database, the migration project's secret and IAM role don't need to contain working connection credentials. To set up the migration project successfully, you can use any secret that the specified IAM role can read.

After you set the target data provider, choose the **Schema conversion** tab, and then choose **Launch schema conversion**. Wait until the project starts.

You can now use schema conversion with the virtual target data provider just as you would with a real target data provider. Actions that require a connection to a real target database are disabled, but **Save as SQL** is available for the target tree.

## Transition from a virtual data provider to a real data provider
<a name="transition-virtual-to-real-data-provider"></a>

When you are ready to proceed with the actual migration, you can make the transition to a real data provider.

**To transition from a virtual data provider to a real data provider, follow these steps.**

1. Sign in to the AWS Management Console, and open the AWS DMS console.

1. In the navigation pane, choose **Data providers**. Then choose the data provider that you want to modify.

1. Go to the **Associated migration projects** section to see all the migration projects that use this virtual data provider.

1. Choose the first associated project.

1. Choose the **Schema conversion** tab.

1. In the **Details** section, check the **Status** value.

1. If the status is **Open** and the **Close schema conversion** button is available, choose it and wait until the project closes.

1. Repeat the previous steps for all remaining associated migration projects.

1. Confirm that there are no remaining open projects.

1. Return to the data provider and choose **Modify**.

1. Turn **Virtual Mode** off.

1. Enter the correct connection settings. The connection parameters depend on your database engine. For more information, see [Creating data providers](https://docs.aws.amazon.com/dms/latest/userguide/data-providers-create.html).

1. Choose **Save changes**.
**Note**  
After changes have been saved, it won't be possible to turn Virtual Mode back to on.

1. Return to the AWS DMS console.

1. In the navigation pane, choose **Migration Projects**. Then choose the migration project you want to change the data provider for.

1. Choose **Modify**.

1. For **Secret**, specify the secret to use to connect to the target data provider. Make sure that this secret contains the credentials needed to connect to the database.

1. For **IAM role**, specify the role to use to read the target secret. Check that this IAM role has read access to the connection credentials secret and is available to the AWS DMS Schema Conversion service.

1. Choose **Save changes**.

1. Choose the **Schema conversion** tab.

1. Choose **Launch schema conversion**.

1. Wait until the project starts.

**Important**  
Once you disable Virtual Mode and save this change to the data provider, you can't undo this action. You must provide actual database connection details for the data provider, and a working secret and IAM role to access that secret in the associated migration project.

On the first start of the project that contains the transitioned data provider, your conversion results are available. Only the following actions are available:
+ **On schema nodes** - Refresh from database.
+ **On database objects** - Apply changes, then save as SQL.

The **Apply changes** action applies the converted objects to the real database.

The **Refresh from database** action loads database objects from the real database. Any unsaved conversion objects are lost.

# Managing migration projects in DMS Schema Conversion
<a name="sc-migration-projects"></a>

After you create an instance profile and compatible data providers for schema conversion, create a migration project. For more information, see [Creating migration projects](migration-projects-create.md).

To use this new project in DMS Schema Conversion, on the **Migration projects** page, choose your project from the list. Next, on the **Schema conversion** tab, choose **Launch schema conversion**.

The first launch of DMS Schema Conversion requires some setup. AWS Database Migration Service (AWS DMS) starts a schema conversion instance, which takes up to five minutes. This process also reads the metadata from the source and target databases. After a successful first launch, you can access DMS Schema Conversion faster.

Amazon terminates the schema conversion instance that your migration project uses three days after you complete the project. You can retrieve your converted schema and assessment report from the Amazon S3 bucket that you use for DMS Schema Conversion.

## Specifying migration project settings for DMS Schema Conversion
<a name="migration-projects-settings"></a>

After you create your migration project and launch schema conversion, you can specify migration project settings. You can change conversion settings to improve the performance of converted code. Also, you can customize your schema conversion view.

Conversion settings depend on your source and target database platforms. For more information, see [Creating source data providers](data-providers-source.md) and [Creating and setting target data providers](data-providers-target.md).

To specify what schemas and databases you want to see in the source and target database panes, use the tree view settings. You can hide empty schemas, empty databases, system databases, and user-defined databases or schemas.

**To hide databases and schemas in tree view**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project, and on the **Schema conversion** tab choose **Launch schema conversion**.

1. Choose **Settings**. The **Settings** page opens.

1. In the **Tree view** section, do the following:
   + Choose **Hide empty schemas** to hide empty schemas.
   + Choose **Hide empty databases** to hide empty databases.
   + For **System databases or schemas**, choose system databases and schemas by name to hide them.
   + For **User-defined databases or schemas**, enter the names of user-defined databases and schemas that you want to hide. Choose **Add**. The names are case-insensitive.

     To add multiple databases or schemas, use a comma to separate their names. To add multiple objects with a similar name, use the percent (%) as a wildcard. This wildcard replaces any number of any symbols in the database or schema name.

   Repeat these steps for the **Source** and **Target** sections.

1. Choose **Apply**, and then choose **Schema conversion**.

## Access logs for AWS DMS Schema Conversion
<a name="migration-projects-logs"></a>

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/](https://console.aws.amazon.com/dms/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project, and on the **Overview** tab, copy the migration project ID from the **ARN** field.  
![\[This an image showing how to get the ARN ID from the AWS DMS console.\]](http://docs.aws.amazon.com/dms/latest/userguide/images/dms-schema-conversion-log.png)

1. Open the **CloudWatch** console.

1. Choose **Log groups** and enter `dms-tasks-sct-{migration_project_id}`, where `{migration_project_id}` is the ID from Step 3.

1. Inside the log group, you can find the log stream that contains your logs.

# Creating database migration assessment reports with DMS Schema Conversion
<a name="assessment-reports"></a>

An important part of DMS Schema Conversion is the report that it generates to help you convert your schema. This *database migration assessment report* summarizes all of the schema conversion tasks. It also details the action items for schema that can't be converted to the DB engine of your target DB instance. You can view the report in the AWS DMS console, or save a copy of it as a PDF file or as comma-separated value (CSV) files.

The migration assessment report includes the following:
+ An executive summary
+ Recommendations, including conversion of server objects, backup suggestions, and linked server changes

When you have items that DMS Schema Conversion can't convert automatically, the report provides estimates showing how much effort is required to write the equivalent code for your target DB instance.

**Topics**
+ [Creating a database migration assessment report for DMS Schema Conversion](assessment-reports.create.md)
+ [Viewing your database migration assessment report for DMS Schema Conversion](assessment-reports-view.md)
+ [Saving your database migration assessment report for DMS Schema Conversion](assessment-reports-save.md)

# Creating a database migration assessment report for DMS Schema Conversion
<a name="assessment-reports.create"></a>

After you create a migration project, use the following procedure to create a database migration assessment report.

**To create a database migration assessment report**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project, and then choose **Schema conversion**.

1. Choose **Launch schema conversion**. The **Schema conversion** page opens.

1. In the source database pane, choose the database schema or schema items that you want to assess. To include multiple objects in the report, make sure that you select all items. 

1. After you select the check boxes for all schema objects that you want to assess, you must choose the parent node for the selected objects. The **Actions** menu in the source database pane is now available. 

1. Choose **Assess** in the **Actions** menu. A confirmation dialog box appears.

1. Choose **Assess** in the dialog box to confirm your choice.

# Viewing your database migration assessment report for DMS Schema Conversion
<a name="assessment-reports-view"></a>

After you create an assessment report, DMS Schema Conversion adds information in the following tabs:
+ **Summary**
+ **Action items**

The **Summary** tab shows the number of items that DMS Schema Conversion can automatically convert.

The **Action items** tab shows items that DMS Schema Conversion can't convert automatically, and provides recommendations about how to manage these items.

## Assessment report summary
<a name="assessment-reports-view-summary"></a>

The **Summary** tab displays the summary information from the database migration assessment report. It shows the number of items that DMS Schema Conversion can automatically convert for database storage objects and database code objects.

In most cases, DMS Schema Conversion can't automatically convert all schema items to the target database engine. The **Summary** tab provides an estimate of the required effort to create schema items in your target DB instance that are equivalent to those in your source.

To see the conversion summary for database storage objects such as tables, sequences, constraints, data types, and so on, choose **Database storage objects**. 

To see the conversion summary for database code objects such as procedures, functions, views, triggers, and so on, choose **Database code objects**.

To change the scope of the assessment report, select the required node in the source database tree. DMS Schema Conversion updates the assessment report summary to match the selected scope.

## Assessment report action items
<a name="assessment-reports-view-action-items"></a>

The **Action items** tab contains a list of items that DMS Schema Conversion can't automatically convert to a format compatible with the target database engine. For each action item, DMS Schema Conversion provides the description of the issue and the recommended action. DMS Schema Conversion groups similar action items and displays the number of occurrences.

To view the code for the related database object, select an action item in the list.

## Assessment report with generative AI
<a name="assessment-reports-view-generative-ai"></a>

DMS Schema Conversion with generative AI streamlines the database migration process by offering recommendations to help you convert previously unconverted code objects that typically require complex manual conversion. Currently, this feature is available for Oracle to PostgreSQL/Aurora PostgreSQL and SQL Server to PostgreSQL/Aurora PostgreSQL conversions.

The assessment report identifies which database code objects are candidates for conversion using generative AI. You can export this report to a PDF file for a short list of five objects per action item. You can export the report to a CSV file to see the full list of candidates.

When you enable **Convert schema using generative AI** and perform a conversion, the updated assessment report identifies the objects that converted successfully using AI. These objects are clearly marked with a corresponding action item, making it easy to identify and track the AI-generated conversion recommendations.

To quickly locate objects that are converted by generative AI:

1. Navigate to the **Action items** tab.

1. In the top-right corner, locate the drop-down menu.

1. Select **Generated by AI** from the options.

This filtering method allows you to efficiently view all database objects that were successfully converted using this feature.

# Saving your database migration assessment report for DMS Schema Conversion
<a name="assessment-reports-save"></a>

After you create a database migration assessment report, you can save a copy of this report as a PDF file or as comma-separated value (CSV) files.

**To save a database migration assessment report as a PDF file**

1. Choose **Export**, then choose **PDF**. Review the dialog box, and choose **Export to PDF**. 

1. DMS Schema Conversion creates an archive with your PDF file and stores this archive in your Amazon S3 bucket. To change the Amazon S3 bucket, edit the schema conversion settings in your instance profile.

1. Open the assessment report file in your Amazon S3 bucket.

**To save a database migration assessment report as CSV files**

1. Choose **Export**, then choose **CSV**. Review the dialog box, and choose **Export to CSV**. 

1. DMS Schema Conversion creates an archive with CSV files and stores this archive in your Amazon S3 bucket. To change the Amazon S3 bucket, edit the schema conversion settings in your instance profile.

1. Open the assessment report files in your Amazon S3 bucket.

The PDF file contains both the summary and action item information.

When you export your assessment report to CSV, DMS Schema Conversion creates three CSV files. 

 The first CSV file contains the following information about action items:
+ Category
+ Occurrence
+ Action item
+ Subject
+ Group
+ Description
+ Documentation references
+ Recommended action
+ Line
+ Position
+ Source
+ Target
+ Server IP address and port
+ Database
+ Schema

 The second CSV file includes the `Action_Items_Summary` suffix in its name and contains the following information:
+ Schema
+ Action item
+ Number of occurrences
+ Learning curve efforts, which is the amount of effort required to design an approach to converting each action item
+ Efforts to convert an occurrence of the action item, which shows the effort required to convert each action item, following the designed approach
+ Action item description
+ Recommended action

The values that indicate the level of required efforts are based on a weighted scale, ranging from low (least) to high (most).

 The third CSV file includes `Summary` in its name and contains the following information:
+ Category
+ Number of objects
+ Objects automatically converted
+ Objects with simple actions
+ Objects with medium-complexity actions
+ Objects with complex actions
+ Total lines of code

# Using DMS Schema Conversion
<a name="schema-conversion"></a>

DMS Schema Conversion converts your existing database schemas and a majority of the database code objects to a format compatible with the target database.

DMS Schema Conversion automates much of the process of converting your online transaction processing (OLTP) database schemas to Amazon RDS for MySQL or RDS for PostgreSQL. The source and target database engines contain many different features and capabilities, and DMS Schema Conversion attempts to create an equivalent schema wherever possible. For database objects where direct conversion isn't possible, DMS Schema Conversion provides a list of actions for you to take.

To convert your database schema, use the following process:
+ Before you convert your database schemas, set up transformation rules that change the names of your database objects during conversion.
+ Create a database migration assessment report to estimate the complexity of the migration. This report provides details about the schema elements that DMS Schema Conversion can't convert automatically.
+ Convert your source database storage and code objects. DMS Schema Conversion creates a local version of the converted database objects. You can access these converted objects in your migration project. 
+ Save the converted code to SQL files to review, edit, or address conversion action items. Optionally, apply the converted code directly to your target database. 

**Topics**
+ [Setting up transformation rules in DMS Schema Conversion](schema-conversion-transformation-rules.md)
+ [Converting database schemas in DMS Schema Conversion: step-by-step guide](schema-conversion-convert.md)
+ [Converting database objects with generative AI](schema-conversion-convert.databaseobjects.md)
+ [Specifying schema conversion settings for migration projects](schema-conversion-settings.md)
+ [Refreshing your database schemas in DMS Schema Conversion](schema-conversion-refresh.md)
+ [Saving and applying your converted code in DMS Schema Conversion](schema-conversion-save-apply.md)
+ [Converting embedded SQL in Java applications](schema-conversion-embedded-sql.md)

# Setting up transformation rules in DMS Schema Conversion
<a name="schema-conversion-transformation-rules"></a>

Before you convert your database schema with DMS Schema Conversion, you can set up transformation rules. *Transformation rules* can do such things as change an object name to lowercase or uppercase, add or remove a prefix or suffix, and rename objects. For example, suppose that you have a set of tables in your source schema named `test_TABLE_NAME`. You can set up a rule that changes the prefix `test_` to the prefix `demo_` in the target schema.

You can create transformation rules that perform the following tasks:
+ Add, remove, or replace a prefix
+ Add, remove, or replace a suffix
+ Change the data type of a column
+ Change the object name to lowercase or uppercase
+ Rename objects

You can create transformation rules for the following objects:
+ Schema 
+ Table 
+ Column 

## Creating transformation rules
<a name="schema-conversion-transformation-rules-create"></a>

DMS Schema Conversion stores transformation rules as part of your migration project. You can set up transformation rules when you create your migration project, or edit them later. 

You can add multiple transformation rules in your project. DMS Schema Conversion applies transformation rules during conversion in the same order as you added them.

**To create transformation rules**

1. On the **Create migration project** page, choose **Add transformation rule**. For more information, see [Creating migration projects](migration-projects-create.md).

1. For **Rule target**, choose the type of database objects to which this rule applies. 

1. For **Source schema**, choose **Enter a schema**. Then, enter the names of your source schemas, tables, and columns to which this rule applies. You can enter an exact name to select one object, or you can enter a pattern to select multiple objects. Use the percent (%) as a wildcard to replace any number of any symbols in the database object name. 

1. For **Action**, choose the task to perform. 

1. Depending on the rule type, enter one or two additional values. For example, to rename an object, enter the new name of the object. To replace a prefix, enter the old prefix and the new prefix.

1. Choose **Add transformation rule** to add another transformation rule.

   After you are done adding rules, choose **Create migration project**.

To duplicate an existing transformation rule, choose **Duplicate**. To edit an existing transformation rule, choose the rule from the list. To delete an existing transformation rule, choose **Remove**. 

## Editing transformation rules
<a name="schema-conversion-transformation-rules-edit"></a>

You can add new transformation rules, or remove or edit existing rules in your migration project. Because DMS Schema Conversion applies the transformation rules during the launch of schema conversion, make sure that you close schema conversion and launch it again after you edit your rules.

**To edit transformation rules**

1. Sign in to the AWS Management Console, and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**, and then choose your migration project.

1. Choose **Schema conversion**, and then choose **Close schema conversion**.

1. After AWS DMS closes schema conversion, choose **Modify** to edit your migration project settings.

1. For **Transformation rules**, choose one of the following actions:
   + Choose **Duplicate** to duplicate an existing transformation rule and add it to the end of the list.
   + Choose **Remove** to remove an existing transformation rule.
   + Choose the existing transformation rule to edit it.

1. After you are done editing rules, choose **Save changes**.

1. On the **Migration projects** page, choose your project from the list. Choose **Schema conversion**, then choose **Launch schema conversion**.

# Converting database schemas in DMS Schema Conversion: step-by-step guide
<a name="schema-conversion-convert"></a>

After you create the migration project and connect to your source and target databases, you can convert your source database objects to a format compatible with your target database. DMS Schema Conversion displays your source database schema in the left panel in a tree-view format.

Each node of the database tree is *lazy loaded*. When you choose a node in the tree view, DMS Schema Conversion requests the schema information from your source database at that time. To load the schema information faster, choose your schema, and then choose **Load metadata** from the **Actions** menu. DMS Schema Conversion then reads the database metadata and stores the information on an Amazon S3 bucket. You can now browse the database objects faster.

You can convert the whole database schema, or you can choose any schema item from your source database to convert. If the schema item that you choose depends on a parent item, then DMS Schema Conversion also generates the schema for the parent item. For example, when you choose a table to convert, DMS Schema Conversion creates the converted table and the database schema that the table is in.

## Converting database objects
<a name="schema-conversion-convert-steps"></a>

You can use DMS Schema Conversion to convert an entire database schema or separate database schema objects.

**To convert an entire database schema**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project, and then choose **Schema conversion**.

1. Choose **Launch schema conversion**. The **Schema conversion** page opens.

1. In the source database pane, select the check box for the schema name.

1. Choose this schema in the left pane of the migration project. DMS Schema Conversion highlights the schema name in blue and activates the **Actions** menu.

1. For **Actions**, choose **Convert**. The conversion dialog box appears.

1. Choose **Convert** in the dialog box to confirm your choice.

**To convert your source database objects**

1. Sign in to the AWS Management Console, and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project, and then choose **Schema conversion**.

1. Choose **Launch schema conversion**. The **Schema conversion** page opens.

1. In the source database pane, select your source database objects.

1. After you select all check boxes for the objects that you want to convert, choose the parent node for all selected objects in your left panel.

   DMS Schema Conversion highlights the parent node in blue and activates the **Actions** menu.

1. For **Actions**, choose **Convert**. The conversion dialog box appears.

1. Choose **Convert** in the dialog box to confirm your choice.

For example, to convert two out of 10 tables, select the check boxes for the two tables that you want to convert. Notice that the **Actions** menu is inactive. After you choose the **Tables** node, DMS Schema Conversion highlights its name in blue and activates the **Actions** menu. Then you can choose **Convert** from this menu.

Likewise, to convert two tables and three procedures, select the check boxes for the object names. Then, choose the schema node to activate the **Actions** menu, and choose **Convert schema**.
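
If you automate conversions with the AWS CLI or an AWS SDK instead of the console, the object selection shown above is expressed as a selection-rules JSON document. The sketch below builds such a document in Python; the exact field names and accepted values are an assumption for this illustration, so check the AWS DMS API reference before using them.

```python
import json

def selection_rules(schema, tables):
    """Build a selection-rules document that includes only the given tables,
    mirroring the check boxes chosen in the console. The rule fields below
    follow the general DMS selection-rule style but are illustrative only;
    verify the exact schema against the AWS DMS API reference."""
    rules = []
    for i, table in enumerate(tables, start=1):
        rules.append({
            "rule-type": "selection",
            "rule-id": str(i),
            "rule-name": str(i),
            "object-locator": {"schema-name": schema, "table-name": table},
            "rule-action": "include",
        })
    return json.dumps({"rules": rules})

# Select two of the schema's tables, as in the console example above.
doc = selection_rules("HR", ["EMPLOYEES", "DEPARTMENTS"])
print(doc)
```

You would pass a document like this to the schema conversion API operation that starts a conversion, rather than selecting check boxes in the tree view.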

## Editing and saving your converted SQL code
<a name="schema-conversion-convert-editsql"></a>

The **Schema conversion** page allows you to edit converted SQL code in your database objects. Use the following procedure to edit your converted SQL code, apply the changes, and then save them.

**To edit, apply changes to, and save your converted SQL code**

1. In the **Schema conversion** page, open the tree view in the **Source data providers** pane to display a code object.  
![\[Schema conversion: Source data providers tree view\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-sc-editsql-1.png)

1. From the **Source data providers** pane, choose **Actions**, **Convert**. Confirm the action.

1. When the conversion completes, to view the converted SQL, expand the center pane if needed. To edit the converted SQL, choose the edit icon in the **Target SQL** pane.  
![\[Schema conversion: Edit target SQL\]](http://docs.aws.amazon.com/dms/latest/userguide/images/datarep-sc-editsql-2.png)

1. After you edit the target SQL, confirm your changes by choosing the check icon at the top of the page. Confirm the action.

1. In the **Target data providers** pane, choose **Actions**, **Apply changes**. Confirm the action.

   AWS DMS then writes the edited code object to your target database.

## Reviewing converted database objects
<a name="schema-conversion-сonvert-review"></a>

After you have converted your source database objects, you can choose an object in the left pane of your project. You can then view the source and converted code for that object. DMS Schema Conversion automatically loads the converted code for the object that you selected in the left pane. You can also see the properties or parameters of the object that you selected.

DMS Schema Conversion automatically stores the converted code as part of your migration project. It doesn't apply these code changes to your target database. For more information about applying converted code to your target database, see [Applying your converted code](schema-conversion-save-apply.md#schema-conversion-apply). To remove the converted code from your migration project, select your target schema in the right pane, and then choose **Refresh from database** from **Actions**. 

After you have converted your source database objects, you can see the conversion summary and action items in the lower-center pane. You can see the same information when you create an assessment report. The assessment report is useful for identifying and resolving schema items that DMS Schema Conversion can't convert. You can save the assessment report summary and the list of conversion action items in CSV files. For more information, see [Database migration assessment reports](assessment-reports.md).

# Converting database objects with generative AI
<a name="schema-conversion-convert.databaseobjects"></a>

The DMS Schema Conversion with generative AI feature streamlines the database migration process by offering recommendations to help you convert previously unconverted code objects that typically require complex manual conversion. This feature is available for Oracle to PostgreSQL/Aurora PostgreSQL, SQL Server to PostgreSQL/Aurora PostgreSQL, and SAP ASE (Sybase ASE) to PostgreSQL/Aurora PostgreSQL conversions. You can convert an entire database schema or individual database schema objects.

**Warning**  
Generative AI features in DMS Schema Conversion use cross-region inference. For more information, see [Cross-region inference in DMS Schema Conversion](CHAP_Security.DataProtection.CrossRegionInference.md#CHAP_Security.DataProtection.CrossRegionInference.SchemaConversion).

To convert your source database objects with generative AI, follow steps 1–6 in [Converting database objects](schema-conversion-convert.md#schema-conversion-convert-steps), and then continue with one of the following methods:
+ Method 1: From the **Actions** menu, choose **Convert**. In the conversion dialog box that appears, turn on the **Convert schema with Generative AI** option, and then choose **Convert**.
+ Method 2: Choose **Convert schema with Generative AI** in the upper-right corner. In the conversion dialog box, make sure that the option is turned on, and then choose **Convert**.

To adjust this setting manually at any time in the DMS Schema Conversion console:
+ Navigate to the **Settings** tab.
+ In the **Conversion settings** section, enable the **Generative AI** option to approve the use of generative AI.

**Note**  
Supported regions:  
Asia Pacific (Tokyo) (ap-northeast-1)
Asia Pacific (Osaka) (ap-northeast-3)
Asia Pacific (Sydney) (ap-southeast-2)
Canada (Central) (ca-central-1)
Europe (Frankfurt) (eu-central-1)
Europe (Stockholm) (eu-north-1)
Europe (Ireland) (eu-west-1)
Europe (London) (eu-west-2)
Europe (Paris) (eu-west-3)
US East (N. Virginia) (us-east-1)
US East (Ohio) (us-east-2)
US West (Oregon) (us-west-2)

**Note**  
Supported conversion paths:  
Oracle to Amazon RDS for PostgreSQL
Oracle to Amazon Aurora PostgreSQL
Microsoft SQL Server to Amazon RDS for PostgreSQL
Microsoft SQL Server to Amazon Aurora PostgreSQL
SAP ASE (Sybase ASE) to Amazon RDS for PostgreSQL
SAP ASE (Sybase ASE) to Amazon Aurora PostgreSQL
IBM Db2 for Linux, UNIX and Windows (LUW) to Amazon RDS for PostgreSQL
IBM Db2 for Linux, UNIX and Windows (LUW) to Amazon Aurora PostgreSQL
IBM Db2 for z/OS to Amazon RDS for PostgreSQL
IBM Db2 for z/OS to Amazon Aurora PostgreSQL

## Scope of Generative AI conversion
<a name="schema-conversion-convert.databaseobjects.genai"></a>

Generative AI-assisted schema conversion focuses on specific SQL elements with designated action items. All other SQL elements are converted using the default rule-based approach. For the list of SQL elements within the extended scope of generative AI conversion, see [the AWS documentation website](http://docs.aws.amazon.com/dms/latest/userguide/schema-conversion-convert.databaseobjects.html).

### Limitations
<a name="schema-conversion-convert.databaseobjects.limitations"></a>

The Converting database objects with generative AI feature has the following limitations:
+ Database endpoints supporting generative AI conversion aren't visible in the AWS Management Console. You can view them only by exporting the assessment report as a PDF or CSV file.
+ As a probabilistic system, generative AI-assisted schema conversion might not achieve 100 percent accuracy in all conversions. It can also produce different results for the same SQL statements over time. Review and validate all conversion outputs.
+ Generative AI conversion is not supported for:
  + DEFAULT constraint in a table
  + DEFAULT value for a function or procedure parameter
  + COMPUTE COLUMN in a table
  + TRIGGER
  + COLUMN DATA TYPE
  + Dynamic SQL
  + INDEX
  + CONSTRAINT
+ If the source statement is converted with multiple action items, and at least one of them is processed using generative AI, then DMS Schema Conversion replaces all of the action items with a single action item on the target: 5444 for Oracle sources, or 7744 for Microsoft SQL Server sources. Action item 9997 is an exception; it is retained even after the statement is processed using generative AI.
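
The replacement rule above can be modeled as a small helper. The function below is purely illustrative — it encodes the documented behavior, and the placeholder input IDs (such as 1234) are hypothetical, not real action items:

```python
# Illustrative model of the documented action-item replacement rule.
# These constants mirror the action items named above; the helper itself
# is not part of any AWS API.
GENAI_ACTION_ITEM = {"oracle": 5444, "sqlserver": 7744}
PRESERVED_ITEM = 9997  # retained even after generative AI processing

def collapse_action_items(source_engine, action_items, used_generative_ai):
    """Return the action items reported for one converted statement."""
    if not used_generative_ai:
        return sorted(action_items)
    result = {GENAI_ACTION_ITEM[source_engine]}
    if PRESERVED_ITEM in action_items:
        result.add(PRESERVED_ITEM)
    return sorted(result)

# A statement with hypothetical action item 1234 plus 9997, processed with
# generative AI on an Oracle source:
print(collapse_action_items("oracle", [1234, 9997], True))  # → [5444, 9997]
```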

**Warning**  
Conversion using generative AI takes longer than basic conversion.

Every AWS account has a per-minute quota that limits the number of SQL statements that can be converted using generative AI. Statements exceeding this limit are queued for processing in subsequent minutes. The quota is as follows:



| Region | SQL Statements per AWS account per minute | 
| --- | --- | 
|  Asia Pacific (Tokyo) (ap-northeast-1) Asia Pacific (Osaka) (ap-northeast-3) Asia Pacific (Sydney) (ap-southeast-2) Canada (Central) (ca-central-1) Europe (Stockholm) (eu-north-1) Europe (Ireland) (eu-west-1) Europe (London) (eu-west-2) Europe (Paris) (eu-west-3) US East (Ohio) (us-east-2)  | Up to 24 statements | 
|  Europe (Frankfurt) (eu-central-1) US East (N. Virginia) (us-east-1) US West (Oregon) (us-west-2)  | Up to 40 statements | 

# Specifying schema conversion settings for migration projects
<a name="schema-conversion-settings"></a>

After you create a migration project, you can specify conversion settings in DMS Schema Conversion. Configuring these settings can improve the performance of the converted code.

**To edit conversion settings**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project. Choose **Schema conversion**, then **Launch schema conversion**.

1. Choose **Settings**. The **Settings** page opens.

1. In the **Conversion** section, change the settings.

1. Choose **Apply**, and then choose **Schema conversion**.

For all conversion pairs, you can limit the number of comments with action items in the converted code. To limit the number of comments in the converted code, open the conversion settings in your migration project.

For **Comments in converted SQL code**, choose the severity level of action items. DMS Schema Conversion adds comments in the converted code for action items of the selected severity and higher. For example, to minimize the number of comments in your converted code, choose **Errors only**.

To include comments for all action items in your converted code, choose **All messages**.

Other conversion settings are different for each pair of source and target databases.

**Topics**
+ [Understanding Oracle to MySQL conversion settings](schema-conversion-oracle-mysql.md)
+ [Understanding Oracle to PostgreSQL conversion settings](schema-conversion-oracle-postgresql.md)
+ [Understanding SQL Server to MySQL conversion settings](schema-conversion-sql-server-mysql.md)
+ [Understanding SQL Server to PostgreSQL conversion settings](schema-conversion-sql-server-postgresql.md)
+ [Understanding PostgreSQL to MySQL conversion settings](schema-conversion-postgresql-mysql.md)
+ [Understanding IBM Db2 for LUW to Amazon RDS for PostgreSQL conversion settings](schema-conversion-db2-luw-postgresql.md)
+ [Understanding IBM Db2 for z/OS to Amazon RDS for Db2 conversion settings](schema-conversion-db2-zos-db2.md)
+ [Understanding IBM Db2 for z/OS to Amazon RDS for PostgreSQL conversion settings](schema-conversion-db2-zos-postgresql.md)
+ [Understanding SAP ASE (Sybase ASE) to PostgreSQL conversion settings](schema-conversion--sybase-ASE.md)

# Understanding Oracle to MySQL conversion settings
<a name="schema-conversion-oracle-mysql"></a>

Oracle to MySQL conversion settings in DMS Schema Conversion include the following:
+ Your source Oracle database can use the `ROWID` pseudocolumn. MySQL doesn't support similar functionality. DMS Schema Conversion can emulate the `ROWID` pseudocolumn in the converted code. To do so, turn on the **Generate row ID** option.

  If your source Oracle code doesn't use the `ROWID` pseudocolumn, turn off the **Generate row ID** option. In this case, the converted code works faster.
+ Your source Oracle code can include the `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` functions with parameters that MySQL doesn't support. By default, DMS Schema Conversion emulates the usage of these parameters in the converted code.

  You can use native MySQL `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` functions when your source Oracle code lacks parameters that are unsupported by MySQL. In this case, the converted code works faster. To do so, select the following values:
   + **Use a native MySQL TO\_CHAR function**
   + **Use a native MySQL TO\_DATE function**
   + **Use a native MySQL TO\_NUMBER function**
+ Your database and applications can run in different time zones. By default, DMS Schema Conversion emulates time zones in the converted code. However, you don't need this emulation when your database and applications use the same time zone. In this case, select **Improve the performance of the converted code where the database and applications use the same time zone**.

# Understanding Oracle to PostgreSQL conversion settings
<a name="schema-conversion-oracle-postgresql"></a>

Oracle to PostgreSQL conversion settings in DMS Schema Conversion include the following:
+ To convert applicable database objects using generative AI, enable the **Generative AI** setting. Objects successfully converted using generative AI are clearly identified with **Action Item 5444**, which states: "This conversion uses machine learning models that generate predictions based on patterns in data." For more information, see [Converting database objects with generative AI](schema-conversion-convert.databaseobjects.md).
+ You can convert schemas for different supported versions of your target database, not just the latest version. To specify a version, use the **Target engine version** setting. The available conversion options vary depending on the engine version that you select. Currently, PostgreSQL versions 14 and 15 are supported, and they have identical settings. If the target engine version that you use when you apply the converted code differs from the version specified in this setting, you might encounter compatibility issues and errors.
+ AWS DMS can convert Oracle materialized views to tables or materialized views on PostgreSQL. For **Materialized views**, choose how to convert your source materialized views.
+ Your source Oracle database can use the `ROWID` pseudocolumn. PostgreSQL doesn't support similar functionality. DMS Schema Conversion can emulate the `ROWID` pseudocolumn in the converted code using the `bigint` or `character varying` data type. To do so, choose **Use the bigint data type to emulate the ROWID pseudocolumn** or **Use the character varying data type to emulate the ROWID pseudocolumn** for **Row ID**.
  + To convert the `NUMBER` data type to the best-fitting numeric data type based on precision and scale, select **Use the optimized data type mapping for columns of the NUMBER data type**.
+ If your source Oracle code doesn't use the `ROWID` pseudocolumn, choose **Don't generate**. In this case, the converted code works faster.
+ Your source Oracle code can include the `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` functions with parameters that PostgreSQL doesn't support. By default, DMS Schema Conversion emulates the usage of these parameters in the converted code.

  You can use native PostgreSQL `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` functions when your source Oracle code lacks parameters that are unsupported by PostgreSQL. In this case, the converted code works faster. To do so, select the following values:
  + **Use a native PostgreSQL TO\_CHAR function**
  + **Use a native PostgreSQL TO\_DATE function**
  + **Use a native PostgreSQL TO\_NUMBER function**
+ Your database and applications can run in different time zones. By default, DMS Schema Conversion emulates time zones in the converted code. However, you don't need this emulation when your database and applications use the same time zone. In this case, select **Improve the performance of the converted code where the database and applications use the same time zone**.
+ To continue using sequences in your converted code, select **Populate converted sequences with the last value generated on the source side**.
+ In some cases, your source Oracle database might store only integer values in the primary or foreign key columns of the `NUMBER` data type. In these cases, AWS DMS can convert these columns to the `BIGINT` data type. This approach improves the performance of your converted code. To do so, select **Convert primary and foreign key columns of the NUMBER data type to the BIGINT data type**. Make sure that your source doesn't include floating point values in these columns to avoid data loss.
+ To skip deactivated triggers and constraints in your source code, choose **Convert only active triggers and constraints**.
+ You can use DMS Schema Conversion to convert string variables that are called as dynamic SQL. Your database code can change the values of these string variables. To make sure that AWS DMS always converts the latest value of this string variable, select **Convert the dynamic SQL code that is created in called routines**.
+ PostgreSQL versions 10 and earlier don't support procedures. If you aren't familiar with using procedures in PostgreSQL, AWS DMS can convert Oracle procedures to PostgreSQL functions. To do so, select **Convert procedures to functions**.
+ To see additional information about action items that occur, you can add specific functions to the extension pack. To do so, select **Add extension pack functions that raise user-defined exceptions**. Then choose the severity levels that raise user-defined exceptions. Make sure that you apply the extension pack schema after you convert your source database objects. For more information about extension packs, see [Using extension packs](extension-pack.md). 
+ Your source Oracle database can include constraints with automatically generated names. If your source code uses these names, make sure that you select **Keep the names of system generated constraints**. If your source code uses these constraints but not their names, clear this option to increase the conversion speed.
+ If your source and target databases run in different time zones, the function that emulates the `SYSDATE` built-in Oracle function returns different values compared to the source function. To make sure that your source and target functions return the same values, choose **Set the time zone of your source database**.
+ You can use the functions from the orafce extension in your converted code. To do so, for **Orafce built-in routines**, select the functions to use. For more information about orafce, see [orafce](https://github.com/orafce/orafce) on GitHub.
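
Before you select **Convert primary and foreign key columns of the NUMBER data type to the BIGINT data type**, it's worth verifying that the key columns really hold only integers. The helper below is a hypothetical client-side sketch for checking values you've fetched from the source key columns; it isn't part of DMS Schema Conversion:

```python
from decimal import Decimal

def safe_for_bigint(values):
    """Check that every key value fetched from an Oracle NUMBER column is
    an integer that fits in a signed 64-bit BIGINT. Hypothetical helper:
    run it over values queried from the source primary/foreign key columns
    before enabling the BIGINT mapping, to avoid silent data loss."""
    lo, hi = -2**63, 2**63 - 1
    for v in values:
        d = Decimal(str(v))
        if d != d.to_integral_value() or not lo <= int(d) <= hi:
            return False
    return True

print(safe_for_bigint([1, 42, 10**9]))       # → True
print(safe_for_bigint([1, Decimal("2.5")]))  # → False (fractional value)
```

In practice you would run an equivalent check as a query on the source database; this sketch only shows the condition that must hold.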

# Understanding SQL Server to MySQL conversion settings
<a name="schema-conversion-sql-server-mysql"></a>

SQL Server to MySQL conversion settings in DMS Schema Conversion include the following:
+ Your source SQL Server database can store the output of `EXEC` in a table. DMS Schema Conversion creates temporary tables and an additional procedure to emulate this feature. To use this emulation, select **Create additional routines to handle open datasets**.

# Understanding SQL Server to PostgreSQL conversion settings
<a name="schema-conversion-sql-server-postgresql"></a>

SQL Server to PostgreSQL conversion settings in DMS Schema Conversion include the following:
+ To convert applicable database objects using generative AI, enable the **Generative AI** setting. Objects successfully converted using generative AI are clearly identified with **Action Item 7744**, which states: "This conversion uses machine learning models that generate predictions based on patterns in data." For more information, see [Converting database objects with generative AI](schema-conversion-convert.databaseobjects.md).
+ In SQL Server, you can use indexes with the same name in different tables. However, in PostgreSQL, all index names that you use in the schema must be unique. To make sure that DMS Schema Conversion generates unique names for all your indexes, select **Generate unique names for indexes**.
+ PostgreSQL versions 10 and earlier don't support procedures. If you aren't familiar with using procedures in PostgreSQL, AWS DMS can convert SQL Server procedures to PostgreSQL functions. To do so, select **Convert procedures to functions**.
+ Your source SQL Server database can store the output of `EXEC` in a table. DMS Schema Conversion creates temporary tables and an additional procedure to emulate this feature. To use this emulation, select **Create additional routines to handle open datasets**.
+ You can define the template to use for the schema names in the converted code. For **Schema names**, choose one of the following options:
  + **DB** – Uses the SQL Server database name as a schema name in PostgreSQL.
  + **SCHEMA** – Uses the SQL Server schema name as a schema name in PostgreSQL.
  + **DB\_SCHEMA** – Uses a combination of the SQL Server database and schema names as a schema name in PostgreSQL.
+ You can keep the letter case in the names of source operands. To avoid conversion of operand names to lowercase, select **Avoid casting to lowercase for case-sensitive operations**. This option is applicable only if the case-sensitivity feature is enabled in the source database.
+ You can keep the parameter names from your source database. DMS Schema Conversion can add double quotation marks to the names of parameters in the converted code. To do so, select **Keep original parameter names**.
+ You can keep the length of routine parameters from your source database. DMS Schema Conversion creates domains and uses them to specify the length of routine parameters. To do so, select **Preserve parameter lengths**.
+ To convert unsupported built-in objects to stub objects, enable the **Convert unsupported built-in objects to stub objects** setting:
  + When enabled, DMS Schema Conversion replaces unsupported built-in objects with corresponding stub objects in the target database. This setting converts code sections that would otherwise be flagged with migration issue 7811 or 7904. It creates stub objects based on the type of the source built-in object: `PROCEDURE` for procedures, or `VIEW` for views and tables.

    Conversion of a source database object with a call of an unsupported object results in a call of a stub object and migration issue 7822. 

    You can choose to create stub objects in a separate schema by enabling the **Create stub objects in a separate schema** option. When selected, stub objects are created in a special schema named `aws_sqlserver_stub` in the target database. If not selected, they are created in the same schema as the calling objects. 
  + Stub routines are named based on the fully qualified name of the original built-in. For stub views, the naming convention includes the system schema name `system_schema_name$builtin_view_name`. 

    During re-conversion, DMS Schema Conversion checks for existing stub routines in the target database. If a routine with the same name and input parameters already exists, it is not overwritten. 

    After conversion, review and implement custom code for stub routines as needed.
+ The **Use CITEXT for all string datatypes** setting works as follows:
  + To use the `CITEXT` datatype for case-insensitive string operations when converting from SQL Server to PostgreSQL, enable the **Use CITEXT for all string datatypes** setting. This option helps maintain consistent behavior when migrating from a case-insensitive SQL Server to a case-sensitive PostgreSQL environment.
  + When enabled, DMS Schema Conversion converts all relevant string datatypes from the source SQL Server database to `CITEXT` in PostgreSQL. This eliminates the need for explicit `LOWER()` function calls in conditions and automatically casts string expressions in conditional operations to `CITEXT`.
  + To determine if your SQL Server instance is case-sensitive, run the following query:

    ```
    SELECT SERVERPROPERTY('COLLATION');
    ```

    A result containing 'CI' indicates case-insensitive, while 'CS' indicates case-sensitive.
  + The `CITEXT` conversion may not apply in scenarios where explicit case-sensitive collate settings are used at the server, database, or column level.

    To use this feature, ensure that the `CITEXT` module is installed and available in your target PostgreSQL database.
  + When using the `CITEXT` datatype conversion, consider the following best practices:
    + Enable this feature when migrating from a case-insensitive SQL Server to maintain consistent behavior in PostgreSQL.
    + Review your application code to ensure it doesn't rely on case-sensitive string operations.
    + Thoroughly test your application after migration to verify that case-insensitive behavior is maintained as expected.
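
The collation check described above returns a collation name whose tokens encode sensitivity (`CI` for case-insensitive, `CS` for case-sensitive). If you script pre-migration checks, a small helper like the following can classify the result; the function name and token-parsing approach are a sketch based on that naming convention, not an AWS API:

```python
def is_case_insensitive(collation_name):
    """Classify a SQL Server collation name by its sensitivity token:
    'CI' marks case-insensitive, 'CS' case-sensitive, following the
    convention described above. Hypothetical helper: feed it the result
    of SELECT SERVERPROPERTY('COLLATION')."""
    tokens = collation_name.upper().split("_")
    if "CI" in tokens:
        return True
    if "CS" in tokens:
        return False
    raise ValueError(f"no case-sensitivity token in {collation_name!r}")

print(is_case_insensitive("SQL_Latin1_General_CP1_CI_AS"))  # → True
print(is_case_insensitive("Latin1_General_CS_AS"))          # → False
```

A `True` result suggests enabling the `CITEXT` setting to preserve case-insensitive behavior on PostgreSQL.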

# Understanding PostgreSQL to MySQL conversion settings
<a name="schema-conversion-postgresql-mysql"></a>

PostgreSQL to MySQL conversion settings in DMS Schema Conversion include the following:
+ **Comments in converted SQL code**: This setting includes comments in the converted code for the action items of the selected severity and higher. This setting supports the following values: 
  + Errors only
  + Errors and warnings
  + All messages

# Understanding IBM Db2 for LUW to Amazon RDS for PostgreSQL conversion settings
<a name="schema-conversion-db2-luw-postgresql"></a>

**Note**  
The AWS Management Console does not support creating migration projects in DMS Schema Conversion that use IBM Db2 for LUW as a source data provider with Amazon RDS for PostgreSQL as a target data provider. Use the AWS CLI or DMS Schema Conversion API instead. 

IBM Db2 for Linux, UNIX and Windows to Amazon RDS for PostgreSQL conversion settings in DMS Schema Conversion include the following:
+  **Comments in converted SQL code**: This setting includes comments in the converted code for the action items of the selected severity and higher. This setting supports the following values: 
  + Errors only
  + Errors and warnings
  + All messages

# Understanding IBM Db2 for z/OS to Amazon RDS for Db2 conversion settings
<a name="schema-conversion-db2-zos-db2"></a>

IBM Db2 for z/OS to Amazon RDS for Db2 conversion settings in DMS Schema Conversion include the following:
+ **Comments in converted SQL code**: This setting includes comments in the converted code for the action items of the selected severity and higher. This setting supports the following values: 
  + Errors only
  + Errors and warnings
  + All messages

# Understanding IBM Db2 for z/OS to Amazon RDS for PostgreSQL conversion settings
<a name="schema-conversion-db2-zos-postgresql"></a>

**Note**  
The AWS Management Console does not support creating migration projects in DMS Schema Conversion that use IBM Db2 for z/OS as a source data provider with Amazon RDS for PostgreSQL as a target data provider. Use the AWS CLI or DMS Schema Conversion API instead. 

IBM Db2 for z/OS to Amazon RDS for PostgreSQL conversion settings in DMS Schema Conversion include the following:
+  **Comments in converted SQL code**: This setting includes comments in the converted code for the action items of the selected severity and higher. This setting supports the following values: 
  + Errors only
  + Errors and warnings
  + All messages

# Understanding SAP ASE (Sybase ASE) to PostgreSQL conversion settings
<a name="schema-conversion--sybase-ASE"></a>

SAP ASE (Sybase ASE) to PostgreSQL conversion settings in DMS Schema Conversion include the following:
+ To convert applicable database objects using generative AI, enable the **Generative AI** setting. Objects successfully converted using generative AI are clearly identified with **Action Item 3077**, which states: "This conversion uses machine learning models that generate predictions based on patterns in data." For more information, see [Converting database objects with generative AI](schema-conversion-convert.databaseobjects.md).
+ **Comments in converted SQL code:** This setting includes comments in the converted code for the action items of the selected severity and higher. This setting supports the following values: 
  + Errors only
  + Errors and warnings
  + All messages
+ You can define the template to use for the schema names in the converted code. For **Schema names**, choose one of the following options:
  + **DB** – Uses the SAP ASE (Sybase ASE) database name as a schema name in PostgreSQL.
  + **SCHEMA** – Uses the SAP ASE (Sybase ASE) schema name as a schema name in PostgreSQL.
  + **DB\_SCHEMA** – Uses a combination of the SAP ASE (Sybase ASE) database and schema names as a schema name in PostgreSQL.
+ You can keep the exact case of object names from your source database. To do so, select **Treat source database object names as case sensitive**. When this option is turned off, all database object names are converted to lowercase.
+ You can avoid casting operands to lowercase during case-insensitive operations. DMS Schema Conversion will not apply the LOWER function to operands in the converted code. To do so, select **Avoid casting operands to lowercase for case-insensitive operations**. When this option is not selected (default), DMS Schema Conversion automatically applies the LOWER function to convert operands to lowercase before performing case-insensitive comparisons.
+ In SAP ASE (Sybase ASE), you can create indexes with identical names across different tables. PostgreSQL requires all index names within a schema to be unique. To ensure that DMS Schema Conversion creates unique index names during migration, select **Generate unique names for indexes**. This option automatically adds prefixes to index names to prevent naming conflicts in your target PostgreSQL database.
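
The **Schema names** options above (which also appear in the SQL Server settings) map a source database/schema pair to a single PostgreSQL schema name. The sketch below models that mapping; the underscore separator for **DB\_SCHEMA** and the lowercasing (matching PostgreSQL's treatment of unquoted identifiers) are assumptions for this illustration:

```python
def target_schema_name(db, schema, template):
    """Map a source (database, schema) pair to a PostgreSQL schema name
    following the Schema names templates described above. Illustrative
    sketch only: the underscore separator and lowercasing are assumptions,
    not guaranteed DMS Schema Conversion behavior."""
    templates = {
        "DB": db,                       # database name only
        "SCHEMA": schema,               # schema name only
        "DB_SCHEMA": f"{db}_{schema}",  # combination of both
    }
    return templates[template].lower()

print(target_schema_name("Sales", "dbo", "DB_SCHEMA"))  # → sales_dbo
```

Choosing **DB** or **DB\_SCHEMA** is useful when several source databases with identically named schemas land in one PostgreSQL database.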

# Refreshing your database schemas in DMS Schema Conversion
<a name="schema-conversion-refresh"></a>

After you create a migration project, DMS Schema Conversion stores the information about your source and target schemas in this project. DMS Schema Conversion uses *lazy loading* to load metadata only as it is needed, such as when you choose a node in your database tree. You can use *eager loading* to load the schema information faster. To do so, choose your schema, and then choose **Load metadata** from **Actions**.

After DMS Schema Conversion loads an object into your migration project, automatically or manually, it doesn't use lazy loading for that object again. So when you change objects, such as tables and procedures in your database, make sure to refresh them in your migration project.

To refresh schemas from the database, select the objects that you want to refresh, and choose **Refresh from database** from **Actions**. You can refresh database objects in your source and target database schemas:
+ **Source** – If you update your source database schema, choose **Refresh from database** to replace the schema in your project with the latest schema from your source database.
+ **Target** – If you update the schema for your target database, DMS Schema Conversion replaces the schema in your project with the latest schema from your target database. DMS Schema Conversion replaces your converted code with the code from your target database. Make sure that you applied the converted code to your target database before you choose **Refresh from database**. Otherwise, you must convert your source database schema again.

# Saving and applying your converted code in DMS Schema Conversion
<a name="schema-conversion-save-apply"></a>

After DMS Schema Conversion converts your source database objects, it doesn't immediately apply the converted code to your target database. Instead, DMS Schema Conversion stores the converted code in your project until you are ready to apply it to your target database. 

Before you apply the converted code, you can update your source database code and convert the updated objects again to address the existing action items. For more information about items that DMS Schema Conversion can't convert automatically, see [Creating database migration assessment reports with DMS Schema Conversion](assessment-reports.md). For more information about refreshing your source database objects in a migration project for DMS Schema Conversion, see [Refreshing your database schemas](schema-conversion-refresh.md).

Instead of applying the converted code directly to your database in DMS Schema Conversion, you can save the code to a file as a SQL script. You can review these SQL scripts, edit them where necessary, and then manually apply these SQL scripts to your target database.

## Saving your converted code to a SQL file
<a name="schema-conversion-save"></a>

You can save your converted schema as SQL scripts in a text file. You can modify the converted code to address action items that DMS Schema Conversion can't convert automatically. You can then run your updated SQL scripts on your target database to apply the converted code to your target database.

**To save your converted schema as SQL scripts**

1. Sign in to the AWS Management Console and open the AWS DMS console at [https://console.aws.amazon.com/dms/v2/](https://console.aws.amazon.com/dms/v2/).

1. Choose **Migration projects**. The **Migration projects** page opens.

1. Choose your migration project, and then choose **Schema conversion**.

1. Choose **Launch schema conversion**. The **Schema conversion** page opens.

1. In the right pane, choose the target database schema or select the converted objects that you want to save. Make sure that DMS Schema Conversion highlights the parent node name in blue and activates the **Actions** menu for the target database.

1. Choose **Save as SQL** for **Actions**. The **Save** dialog box appears. 

1. Choose **Save as SQL** to confirm your choice. 

   DMS Schema Conversion creates an archive with SQL files and stores this archive in your Amazon S3 bucket. 

1. (Optional) Change the S3 bucket for the archive by editing the schema conversion settings in your instance profile.

1. Open the SQL scripts from your S3 bucket.

## Applying your converted code
<a name="schema-conversion-apply"></a>

When you are ready to apply your converted code to your target database, choose the database objects in the right pane of your project. You can apply changes to an entire database schema or selected database schema objects.

After you select the database objects, DMS Schema Conversion highlights the name of the selected node or the parent node in blue. It then activates the **Actions** menu. Choose **Apply changes** for **Actions**. In the dialog box that appears, choose **Apply** to confirm your choice and apply the converted code to your target database.

## Applying the extension pack schema
<a name="schema-conversion-save-apply-extension-pack"></a>

When you apply your converted schema to your target database for the first time, DMS Schema Conversion might also apply the extension pack schema. The extension pack schema emulates system functions of the source database that are required to run your converted code for your target database. If your converted code uses the functions of the extension pack, make sure that you apply the extension pack schema. 

To apply the extension pack to your target database manually, choose **Apply changes** for **Actions**. In the dialog box that appears, choose **Confirm** to apply the extension pack to your target database.

We recommend that you don't modify the extension pack schema to avoid unexpected results in the converted code.

For more information, see [Using extension packs in DMS Schema Conversion](extension-pack.md).

# Converting embedded SQL in Java applications
<a name="schema-conversion-embedded-sql"></a>

When you use AWS DMS and DMS Schema Conversion to migrate a database, you might need to convert the embedded SQL in your application to be compatible with your target database. Rather than converting it manually, you can use Amazon Q in your IDE to automate the conversion. Amazon Q uses metadata from a DMS Schema Conversion migration project to convert embedded SQL in your application to a version that is compatible with your target database. Amazon Q detects Oracle SQL code in your application and converts it into PostgreSQL syntax. For more information, see [Converting embedded SQL in Java applications with Amazon Q Developer](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/transform-sql.html).

**Downloading the metadata file for embedded SQL conversion in a Java application**

1. After you complete the conversion, close the project and go to the S3 bucket where your project is stored.

1. Open the folder and find the project folder (`sct-project`).

1. Download the object inside the project folder. This will be a zip file.

1. Use the downloaded zip file as an input to your transformation with Amazon Q: [Converting embedded SQL in Java applications with Amazon Q Developer](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/transform-sql.html).
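
Before handing the archive to Amazon Q, you can inspect it locally. The sketch below is a minimal example, assuming only that the download is a standard zip file; the stand-in archive and the `metadata.json` entry created here are illustrative assumptions, not the real project layout.

```python
# Sketch: list the contents of the downloaded project archive locally.
# The demo zip created below stands in for the real "sct-project" archive.
import os
import tempfile
import zipfile

def list_archive(path: str) -> list:
    """Return the file names inside the project zip."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

# Create a stand-in archive the way you might find one in your S3 bucket.
demo_path = os.path.join(tempfile.mkdtemp(), "sct-project-demo.zip")
with zipfile.ZipFile(demo_path, "w") as zf:
    zf.writestr("metadata.json", "{}")   # hypothetical archive entry

print(list_archive(demo_path))   # ['metadata.json']
```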

# Using extension packs in DMS Schema Conversion
<a name="extension-pack"></a>

An *extension pack* in DMS Schema Conversion is an add-on module that emulates source database functions that aren't supported in the target database. Use an extension pack to make sure that the converted code produces the same results as the source code. Before you can install an extension pack, convert your database schemas.

Each extension pack includes a database schema. This schema includes SQL functions, procedures, tables, and views for emulating specific online transaction processing (OLTP) objects or unsupported built-in functions from the source database.

When you convert your source database, DMS Schema Conversion adds an additional schema to your target database. This schema implements SQL system functions of the source database that are required to run your converted code on your target database. This additional schema is called the extension pack schema.

The extension pack schema is named according to your source database as follows:
+ Microsoft SQL Server – `aws_sqlserver_ext`
+ Oracle – `aws_oracle_ext`
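
In converted code, an emulated source function appears qualified with the extension pack schema named above. The sketch below illustrates that qualification in Python; the schema names come from the list above, but the `nvl` function mapping is an illustrative assumption, not the actual extension pack contents.

```python
# Sketch: qualify an emulated source-engine function with its extension
# pack schema. Schema names are from the documentation; the function
# mapping is a hypothetical example.

EXTENSION_PACK_SCHEMA = {
    "sqlserver": "aws_sqlserver_ext",
    "oracle": "aws_oracle_ext",
}

def qualify(engine: str, call: str) -> str:
    """Prefix an emulated function call with its extension pack schema."""
    return f"{EXTENSION_PACK_SCHEMA[engine]}.{call}"

print(qualify("oracle", "nvl(col, 0)"))   # aws_oracle_ext.nvl(col, 0)
```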

You can apply extension packs in two ways:
+ DMS Schema Conversion can automatically apply an extension pack when you apply your converted code. DMS Schema Conversion applies the extension pack before it applies all other schema objects.
+ You can apply an extension pack manually. To do so, choose the extension pack schema in your target database tree, choose **Apply**, and then choose **Apply extension pack**.

# AWS IAM actions to API mapping for DMS Schema Conversion and Common Studio Framework (CSF)
<a name="schema-conversion-iam.mapping"></a>

When setting up access control and writing IAM permissions policies for DMS Schema Conversion and Common Studio Framework, it is important to understand how API actions map to IAM permissions. While some actions share identical names across both interfaces, others differ significantly.

The following table lists the correct mapping between API operations and IAM actions:


**API to IAM Mapping**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/userguide/schema-conversion-iam.mapping.html)