

# Connecting to source databases with the AWS Schema Conversion Tool
<a name="CHAP_Source"></a>

AWS Schema Conversion Tool (AWS SCT) can convert schemas from the following source databases and data warehouses to a target database or data warehouse. For information about permissions, connections, and what AWS SCT can convert for use with the target database or data warehouse, see the details in the following topics.

**Encryption information**  
[Connecting to encrypted Amazon RDS and Aurora](CHAP_Source.Encrypt.RDS.md)

**Database sources**
+ [Connecting to Apache Cassandra](CHAP_Source.Cassandra.md)
+ [Connecting to Azure SQL](CHAP_Source.AzureSQL.md)
+ [Connecting to IBM DB2 for z/OS](CHAP_Source.DB2zOS.md)
+ [IBM Db2 LUW databases](CHAP_Source.DB2LUW.md)
+ [Using MySQL as a source](CHAP_Source.MySQL.md)
+ [Oracle databases](CHAP_Source.Oracle.md)
+ [PostgreSQL databases](CHAP_Source.PostgreSQL.md)
+ [SAP databases](CHAP_Source.SAP.md)
+ [SQL Server databases](CHAP_Source.SQLServer.md)

**Data warehouse sources**
+ [Amazon Redshift](CHAP_Source.Redshift.md)
+ [Azure Synapse Analytics as a source](CHAP_Source.AzureSynapse.md)
+ [BigQuery as a source](CHAP_Source.BigQuery.md)
+ [Greenplum databases](CHAP_Source.Greenplum.md)
+ [Netezza databases](CHAP_Source.Netezza.md)
+ [Oracle data warehouse](CHAP_Source.OracleDW.md)
+ [Snowflake](CHAP_Source.Snowflake.md)
+ [SQL Server Data Warehouses](CHAP_Source.SQLServerDW.md)
+ [Teradata databases](CHAP_Source.Teradata.md)
+ [Vertica databases](CHAP_Source.Vertica.md)

**Big data sources**
+ [Connecting to Apache Hadoop](CHAP_Source.Hadoop.md)
+ [Connecting to Apache Oozie](CHAP_Source.Oozie.md)

# Connecting to encrypted Amazon Relational Database Service and Amazon Aurora databases with the AWS Schema Conversion Tool
<a name="CHAP_Source.Encrypt.RDS"></a>

To open encrypted connections to Amazon RDS or Amazon Aurora databases from an application, you need to import AWS root certificates into some form of key storage. You can download the root certificates from AWS at [Using SSL/TLS to encrypt a connection to a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html) in the *Amazon RDS User Guide*. 

Two options are available: a root certificate that works for all AWS Regions, and a certificate bundle that contains both the old and new root certificates.

Depending on which you want to use, follow the steps in one of the two following procedures.

**To import the certificate or certificates into the Windows system storage**

1. Download the certificate or certificates. For information about downloading certificates, see [Using SSL/TLS to encrypt a connection to a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html) in the *Amazon RDS User Guide*.

1. In your Windows search window, enter **Manage computer certificates**. When prompted as to whether to let the application make changes to your computer, choose **Yes**.

1. When the certificates window opens, if needed expand **Certificates - Local Computer** so you can see the list of certificates. Open the context (right-click) menu for **Trusted Root Certification Authorities**, then choose **All Tasks**, **Import**.

1. Choose **Next**, then **Browse**, and find the `*.pem` file that you downloaded in step 1. Choose **Open** to select the certificate file, choose **Next**, and then choose **Finish**.
**Note**  
To find the file, change the file type in the browse window to **All files (\*.\*)**, because `.pem` is not a standard certificate extension.

1. In the Microsoft Management Console, expand **Certificates**. Then expand **Trusted Root Certification Authorities**, choose **Certificates**, and find the certificate to confirm that it exists. The name of the certificate starts with `Amazon RDS`.

1. Restart your computer.

**To import the certificate or certificates into the Java KeyStore**

1. Download the certificate or certificates. For information about downloading certificates, see [Using SSL/TLS to encrypt a connection to a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html) in the *Amazon RDS User Guide*.

1. If you downloaded the certificate bundle, split it into individual certificate files. To do so, place each certificate block, beginning with `-----BEGIN CERTIFICATE-----` and ending with `-----END CERTIFICATE-----`, into a separate `*.pem` file. After you have created a separate `*.pem` file for each certificate, you can safely remove the certificate bundle file.

1. Open a command window or terminal session in the directory where you downloaded the certificate, and run the following command for every `*.pem` file that you created in the previous step.

   ```
   keytool -importcert -file <filename>.pem -alias <filename>.pem -keystore storename
   ```  
**Example**  

   The following example assumes that you downloaded the `eu-west-1-bundle.pem` file.

   ```
   keytool -importcert -file eu-west-1-bundle.pem -alias eu-west-1-bundle.pem -keystore trust-2019.ks
   Picked up JAVA_TOOL_OPTIONS: -Dlog4j2.formatMsgNoLookups=true
   Enter keystore password:
   Re-enter new password:
   Owner: CN=Amazon RDS Root 2019 CA, OU=Amazon RDS, O="Amazon Web Services, Inc.", ST=Washington, L=Seattle, C=US
   Issuer: CN=Amazon RDS Root 2019 CA, OU=Amazon RDS, O="Amazon Web Services, Inc.", ST=Washington, L=Seattle, C=US
   Serial number: c73467369250ae75
   Valid from: Thu Aug 22 19:08:50 CEST 2019 until: Thu Aug 22 19:08:50 CEST 2024
   Certificate fingerprints:
            SHA1: D4:0D:DB:29:E3:75:0D:FF:A6:71:C3:14:0B:BF:5F:47:8D:1C:80:96
            SHA256: F2:54:C7:D5:E9:23:B5:B7:51:0C:D7:9E:F7:77:7C:1C:A7:E6:4A:3C:97:22:E4:0D:64:54:78:FC:70:AA:D0:08
   Signature algorithm name: SHA256withRSA
   Subject Public Key Algorithm: 2048-bit RSA key
   Version: 3
   
   Extensions:
   
   #1: ObjectId: 2.5.29.35 Criticality=false
   AuthorityKeyIdentifier [
   KeyIdentifier [
   0000: 73 5F 60 D8 BC CB 03 98   F4 2B 17 34 2E 36 5A A6  s_`......+.4.6Z.
   0010: 60 FF BC 1F                                        `...
   ]
   ]
   
   #2: ObjectId: 2.5.29.19 Criticality=true
   BasicConstraints:[
     CA:true
     PathLen:2147483647
   ]
   
   #3: ObjectId: 2.5.29.15 Criticality=true
   KeyUsage [
     Key_CertSign
     Crl_Sign
   ]
   
   #4: ObjectId: 2.5.29.14 Criticality=false
   SubjectKeyIdentifier [
   KeyIdentifier [
   0000: 73 5F 60 D8 BC CB 03 98   F4 2B 17 34 2E 36 5A A6  s_`......+.4.6Z.
   0010: 60 FF BC 1F                                        `...
   ]
   ]
   
   Trust this certificate? [no]:  yes
   Certificate was added to keystore
   ```

1. Add the keystore as a trust store in AWS SCT. To do so, from the main menu choose **Settings**, **Global settings**, **Security**, **Trust store**, and then choose **Select existing trust store**. 

   After adding the trust store, you can use it to configure an SSL-enabled connection when you create an AWS SCT connection to the database. In the AWS SCT **Connect to database** dialog box, choose **Use SSL**, and then choose the trust store that you entered previously.
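Step 2 of the preceding procedure splits the bundle by hand. You can also script the split. The following is a minimal Python sketch of that step, not part of AWS SCT; the `cert-<n>.pem` naming scheme is illustrative, because `keytool` doesn't require particular file names.

```python
import re
from pathlib import Path

def split_pem_bundle(bundle_path, out_dir="."):
    """Split a combined PEM bundle into one *.pem file per certificate."""
    text = Path(bundle_path).read_text()
    # Each certificate is delimited by BEGIN/END CERTIFICATE markers.
    blocks = re.findall(
        r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
        text,
        flags=re.DOTALL,
    )
    out_files = []
    for i, block in enumerate(blocks, start=1):
        out_file = Path(out_dir) / f"cert-{i}.pem"   # illustrative name
        out_file.write_text(block + "\n")
        out_files.append(out_file)
    return out_files
```

You can then run the `keytool -importcert` command shown earlier once for each file that the script writes.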

# Connecting to Apache Cassandra databases with the AWS Schema Conversion Tool
<a name="CHAP_Source.Cassandra"></a>

You can use AWS SCT to convert keyspaces from Apache Cassandra to Amazon DynamoDB. 

## Connecting to Apache Cassandra as a source
<a name="CHAP_Source.Cassandra.Connecting"></a>

Use the following procedure to connect to your Apache Cassandra source database with the AWS Schema Conversion Tool. 

**To connect to an Apache Cassandra source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Cassandra**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Apache Cassandra source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Cassandra.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

# Connecting to Apache Hadoop databases with the AWS Schema Conversion Tool
<a name="CHAP_Source.Hadoop"></a>

You can use the AWS SCT command line interface (CLI) to migrate from Apache Hadoop to Amazon EMR. AWS SCT uses your Amazon S3 bucket as temporary storage for your data during migration.

AWS SCT supports Apache Hadoop version 2.2.0 and higher as a source, and Apache Hive version 0.13.0 and higher.

As a target, AWS SCT supports Amazon EMR version 6.3.0 and higher. AWS SCT also supports Apache Hadoop version 2.6.0 and higher and Apache Hive version 0.13.0 and higher as targets.

**Topics**
+ [Prerequisites for using Apache Hadoop as a source](#CHAP_Source.Hadoop.Prerequisites)
+ [Permissions for using Hive as a source](#CHAP_Source.Hadoop.Permissions)
+ [Permissions for using HDFS as a source](#CHAP_Source.Hadoop.PermissionsHDFS)
+ [Permissions for using HDFS as a target](#CHAP_Source.Hadoop.PermissionsHDFSTarget)
+ [Connecting to Apache Hadoop as a source](#CHAP_Source.Hadoop.Connecting)
+ [Connecting to your source Hive and HDFS services](#CHAP_Source.Hadoop.Hive)
+ [Connecting to Amazon EMR as a target](#CHAP_Source.Hadoop.Target)

## Prerequisites for using Apache Hadoop as a source
<a name="CHAP_Source.Hadoop.Prerequisites"></a>

The following prerequisites are required to connect to Apache Hadoop with the AWS SCT CLI.
+ Create an Amazon S3 bucket to store data during the migration. You can then copy data to Amazon EMR HDFS or use Amazon S3 as a data repository for your Hadoop workloads. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.
+ Create an AWS Identity and Access Management (IAM) role with the `AmazonS3FullAccess` policy. AWS SCT uses this IAM role to access your Amazon S3 bucket.
+ Make a note of your AWS access key and AWS secret access key. For more information about AWS access keys, see [Managing access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) in the *IAM User Guide*.
+ Create and configure a target Amazon EMR cluster. For more information, see [Getting started with Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-gs.html) in the *Amazon EMR Management Guide*.
+ Install the `distcp` utility on your source Apache Hadoop cluster. Also, install the `s3-dist-cp` utility on your target Amazon EMR cluster. Make sure that your database users have permissions to run these utilities.
+ Configure the `core-site.xml` file in your source Hadoop cluster to use the s3a protocol. To do so, set the `fs.s3a.aws.credentials.provider` parameter to one of the following values.
  + `org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider`
  + `org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider`
  + `org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider`
  + `org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider`

  You can add the following code example into the `core-site.xml` file.

  ```
  <property>
    <name>fs.s3a.aws.credentials.provider</name>
    <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
  </property>
  ```

  The preceding example shows one of the four options from the preceding list of options. If you don't set the `fs.s3a.aws.credentials.provider` parameter in the `core-site.xml` file, AWS SCT chooses the provider automatically.
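  If you choose `SimpleAWSCredentialsProvider`, the s3a connector also needs the key pair itself. The following sketch shows the companion properties in `core-site.xml`; the key values are placeholders that you replace with your own credentials.

  ```
  <property>
    <name>fs.s3a.access.key</name>
    <value>AKIAIOSFODNN7EXAMPLE</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY</value>
  </property>
  ```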

## Permissions for using Hive as a source
<a name="CHAP_Source.Hadoop.Permissions"></a>

The permissions required for a Hive source user are as follows:
+ `READ` access to the source data folders and to the source Amazon S3 bucket
+ `READ+WRITE` access to the intermediate and target Amazon S3 buckets

To increase the migration speed, we recommend that you run compaction for ACID-transactional source tables.

The permissions required for an Amazon EMR Hive target user are as follows:
+ `READ` access to the target Amazon S3 bucket
+ `READ+WRITE` access to the intermediate Amazon S3 bucket
+ `READ+WRITE` access to the target HDFS folders

## Permissions for using HDFS as a source
<a name="CHAP_Source.Hadoop.PermissionsHDFS"></a>

The permissions required for HDFS as a source are as follows:
+ `EXECUTE` for the NameNode
+ `EXECUTE+READ` for all source folders and files that you include in the migration project
+ `READ+WRITE` for the `tmp` directory in the NameNode to run Spark jobs and store files before the migration to Amazon S3

In HDFS, all operations require traversal access. Traversal access demands the `EXECUTE` permission on all existing components of the path, except for the final path component. For example, for any operation accessing `/foo/bar/baz`, your user must have `EXECUTE` permission on `/`, `/foo`, and `/foo/bar`.

The following code example demonstrates how to grant `EXECUTE+READ` permissions for your source folders and files, and `READ+WRITE` permissions for the `tmp` directory.

```
hadoop fs -chmod -R 744 /user/hdfs-data
hadoop fs -chmod -R 766 /tmp
```
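The traversal rule described above can be sketched as a small helper that lists the ancestor directories needing the `EXECUTE` permission. This is an illustration of the rule only, not part of AWS SCT or Hadoop.

```python
def traversal_components(hdfs_path):
    """Return the ancestor directories that need EXECUTE permission
    before `hdfs_path` itself can be accessed: every path component
    except the final one, plus the root."""
    parts = [p for p in hdfs_path.split("/") if p]
    ancestors = ["/"]
    for i in range(len(parts) - 1):
        ancestors.append("/" + "/".join(parts[: i + 1]))
    return ancestors

# The /foo/bar/baz example from the text:
print(traversal_components("/foo/bar/baz"))  # ['/', '/foo', '/foo/bar']
```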

## Permissions for using HDFS as a target
<a name="CHAP_Source.Hadoop.PermissionsHDFSTarget"></a>

The permissions required for Amazon EMR HDFS as a target are as follows:
+ `EXECUTE` for the NameNode of the target Amazon EMR cluster
+ `READ+WRITE` for the target HDFS folders where you will store data after migration

## Connecting to Apache Hadoop as a source
<a name="CHAP_Source.Hadoop.Connecting"></a>

You can use Apache Hadoop as a source in AWS SCT version 1.0.670 or higher. You can migrate Hadoop clusters to Amazon EMR only in the AWS SCT command line interface (CLI). Before you start, familiarize yourself with the command line interface of AWS SCT. For more information, see [CLI Reference for AWS Schema Conversion Tool](CHAP_Reference.md).

**To connect to Apache Hadoop in the AWS SCT CLI**

1. Create a new AWS SCT CLI script or edit an existing scenario template. For example, you can download and edit the `HadoopMigrationTemplate.scts` template. For more information, see [Getting CLI scenarios](CHAP_Reference.md#CHAP_Reference.Scenario).

1. Configure the AWS SCT application settings such as the driver location and log folder.

   Download the required JDBC driver and specify the location where you store the file. For more information, see [Installing JDBC drivers for AWS Schema Conversion Tool](CHAP_Installing.JDBCDrivers.md).

   The following code example shows you how to add the path to the Apache Hive driver. After you run this code example, AWS SCT stores log files in the `c:\sct` folder.

   ```
   SetGlobalSettings
       -save: 'true'
       -settings: '{
           "hive_driver_file": "c:\\sct\\HiveJDBC42.jar",
           "log_folder": "c:\\sct",
           "console_log_folder": "c:\\sct"
       }'
   /
   ```

   You can use this example and the following examples on Windows.

1. Create a new AWS SCT project.

   The following code example creates the `hadoop_emr` project in the `c:\sct` folder.

   ```
   CreateProject
       -name: 'hadoop_emr'
       -directory: 'c:\sct'
   /
   ```

1. Add your source Hadoop cluster to the project.

   Use the `AddSourceCluster` command to connect to the source Hadoop cluster. Make sure that you provide values for the following mandatory parameters: `name`, `host`, `port`, and `user`. Other parameters are optional.

   The following code example adds the source Hadoop cluster. This example sets `HADOOP_SOURCE` as the name of the source cluster. Use this object name to add Hive and HDFS services to the project and create mapping rules.

   ```
   AddSourceCluster
       -name: 'HADOOP_SOURCE'
       -vendor: 'HADOOP'
       -host: 'hadoop_address'
       -port: '22'
       -user: 'hadoop_user'
       -password: 'hadoop_password'
       -useSSL: 'true'
       -privateKeyPath: 'c:\path\name.pem'
       -passPhrase: 'hadoop_passphrase'
   /
   ```

   In the preceding example, replace *hadoop\_address* with the IP address of your Hadoop cluster. If needed, configure the value of the port option. Next, replace *hadoop\_user* and *hadoop\_password* with the name of your Hadoop user and the password for this user. For *path\_name*, enter the name and path to the PEM file for your source Hadoop cluster.

1. Save your CLI script. Next, add the connection information for your Hive and HDFS services.

## Connecting to your source Hive and HDFS services
<a name="CHAP_Source.Hadoop.Hive"></a>

You can connect to your source Hive and HDFS services with the AWS SCT CLI. To connect to Apache Hive, use the Hive JDBC driver version 2.3.4 or higher. For more information, see [Installing JDBC drivers for AWS Schema Conversion Tool](CHAP_Installing.JDBCDrivers.md).

AWS SCT connects to Apache Hive with the `hadoop` cluster user. To do so, use the `AddSourceClusterHive` and `AddSourceClusterHDFS` commands. You can use one of the following approaches.
+ Create a new SSH tunnel.

  For `createTunnel`, enter **true**. For `host`, enter the internal IP address of your source Hive or HDFS service. For `port`, enter the service port of your Hive or HDFS service.

  Next, enter your Hive or HDFS credentials for `user` and `password`. For more information about SSH tunnels, see [Set up an SSH tunnel to the primary node using local port forwarding](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-ssh-tunnel-local.html) in the Amazon EMR Management Guide.
+ Use an existing SSH tunnel.

  For `host`, enter **localhost**. For `port`, enter the local port from the SSH tunnel parameters.
+ Connect to your Hive and HDFS services directly.

  For `host`, enter the IP address or hostname of your source Hive or HDFS service. For `port`, enter the service port of your Hive or HDFS service. Next, enter your Hive or HDFS credentials for `user` and `password`.

**To connect to Hive and HDFS in the AWS SCT CLI**

1. Open your CLI script that includes the connection information for your source Hadoop cluster. Make sure that you use the name of the Hadoop cluster that you defined in the previous step.

1. Add your source Hive service to the project.

   Use the `AddSourceClusterHive` command to connect to the source Hive service. Make sure that you provide values for the following mandatory parameters: `user`, `password`, `cluster`, `name`, and `port`. Other parameters are optional.

   The following code example creates a tunnel for AWS SCT to work with your Hive service. This source Hive service runs on the same PC as AWS SCT. This example uses the `HADOOP_SOURCE` source cluster from the previous example.

   ```
   AddSourceClusterHive
       -cluster: 'HADOOP_SOURCE'
       -name: 'HIVE_SOURCE'
       -host: 'localhost'
       -port: '10005'
       -user: 'hive_user'
       -password: 'hive_password'
       -createTunnel: 'true'
       -localPort: '10005'
       -remoteHost: 'hive_remote_address'
       -remotePort: 'hive_port'
   /
   ```

   The following code example connects to your Hive service without a tunnel.

   ```
   AddSourceClusterHive
       -cluster: 'HADOOP_SOURCE'
       -name: 'HIVE_SOURCE'
       -host: 'hive_address'
       -port: 'hive_port'
       -user: 'hive_user'
       -password: 'hive_password'
   /
   ```

   In the preceding examples, replace *hive\_user* and *hive\_password* with the name of your Hive user and the password for this user.

   Next, replace *hive\_address* and *hive\_port* with the NameNode IP address and port of your source Hadoop cluster.

   For *hive\_remote\_address*, you might use the default value `127.0.0.1` or the NameNode IP address of your source Hive service.

1. Add your source HDFS service to the project.

   Use the `AddSourceClusterHDFS` command to connect to the source HDFS service. Make sure that you provide values for the following mandatory parameters: `user`, `password`, `cluster`, `name`, and `port`. Other parameters are optional.

   Make sure that your user has the required permissions to migrate data from your source HDFS service. For more information, see [Permissions for using Hive as a source](#CHAP_Source.Hadoop.Permissions).

   The following code example creates a tunnel for AWS SCT to work with your Apache HDFS service. This example uses the `HADOOP_SOURCE` source cluster that you created before.

   ```
   AddSourceClusterHDFS
       -cluster: 'HADOOP_SOURCE'
       -name: 'HDFS_SOURCE'
       -host: 'localhost'
       -port: '9005'
       -user: 'hdfs_user'
       -password: 'hdfs_password'
       -createTunnel: 'true'
       -localPort: '9005'
       -remoteHost: 'hdfs_remote_address'
       -remotePort: 'hdfs_port'
   /
   ```

   The following code connects to your Apache HDFS service without a tunnel.

   ```
   AddSourceClusterHDFS
       -cluster: 'HADOOP_SOURCE'
       -name: 'HDFS_SOURCE'
       -host: 'hdfs_address'
       -port: 'hdfs_port'
       -user: 'hdfs_user'
       -password: 'hdfs_password'
   /
   ```

   In the preceding examples, replace *hdfs\_user* and *hdfs\_password* with the name of your HDFS user and the password for this user.

   Next, replace *hdfs\_address* and *hdfs\_port* with the NameNode IP address and port of your source Hadoop cluster.

   For *hdfs\_remote\_address*, you might use the default value `127.0.0.1` or the NameNode IP address of your source Hive service.

1. Save your CLI script. Next, add the connection information for your target Amazon EMR cluster, and the migration commands.

## Connecting to Amazon EMR as a target
<a name="CHAP_Source.Hadoop.Target"></a>

You can connect to your target Amazon EMR cluster with the AWS SCT CLI. To do so, you authorize inbound traffic and use SSH. In this case, AWS SCT has all required permissions to work with your Amazon EMR cluster. For more information, see [Before you connect](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-ssh-prereqs.html) and [Connect to the primary node using SSH](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-master-node-ssh.html) in the Amazon EMR Management Guide.

AWS SCT connects to Amazon EMR Hive with the `hadoop` cluster user. To connect to Amazon EMR Hive, use the Hive JDBC driver version 2.6.2.1002 or higher. For more information, see [Installing JDBC drivers for AWS Schema Conversion Tool](CHAP_Installing.JDBCDrivers.md).

**To connect to Amazon EMR in the AWS SCT CLI**

1. Open your CLI script that includes the connection information for your source Hadoop cluster. Add the target Amazon EMR credentials into this file.

1. Add your target Amazon EMR cluster to the project.

   The following code example adds the target Amazon EMR cluster. This example sets `HADOOP_TARGET` as the name of the target cluster. Use this object name to add your Hive and HDFS services and an Amazon S3 bucket folder to the project, and to create mapping rules.

   ```
   AddTargetCluster
   	-name: 'HADOOP_TARGET'
   	-vendor: 'AMAZON_EMR'
   	-host: 'ec2-44-44-55-66.eu-west-1.EXAMPLE.amazonaws.com'
   	-port: '22'
   	-user: 'emr_user'
   	-password: 'emr_password'
   	-useSSL: 'true'
   	-privateKeyPath: 'c:\path\name.pem'
   	-passPhrase: '1234567890abcdef0!'
   	-s3Name: 'S3_TARGET'
   	-accessKey: 'AKIAIOSFODNN7EXAMPLE'
   	-secretKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
   	-region: 'eu-west-1'
   	-s3Path: 'doc-example-bucket/example-folder'
   /
   ```

   In the preceding example, enter your AWS resource names and Amazon EMR connection information. This includes the IP address of your Amazon EMR cluster, AWS access key, AWS secret access key, and Amazon S3 bucket. If needed, configure the value of the port variable. Next, replace *emr\_user* and *emr\_password* with the name of your Amazon EMR user and the password for this user. For *path\_name*, enter the name and path to the PEM file for your target Amazon EMR cluster. For more information, see [Download PEM File for EMR Cluster Access](https://docs.aws.amazon.com/whitepapers/latest/teaching-big-data-skills-with-amazon-emr/download-pem-file-for-emr-cluster-access.html).

1. Add your target Amazon S3 bucket to the project.

   The following code example adds the target Amazon S3 bucket. This example uses the `HADOOP_TARGET` cluster that you created before.

   ```
   AddTargetClusterS3
   	-cluster: 'HADOOP_TARGET'
   	-Name: 'S3_TARGET'
   	-accessKey: 'AKIAIOSFODNN7EXAMPLE'
   	-secretKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
   	-region: 'eu-west-1'
   	-s3Path: 'doc-example-bucket/example-folder'
   /
   ```

   In the preceding example, enter your AWS access key, AWS secret access key, and Amazon S3 bucket.

1. Add your target Hive service to the project.

   The following code example creates a tunnel for AWS SCT to work with your target Hive service. This example uses the `HADOOP_TARGET` target cluster that you created before.

   ```
   AddTargetClusterHive
       -cluster: 'HADOOP_TARGET'
       -name: 'HIVE_TARGET'
       -host: 'localhost'
       -port: '10006'
       -user: 'hive_user'
       -password: 'hive_password'
       -createTunnel: 'true'
       -localPort: '10006'
       -remoteHost: 'hive_address'
       -remotePort: 'hive_port'
   /
   ```

   In the preceding example, replace *hive\_user* and *hive\_password* with the name of your Hive user and the password for this user.

   Next, replace *hive\_address* with the default value `127.0.0.1` or with the NameNode IP address of your target Hive service. Next, replace *hive\_port* with the port of your target Hive service.

1. Add your target HDFS service to the project.

   The following code example creates a tunnel for AWS SCT to work with your Apache HDFS service. This example uses the `HADOOP_TARGET` target cluster that you created before.

   ```
   AddTargetClusterHDFS
       -cluster: 'HADOOP_TARGET'
       -name: 'HDFS_TARGET'
       -host: 'localhost'
       -port: '8025'
       -user: 'hdfs_user'
       -password: 'hdfs_password'
       -createTunnel: 'true'
       -localPort: '8025'
       -remoteHost: 'hdfs_address'
       -remotePort: 'hdfs_port'
   /
   ```

   In the preceding example, replace *hdfs\_user* and *hdfs\_password* with the name of your HDFS user and the password for this user.

   Next, replace *hdfs\_address* and *hdfs\_port* with the private IP address and port of the NameNode of your target HDFS service.

1. Save your CLI script. Next, add mapping rules and migration commands. For more information, see [Migrating Hadoop workloads](big-data-hadoop.md).

# Connecting to Apache Oozie workflows with the AWS Schema Conversion Tool
<a name="CHAP_Source.Oozie"></a>

You can use the AWS SCT command line interface (CLI) to convert Apache Oozie workflows to AWS Step Functions. After you migrate your Apache Hadoop workloads to Amazon EMR, you can use a native service in the AWS Cloud to orchestrate your jobs. For more information, see [Connecting to Apache Hadoop](CHAP_Source.Hadoop.md).

AWS SCT converts your Oozie workflows to AWS Step Functions and uses AWS Lambda to emulate features that AWS Step Functions doesn't support. Also, AWS SCT converts your Oozie job properties to parameters in AWS Systems Manager.

To convert Apache Oozie workflows, make sure that you use AWS SCT version 1.0.671 or higher. Also, familiarize yourself with the command line interface of AWS SCT. For more information, see [CLI Reference for AWS Schema Conversion Tool](CHAP_Reference.md).

## Prerequisites for using Apache Oozie as a source
<a name="CHAP_Source.Oozie.Prerequisites"></a>

The following prerequisites are required to connect to Apache Oozie with the AWS SCT CLI.
+ Create an Amazon S3 bucket to store the definitions of state machines. You can use these definitions to configure your state machines. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.
+ Create an AWS Identity and Access Management (IAM) role with the `AmazonS3FullAccess` policy. AWS SCT uses this IAM role to access your Amazon S3 bucket.
+ Make a note of your AWS access key and AWS secret access key. For more information about AWS access keys, see [Managing access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) in the *IAM User Guide*.
+ Store your AWS credentials and the information about your Amazon S3 bucket in the AWS service profile in the global application settings. Then, AWS SCT uses this AWS service profile to work with your AWS resources. For more information, see [Managing Profiles in the AWS Schema Conversion Tool](CHAP_UserInterface.Profiles.md).

To work with your source Apache Oozie workflows, AWS SCT requires a specific structure for your source files. Each of your application folders must include the `job.properties` file. This file includes key-value pairs of your job properties. Also, each of your application folders must include the `workflow.xml` file. This file describes the action nodes and control flow nodes of your workflow.
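As an illustration of this structure, an application folder might contain files like the following. All names, hosts, and paths here are hypothetical placeholders.

A minimal `job.properties` file with key-value pairs:

```
nameNode=hdfs://namenode-address:8020
jobTracker=resourcemanager-address:8032
oozie.wf.application.path=${nameNode}/user/hadoop/apps/example
```

A minimal `workflow.xml` file with one action node and the control flow nodes around it:

```
<workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="run-step"/>
  <action name="run-step">
    <fs>
      <mkdir path="${nameNode}/user/hadoop/output"/>
    </fs>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Workflow failed</message>
  </kill>
  <end name="end"/>
</workflow-app>
```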

## Connecting to Apache Oozie as a source
<a name="CHAP_Source.Oozie.Connecting"></a>

Use the following procedure to connect to your Apache Oozie source files.

**To connect to Apache Oozie in the AWS SCT CLI**

1. Create a new AWS SCT CLI script or edit an existing scenario template. For example, you can download and edit the `OozieConversionTemplate.scts` template. For more information, see [Getting CLI scenarios](CHAP_Reference.md#CHAP_Reference.Scenario).

1. Configure the AWS SCT application settings.

   The following code example saves the application settings and allows AWS SCT to store passwords in your project. You can use these saved settings in other projects.

   ```
   SetGlobalSettings
       -save: 'true'
       -settings: '{
           "store_password": "true"
       }'
   /
   ```

1. Create a new AWS SCT project.

   The following code example creates the `oozie` project in the `c:\sct` folder.

   ```
   CreateProject
       -name: 'oozie'
       -directory: 'c:\sct'
   /
   ```

1. Add the folder with your source Apache Oozie files to the project using the `AddSource` command. Make sure that you use the `APACHE_OOZIE` value for the `vendor` parameter. Also, provide values for the following required parameters: `name` and `mappingsFolder`.

   The following code example adds Apache Oozie as a source in your AWS SCT project. This example creates a source object with the name `OOZIE`. Use this object name to add mapping rules. After you run this code example, AWS SCT uses the `c:\oozie` folder to load your source files in the project.

   ```
   AddSource
       -name: 'OOZIE'
       -vendor: 'APACHE_OOZIE'
       -mappingsFolder: 'c:\oozie'
   /
   ```

   You can use this example and the following examples on Windows.

1. Connect to your source Apache Oozie files using the `ConnectSource` command. Use the name of your source object that you defined in the previous step.

   ```
   ConnectSource
       -name: 'OOZIE'
       -mappingsFolder: 'c:\oozie'
   /
   ```

1. Save your CLI script. Next, add the connection information for your AWS Step Functions service.

## Permissions for using AWS Lambda functions in the extension pack
<a name="CHAP_Source.Oozie.TargetPrerequisites"></a>

For the source functions that AWS Step Functions doesn't support, AWS SCT creates an extension pack. This extension pack includes AWS Lambda functions, which emulate your source functions.

To use this extension pack, create an AWS Identity and Access Management (IAM) role with the following permissions.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "lambda",
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": [
                "arn:aws:lambda:*:498160209112:function:LoadParameterInitialState:*",
                "arn:aws:lambda:*:498160209112:function:EvaluateJSPELExpressions:*"
            ]
        },
        {
            "Sid": "emr",
            "Effect": "Allow",
            "Action": [
                "elasticmapreduce:DescribeStep",
                "elasticmapreduce:AddJobFlowSteps"
            ],
            "Resource": [
                "arn:aws:elasticmapreduce:*:498160209112:cluster/*"
            ]
        },
        {
            "Sid": "s3",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::*/*"
            ]
        }
    ]
}
```

------

To apply the extension pack, AWS SCT requires an IAM role with the following permissions.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "iam:ListRolePolicies",
                "iam:CreateRole",
                "iam:TagRole",
                "iam:PutRolePolicy",
                "iam:DeleteRolePolicy",
                "iam:DeleteRole",
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::111122223333:role/sct/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "iam:ListRolePolicies"
            ],
            "Resource": [
                "arn:aws:iam::111122223333:role/lambda_LoadParameterInitialStateRole",
                "arn:aws:iam::111122223333:role/lambda_EvaluateJSPELExpressionsRole",
                "arn:aws:iam::111122223333:role/stepFunctions_MigratedOozieWorkflowRole"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "lambda:GetFunction",
                "lambda:CreateFunction",
                "lambda:UpdateFunctionCode",
                "lambda:DeleteFunction"
            ],
            "Resource": [
                "arn:aws:lambda:*:111122223333:function:LoadParameterInitialState",
                "arn:aws:lambda:*:111122223333:function:EvaluateJSPELExpressions"
            ]
        }
    ]
}
```

------

## Connecting to AWS Step Functions as a target
<a name="CHAP_Source.Oozie.Target"></a>

Use the following procedure to connect to AWS Step Functions as a target.

**To connect to AWS Step Functions in the AWS SCT CLI**

1. Open your CLI script that includes the connection information for your Apache Oozie source files.

1. Add the information about your migration target in the AWS SCT project using the `AddTarget` command. Make sure that you use the `STEP_FUNCTIONS` value for the `vendor` parameter. Also, provide values for the following required parameters: `name` and `profile`.

   The following code example adds AWS Step Functions as a target in your AWS SCT project. This example creates a target object with the name `AWS_STEP_FUNCTIONS`. Use this object name when you create mapping rules. Also, this example uses an AWS SCT service profile that you created in the prerequisites step. Make sure that you replace *profile\_name* with the name of your profile.

   ```
   AddTarget
       -name: 'AWS_STEP_FUNCTIONS'
       -vendor: 'STEP_FUNCTIONS'
       -profile: 'profile_name'
   /
   ```

   If you don't use the AWS service profile, make sure that you provide values for the following required parameters: `accessKey`, `secretKey`, `awsRegion`, and `s3Path`. Use these parameters to specify your AWS access key, AWS secret access key, AWS Region, and the path to your Amazon S3 bucket.
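
   For example, a connection without a service profile might look like the following sketch. The credential, Region, and bucket values are placeholders that you replace with your own.

   ```
   AddTarget
       -name: 'AWS_STEP_FUNCTIONS'
       -vendor: 'STEP_FUNCTIONS'
       -accessKey: 'your_access_key'
       -secretKey: 'your_secret_key'
       -awsRegion: 'us-east-1'
       -s3Path: 'your_bucket/sct'
   /
   ```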

1. Connect to AWS Step Functions using the `ConnectTarget` command. Use the name of your target object that you defined in the previous step.

   The following code example connects to the `AWS_STEP_FUNCTIONS` target object using your AWS service profile. Make sure that you replace *profile\_name* with the name of your profile.

   ```
   ConnectTarget
       -name: 'AWS_STEP_FUNCTIONS'
       -profile: 'profile_name'
   /
   ```

1. Save your CLI script. Next, add mapping rules and migration commands. For more information, see [Converting Oozie workflows](big-data-oozie.md).

# Connecting to Microsoft Azure SQL Databases with the AWS SCT
<a name="CHAP_Source.AzureSQL"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Azure SQL Database to the following targets: 
+ Amazon RDS for MySQL
+ Amazon Aurora MySQL-Compatible Edition
+ Amazon RDS for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition

**Topics**
+ [Privileges for Azure SQL Database as a source](#CHAP_Source.AzureSQL.Permissions)
+ [Connecting to Azure SQL Database as a source](#CHAP_Source.AzureSQL.Connecting)

## Privileges for Azure SQL Database as a source
<a name="CHAP_Source.AzureSQL.Permissions"></a>

The privileges required for Azure SQL Database as a source are as follows: 
+ VIEW DEFINITION 
+ VIEW DATABASE STATE 

Repeat the grant for each database whose schema you are converting. 
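
For example, you might grant these privileges to a database user with statements like the following sketch; `user_name` is a placeholder for your database user. Run the statements in each database that you plan to convert.

```
GRANT VIEW DEFINITION TO user_name;
GRANT VIEW DATABASE STATE TO user_name;
```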

The privileges required for target MySQL and PostgreSQL databases are described in the following sections.
+ [Privileges for MySQL as a target database](CHAP_Source.SQLServer.ToMySQL.md#CHAP_Source.SQLServer.ToMySQL.ConfigureTarget) 
+ [Privileges for PostgreSQL as a target database](CHAP_Source.SQLServer.ToPostgreSQL.md#CHAP_Source.SQLServer.ToPostgreSQL.ConfigurePostgreSQL) 

## Connecting to Azure SQL Database as a source
<a name="CHAP_Source.AzureSQL.Connecting"></a>

Use the following procedure to connect to your Azure SQL Database source database with the AWS Schema Conversion Tool. 

**To connect to an Azure SQL Database source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Azure SQL Database**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Azure SQL Database source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.AzureSQL.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

# Connecting to IBM DB2 for z/OS Databases with the AWS Schema Conversion Tool
<a name="CHAP_Source.DB2zOS"></a>

You can use AWS SCT to convert schemas, code objects, and application code from IBM Db2 for z/OS to the following targets.
+ Amazon RDS for MySQL
+ Amazon Aurora MySQL-Compatible Edition
+ Amazon RDS for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition

## Prerequisites for Db2 for z/OS as a source database
<a name="CHAP_Source.DB2zOS.Prerequisites"></a>

The IBM Db2 for z/OS version 12 database at function level 100 doesn't support most new capabilities of IBM Db2 for z/OS version 12. It provides support for fallback to Db2 version 11 and data sharing with Db2 version 11. To avoid converting unsupported Db2 version 11 features, we recommend that you use an IBM Db2 for z/OS database at function level 500 or higher as a source for AWS SCT.

You can use the following code example to check the version of your source IBM Db2 for z/OS database.

```
SELECT GETVARIABLE('SYSIBM.VERSION') as version FROM SYSIBM.SYSDUMMY1;
```

Make sure that this code returns version `DSN12015` or higher.

You can use the following code example to check the value of the `APPLICATION COMPATIBILITY` special register in your source IBM Db2 for z/OS database.

```
SELECT CURRENT APPLICATION COMPATIBILITY as version FROM SYSIBM.SYSDUMMY1;
```

Make sure that this code returns version `V12R1M500` or higher.

## Privileges for Db2 for z/OS as a source database
<a name="CHAP_Source.DB2zOS.Permissions"></a>

The privileges needed to connect to a Db2 for z/OS database and read system catalogs and tables are as follows:
+ SELECT ON SYSIBM.LOCATIONS
+ SELECT ON SYSIBM.SYSCHECKS
+ SELECT ON SYSIBM.SYSCOLUMNS
+ SELECT ON SYSIBM.SYSDATABASE
+ SELECT ON SYSIBM.SYSDATATYPES
+ SELECT ON SYSIBM.SYSDUMMY1
+ SELECT ON SYSIBM.SYSFOREIGNKEYS
+ SELECT ON SYSIBM.SYSINDEXES
+ SELECT ON SYSIBM.SYSKEYCOLUSE
+ SELECT ON SYSIBM.SYSKEYS
+ SELECT ON SYSIBM.SYSKEYTARGETS
+ SELECT ON SYSIBM.SYSJAROBJECTS
+ SELECT ON SYSIBM.SYSPACKAGE
+ SELECT ON SYSIBM.SYSPARMS
+ SELECT ON SYSIBM.SYSRELS
+ SELECT ON SYSIBM.SYSROUTINES
+ SELECT ON SYSIBM.SYSSEQUENCES
+ SELECT ON SYSIBM.SYSSEQUENCESDEP
+ SELECT ON SYSIBM.SYSSYNONYMS
+ SELECT ON SYSIBM.SYSTABCONST
+ SELECT ON SYSIBM.SYSTABLES
+ SELECT ON SYSIBM.SYSTABLESPACE
+ SELECT ON SYSIBM.SYSTRIGGERS
+ SELECT ON SYSIBM.SYSVARIABLES
+ SELECT ON SYSIBM.SYSVIEWS
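
For example, you might grant these privileges with statements like the following sketch. Replace `user_name` with the name of your database user, and repeat the statement for each catalog table in the preceding list.

```
GRANT SELECT ON SYSIBM.SYSTABLES TO user_name;
GRANT SELECT ON SYSIBM.SYSCOLUMNS TO user_name;
-- Repeat for the remaining SYSIBM catalog tables in the list
```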

To convert Db2 for z/OS tables to PostgreSQL partitioned tables, gather statistics on tablespaces and tables in your database using the `RUNSTATS` utility as shown following.

```
LISTDEF YOURLIST INCLUDE TABLESPACES DATABASE YOURDB 
RUNSTATS TABLESPACE
LIST YOURLIST
TABLE (ALL) INDEX (ALL KEYCARD)
UPDATE ALL
REPORT YES
SHRLEVEL REFERENCE
```

In the preceding example, replace the `YOURDB` placeholder with the name of the source database.

## Connecting to Db2 for z/OS as a source
<a name="CHAP_Source.DB2zOS.Connecting"></a>

Use the following procedure to connect to your Db2 for z/OS source database with AWS SCT.

**To connect to an IBM Db2 for z/OS source database**

1. In the AWS Schema Conversion Tool, choose **Add source**.

1. Choose **Db2 for z/OS**, then choose **Next**.

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the IBM Db2 for z/OS source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.DB2zOS.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database.

1. Choose **Connect** to connect to your source database.

## Privileges for MySQL as a target database
<a name="CHAP_Source.DB2zOS.ConfigureMySQL"></a>

The privileges required for MySQL as a target are as follows:
+ CREATE ON \*.\*
+ ALTER ON \*.\*
+ DROP ON \*.\*
+ INDEX ON \*.\*
+ REFERENCES ON \*.\*
+ SELECT ON \*.\*
+ CREATE VIEW ON \*.\*
+ SHOW VIEW ON \*.\*
+ TRIGGER ON \*.\*
+ CREATE ROUTINE ON \*.\*
+ ALTER ROUTINE ON \*.\*
+ EXECUTE ON \*.\*
+ SELECT ON mysql.proc
+ INSERT, UPDATE ON AWS\_DB2ZOS\_EXT.\*
+ INSERT, UPDATE, DELETE ON AWS\_DB2ZOS\_EXT\_DATA.\*
+ CREATE TEMPORARY TABLES ON AWS\_DB2ZOS\_EXT\_DATA.\*

You can use the following code example to create a database user and grant the privileges.

```
CREATE USER 'user_name' IDENTIFIED BY 'your_password';
GRANT CREATE ON *.* TO 'user_name';
GRANT ALTER ON *.* TO 'user_name';
GRANT DROP ON *.* TO 'user_name';
GRANT INDEX ON *.* TO 'user_name';
GRANT REFERENCES ON *.* TO 'user_name';
GRANT SELECT ON *.* TO 'user_name';
GRANT CREATE VIEW ON *.* TO 'user_name';
GRANT SHOW VIEW ON *.* TO 'user_name';
GRANT TRIGGER ON *.* TO 'user_name';
GRANT CREATE ROUTINE ON *.* TO 'user_name';
GRANT ALTER ROUTINE ON *.* TO 'user_name';
GRANT EXECUTE ON *.* TO 'user_name';
GRANT SELECT ON mysql.proc TO 'user_name';
GRANT INSERT, UPDATE ON AWS_DB2ZOS_EXT.* TO 'user_name';
GRANT INSERT, UPDATE, DELETE ON AWS_DB2ZOS_EXT_DATA.* TO 'user_name';
GRANT CREATE TEMPORARY TABLES ON AWS_DB2ZOS_EXT_DATA.* TO 'user_name';
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *your\_password* with a secure password.

To use Amazon RDS for MySQL as a target, set the `log_bin_trust_function_creators` parameter to true, and the `character_set_server` to `latin1`. To configure these parameters, create a new DB parameter group or modify an existing DB parameter group.

To use Aurora MySQL as a target, set the `log_bin_trust_function_creators` parameter to true, and the `character_set_server` to `latin1`. Also, set the `lower_case_table_names` parameter to true. To configure these parameters, create a new DB parameter group or modify an existing DB parameter group.

## Privileges for PostgreSQL as a target database
<a name="CHAP_Source.DB2zOS.ConfigurePostgreSQL"></a>

To use PostgreSQL as a target, AWS SCT requires the `CREATE ON DATABASE` privilege. Make sure that you grant this privilege for each target PostgreSQL database.

To use Amazon RDS for PostgreSQL as a target, AWS SCT requires the `rds_superuser` privilege.

To use the converted public synonyms, change the database default search path to `"$user", public_synonyms, public`.

You can use the following code example to create a database user and grant the privileges.

```
CREATE ROLE user_name LOGIN PASSWORD 'your_password';
GRANT CREATE ON DATABASE db_name TO user_name;
GRANT rds_superuser TO user_name;
ALTER DATABASE db_name SET SEARCH_PATH = "$user", public_synonyms, public;
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *db\_name* with the name of your target database. Finally, replace *your\_password* with a secure password.

In PostgreSQL, only the schema owner or a `superuser` can drop a schema. The owner can drop a schema and all objects that this schema includes even if the owner of the schema doesn't own some of its objects.

When you use different users to convert and apply different schemas to your target database, you can get an error message when AWS SCT can't drop a schema. To avoid this error message, use the `superuser` role. 

## Db2 for z/OS to PostgreSQL conversion settings
<a name="CHAP_Source.DB2zOS.PostgreSQLConversionSettings"></a>

To edit Db2 for z/OS to PostgreSQL conversion settings, choose **Settings**, and then choose **Conversion settings**. From the upper list, choose **Db2 for z/OS**, and then choose **Db2 for z/OS – PostgreSQL** or **Db2 for z/OS – Amazon Aurora (PostgreSQL compatible)**. AWS SCT displays all available settings for IBM Db2 for z/OS to PostgreSQL conversion.

Db2 for z/OS to PostgreSQL conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To generate unique names for constraints in the target database.

  In PostgreSQL, all constraint names that you use must be unique. AWS SCT can generate unique names for constraints in the converted code by adding a prefix with the table name to the name of your constraint. To make sure that AWS SCT generates unique names for your constraints, select **Generate unique names for constraints**.
+ To keep the formatting of column names, expressions, and clauses in DML statements in the converted code.

  AWS SCT can keep the layout of column names, expressions, and clauses in DML statements in a position and order similar to the source code. To do so, select **Yes** for **Keep the formatting of column names, expressions, and clauses in DML statements**.
+ To exclude table partitions from the conversion scope.

  AWS SCT can skip all partitions of a source table during the conversion. To do so, select **Exclude table partitions from the conversion scope**.
+ To use automatic partitioning for tables that are partitioned by growth.

  For data migration, AWS SCT can automatically partition all tables that are larger than the specified size. To use this option, select **Enforce partition of tables larger than**, and enter the table size in gigabytes. Next, enter the number of partitions. AWS SCT considers the direct access storage device (DASD) size of your source database when you turn on this option.

  AWS SCT can determine the number of partitions automatically. To do so, select **Increase the number of partitions proportionally**, and enter the maximum number of partitions.
+ To return dynamic result sets as an array of values of the refcursor data type.

  AWS SCT can convert source procedures that return dynamic result sets into procedures that have an array of open refcursors as an additional output parameter. To do so, select **Use an array of refcursors to return all dynamic result sets**.
+ To specify the standard to use for the conversion of date and time values to string representations.

  AWS SCT can convert date and time values to string representations using one of the supported industry formats. To do so, select **Use string representations of date values** or **Use string representations of time values**. Next, choose one of the following standards.
  + International Standards Organization (ISO)
  + IBM European Standard (EUR)
  + IBM USA Standard (USA)
  + Japanese Industrial Standard Christian Era (JIS)

# Connecting to IBM DB2 for Linux, UNIX, and Windows Databases with the AWS Schema Conversion Tool
<a name="CHAP_Source.DB2LUW"></a>

You can use AWS SCT to convert schemas, code objects in the SQL language, and application code from IBM Db2 for Linux, Unix, and Windows (Db2 LUW) to the following targets.
+ Amazon RDS for MySQL
+ Amazon Aurora MySQL-Compatible Edition
+ Amazon RDS for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition
+ Amazon RDS for MariaDB

AWS SCT supports Db2 LUW versions 9.1, 9.5, 9.7, 10.1, 10.5, 11.1, and 11.5 as a source.

## Privileges for Db2 LUW as a source
<a name="CHAP_Source.DB2LUW.Permissions"></a>

The privileges needed to connect to a Db2 LUW database, check available privileges, and read schema metadata for a source are as follows: 
+ Privilege needed to establish a connection:
  + CONNECT ON DATABASE
+ Privilege needed to run SQL statements:
  + EXECUTE ON PACKAGE NULLID.SYSSH200
+ Privileges needed to get instance-level information:
  + EXECUTE ON FUNCTION SYSPROC.ENV\_GET\_INST\_INFO
  + SELECT ON SYSIBMADM.ENV\_INST\_INFO
  + SELECT ON SYSIBMADM.ENV\_SYS\_INFO
+ Privileges needed to check privileges granted through roles, groups, and authorities:
  + EXECUTE ON FUNCTION SYSPROC.AUTH\_LIST\_AUTHORITIES\_FOR\_AUTHID
  + EXECUTE ON FUNCTION SYSPROC.AUTH\_LIST\_GROUPS\_FOR\_AUTHID
  + EXECUTE ON FUNCTION SYSPROC.AUTH\_LIST\_ROLES\_FOR\_AUTHID
  + SELECT ON SYSIBMADM.PRIVILEGES
+ Privileges needed on system catalogs and tables:
  + SELECT ON SYSCAT.ATTRIBUTES
  + SELECT ON SYSCAT.CHECKS
  + SELECT ON SYSCAT.COLIDENTATTRIBUTES
  + SELECT ON SYSCAT.COLUMNS
  + SELECT ON SYSCAT.DATAPARTITIONEXPRESSION
  + SELECT ON SYSCAT.DATAPARTITIONS
  + SELECT ON SYSCAT.DATATYPEDEP
  + SELECT ON SYSCAT.DATATYPES
  + SELECT ON SYSCAT.HIERARCHIES
  + SELECT ON SYSCAT.INDEXCOLUSE
  + SELECT ON SYSCAT.INDEXES
  + SELECT ON SYSCAT.INDEXPARTITIONS
  + SELECT ON SYSCAT.KEYCOLUSE
  + SELECT ON SYSCAT.MODULEOBJECTS
  + SELECT ON SYSCAT.MODULES
  + SELECT ON SYSCAT.NICKNAMES
  + SELECT ON SYSCAT.PERIODS
  + SELECT ON SYSCAT.REFERENCES
  + SELECT ON SYSCAT.ROUTINEPARMS
  + SELECT ON SYSCAT.ROUTINES
  + SELECT ON SYSCAT.ROWFIELDS
  + SELECT ON SYSCAT.SCHEMATA
  + SELECT ON SYSCAT.SEQUENCES
  + SELECT ON SYSCAT.TABCONST
  + SELECT ON SYSCAT.TABLES
  + SELECT ON SYSCAT.TRIGGERS
  + SELECT ON SYSCAT.VARIABLEDEP
  + SELECT ON SYSCAT.VARIABLES
  + SELECT ON SYSCAT.VIEWS
  + SELECT ON SYSIBM.SYSDUMMY1
+ To run SQL statements, the user account needs the privilege to use at least one of the workloads enabled in the database. If no workload is assigned to the user, make sure that the default user workload is accessible to the user:
  + USAGE ON WORKLOAD SYSDEFAULTUSERWORKLOAD
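
For example, assuming the default user workload is used, a sketch of the grant might look like the following; replace `user_name` with your database user.

```
GRANT USAGE ON WORKLOAD SYSDEFAULTUSERWORKLOAD TO USER user_name;
```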

To run queries, create system temporary tablespaces with page sizes of 8K, 16K, and 32K if they don't already exist. To create the temporary tablespaces, run the following scripts.

```
CREATE BUFFERPOOL BP8K
  IMMEDIATE
  ALL DBPARTITIONNUMS
  SIZE AUTOMATIC
  NUMBLOCKPAGES 0
  PAGESIZE 8K;
  
CREATE SYSTEM TEMPORARY TABLESPACE TS_SYS_TEMP_8K 
  PAGESIZE 8192 
  BUFFERPOOL BP8K;
  
CREATE BUFFERPOOL BP16K
  IMMEDIATE
  ALL DBPARTITIONNUMS
  SIZE AUTOMATIC
  NUMBLOCKPAGES 0
  PAGESIZE 16K;
  
CREATE SYSTEM TEMPORARY TABLESPACE TS_SYS_TEMP_BP16K 
  PAGESIZE 16384 
  BUFFERPOOL BP16K;  
  
CREATE BUFFERPOOL BP32K
  IMMEDIATE
  ALL DBPARTITIONNUMS
  SIZE AUTOMATIC
  NUMBLOCKPAGES 0
  PAGESIZE 32K;
  
CREATE SYSTEM TEMPORARY TABLESPACE TS_SYS_TEMP_BP32K 
  PAGESIZE 32768 
  BUFFERPOOL BP32K;
```

## Connecting to Db2 LUW as a source
<a name="CHAP_Source.DB2LUW.Connecting"></a>

Use the following procedure to connect to your Db2 LUW source database with the AWS Schema Conversion Tool. 

**To connect to a Db2 LUW source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Db2 LUW**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the IBM Db2 LUW source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.DB2LUW.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

# Migrating from IBM DB2 for Linux, UNIX, and Windows to Amazon Relational Database Service for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition
<a name="CHAP_Source.DB2LUW.ToPostgreSQL"></a>

When you migrate IBM Db2 LUW to PostgreSQL, AWS SCT can convert various trigger statements used with Db2 LUW. These trigger statements include the following:
+ **Trigger events** – INSERT, DELETE, and UPDATE trigger events specify that the triggered action runs whenever the event is applied to the subject table or subject view. You can specify any combination of the INSERT, DELETE, and UPDATE events, but you can specify each event only once. AWS SCT supports single and multiple trigger events. For events, PostgreSQL has practically the same functionality. 
+ **Event OF COLUMN** – You can specify a column name from a base table. The trigger is activated only by the update of a column that is identified in the column-name list. PostgreSQL has the same functionality.
+ **Statement triggers** – These specify that the triggered action is applied only once for the whole statement. You can’t specify this type of trigger granularity for a BEFORE trigger or an INSTEAD OF trigger. If specified, an UPDATE or DELETE trigger is activated even if no rows are affected. PostgreSQL also has this functionality, and the trigger declaration for statement triggers is identical in PostgreSQL and Db2 LUW.
+ **Referencing clauses** – These specify the correlation names for transition variables and the table names for transition tables. Correlation names identify a specific row in the set of rows affected by the triggering SQL operation. Table names identify the complete set of affected rows. Each row affected by a triggering SQL operation is available to the triggered action by qualifying columns with specified correlation-names. PostgreSQL doesn’t support this functionality, and only uses a NEW or OLD correlation name.
+ **INSTEAD OF triggers** – AWS SCT supports these.
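
To illustrate the last two points with a sketch (the table and column names are assumptions, not from a real migration): a Db2 LUW trigger declared with `REFERENCING OLD AS o NEW AS n` refers to transition variables such as `o.price` and `n.price`. In the converted PostgreSQL trigger, those references map to the fixed `OLD` and `NEW` records.

```
-- PostgreSQL row trigger; OLD and NEW replace the Db2 correlation names
CREATE FUNCTION log_price_change() RETURNS trigger AS $$
BEGIN
    IF NEW.price <> OLD.price THEN
        INSERT INTO price_log (item_id, old_price, new_price)
        VALUES (OLD.item_id, OLD.price, NEW.price);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- The OF COLUMN event maps to UPDATE OF column_name in PostgreSQL
CREATE TRIGGER trg_price_change
    AFTER UPDATE OF price ON items
    FOR EACH ROW EXECUTE PROCEDURE log_price_change();
```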

## Converting Db2 LUW partitioned tables to PostgreSQL version 10 partitioned tables
<a name="CHAP_Source.DB2LUW.ToPostgreSQL.PartitionedTables"></a>

AWS SCT can convert Db2 LUW tables to partitioned tables in PostgreSQL 10. There are several restrictions when converting a Db2 LUW partitioned table to PostgreSQL:
+ You can create a partitioned table with a nullable column in Db2 LUW, and you can specify a partition to store NULL values. However, PostgreSQL doesn’t support NULL values for RANGE partitioning.
+ Db2 LUW can use an INCLUSIVE or EXCLUSIVE clause to set range boundary values. PostgreSQL only supports INCLUSIVE for a starting boundary and EXCLUSIVE for an ending boundary. The converted partition name is in the format <original\_table\_name>\_<original\_partition\_name>.
+ You can create primary or unique keys for partitioned tables in Db2 LUW. PostgreSQL requires you to create a primary or unique key for each partition directly. Primary or unique key constraints must be removed from the parent table. The converted key name is in the format <original\_key\_name>\_<original\_partition\_name>.
+ You can create a foreign key constraint from and to a partitioned table in Db2 LUW. However, PostgreSQL doesn’t support foreign key references to partitioned tables. PostgreSQL also doesn’t support foreign key references from a partitioned table to another table.
+ You can create an index on a partitioned table in Db2 LUW. However, PostgreSQL requires you to create an index for each partition directly. Indexes must be removed from the parent table. The converted index name is in the format <original\_index\_name>\_<original\_partition\_name>.
+ You must define row triggers on individual partitions, not on the partitioned table. Triggers must be removed from the parent table. The converted trigger name is in the format <original\_trigger\_name>\_<original\_partition\_name>.
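
The following PostgreSQL 10 sketch illustrates several of these restrictions; the table, partition, and index names are illustrative.

```
CREATE TABLE sales (
    sale_date DATE NOT NULL,
    amount    NUMERIC
) PARTITION BY RANGE (sale_date);

-- The starting boundary is inclusive and the ending boundary is exclusive
CREATE TABLE sales_p2023 PARTITION OF sales
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');

-- Indexes and key constraints are created on each partition directly,
-- not on the parent table
CREATE INDEX sales_idx_sales_p2023 ON sales_p2023 (sale_date);
```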

## Privileges for PostgreSQL as a target
<a name="CHAP_Source.DB2LUW.ToPostgreSQL.ConfigureTarget"></a>

To use PostgreSQL as a target, AWS SCT requires the `CREATE ON DATABASE` privilege. Make sure that you grant this privilege for each target PostgreSQL database.

To use the converted public synonyms, change the database default search path to `"$user", public_synonyms, public`.

You can use the following code example to create a database user and grant the privileges.

```
CREATE ROLE user_name LOGIN PASSWORD 'your_password';
GRANT CREATE ON DATABASE db_name TO user_name;
ALTER DATABASE db_name SET SEARCH_PATH = "$user", public_synonyms, public;
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *db\_name* with the name of your target database. Finally, replace *your\_password* with a secure password.

In PostgreSQL, only the schema owner or a `superuser` can drop a schema. The owner can drop a schema and all objects that this schema includes even if the owner of the schema doesn't own some of its objects.

When you use different users to convert and apply different schemas to your target database, you can get an error message when AWS SCT can't drop a schema. To avoid this error message, use the `superuser` role. 

# Migrating from IBM DB2 for Linux, UNIX, and Windows to Amazon RDS for MySQL or Amazon Aurora MySQL
<a name="CHAP_Source.DB2LUW.ToMySQL"></a>

When you convert an IBM Db2 LUW database to RDS for MySQL or Amazon Aurora MySQL, be aware of the following.

## Privileges for MySQL as a target
<a name="CHAP_Source.DB2LUW.ToMySQL.ConfigureTarget"></a>

The privileges required for MySQL as a target are as follows:
+ CREATE ON \*.\*
+ ALTER ON \*.\*
+ DROP ON \*.\*
+ INDEX ON \*.\*
+ REFERENCES ON \*.\*
+ SELECT ON \*.\*
+ CREATE VIEW ON \*.\*
+ SHOW VIEW ON \*.\*
+ TRIGGER ON \*.\*
+ CREATE ROUTINE ON \*.\*
+ ALTER ROUTINE ON \*.\*
+ EXECUTE ON \*.\*
+ SELECT ON mysql.proc
+ INSERT, UPDATE ON AWS\_DB2\_EXT.\*
+ INSERT, UPDATE, DELETE ON AWS\_DB2\_EXT\_DATA.\*
+ CREATE TEMPORARY TABLES ON AWS\_DB2\_EXT\_DATA.\*

You can use the following code example to create a database user and grant the privileges.

```
CREATE USER 'user_name' IDENTIFIED BY 'your_password';
GRANT CREATE ON *.* TO 'user_name';
GRANT ALTER ON *.* TO 'user_name';
GRANT DROP ON *.* TO 'user_name';
GRANT INDEX ON *.* TO 'user_name';
GRANT REFERENCES ON *.* TO 'user_name';
GRANT SELECT ON *.* TO 'user_name';
GRANT CREATE VIEW ON *.* TO 'user_name';
GRANT SHOW VIEW ON *.* TO 'user_name';
GRANT TRIGGER ON *.* TO 'user_name';
GRANT CREATE ROUTINE ON *.* TO 'user_name';
GRANT ALTER ROUTINE ON *.* TO 'user_name';
GRANT EXECUTE ON *.* TO 'user_name';
GRANT SELECT ON mysql.proc TO 'user_name';
GRANT INSERT, UPDATE ON AWS_DB2_EXT.* TO 'user_name';
GRANT INSERT, UPDATE, DELETE ON AWS_DB2_EXT_DATA.* TO 'user_name';
GRANT CREATE TEMPORARY TABLES ON AWS_DB2_EXT_DATA.* TO 'user_name';
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *your\_password* with a secure password.

To use Amazon RDS for MySQL or Aurora MySQL as a target, set the `lower_case_table_names` parameter to `1`. This value means that the MySQL server handles identifiers of object names such as tables, indexes, triggers, and databases as case insensitive. If you have turned on binary logging in your target instance, then set the `log_bin_trust_function_creators` parameter to `1`. In this case, you don't need to use the `DETERMINISTIC`, `READS SQL DATA`, or `NO SQL` characteristics to create stored functions. To configure these parameters, create a new DB parameter group or modify an existing DB parameter group.

# Using MySQL as a source for AWS SCT
<a name="CHAP_Source.MySQL"></a>

You can use AWS SCT to convert schemas, database code objects, and application code from MySQL to the following targets: 
+ Amazon RDS for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition
+ Amazon RDS for MySQL

For more information, see the following sections:

**Topics**
+ [Privileges for MySQL as a source database](#CHAP_Source.MySQL.Permissions)
+ [Connecting to MySQL as a source](#CHAP_Source.MySQL.Connecting)
+ [Privileges for PostgreSQL as a target database](#CHAP_Source.MySQL.ConfigurePostgreSQL)

## Privileges for MySQL as a source database
<a name="CHAP_Source.MySQL.Permissions"></a>

The privileges required for MySQL as a source are as follows: 
+ SELECT ON \*.\*
+ SHOW VIEW ON \*.\*
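
You can use the following code example to create a database user and grant these privileges. The user name and password here are placeholders; replace them with your own values.

```
CREATE USER 'user_name' IDENTIFIED BY 'your_password';
GRANT SELECT ON *.* TO 'user_name';
GRANT SHOW VIEW ON *.* TO 'user_name';
```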

## Connecting to MySQL as a source
<a name="CHAP_Source.MySQL.Connecting"></a>

Use the following procedure to connect to your MySQL source database with the AWS Schema Conversion Tool. 

**To connect to a MySQL source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **MySQL**, then choose **Next**.

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the MySQL source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.MySQL.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## Privileges for PostgreSQL as a target database
<a name="CHAP_Source.MySQL.ConfigurePostgreSQL"></a>

To use PostgreSQL as a target, AWS SCT requires the `CREATE ON DATABASE` privilege. Make sure that you grant this privilege for each target PostgreSQL database.

To use the converted public synonyms, change the database default search path to `"$user", public_synonyms, public`.

You can use the following code example to create a database user and grant the privileges.

```
CREATE ROLE user_name LOGIN PASSWORD 'your_password';
GRANT CREATE ON DATABASE db_name TO user_name;
ALTER DATABASE db_name SET SEARCH_PATH = "$user", public_synonyms, public;
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *db\_name* with the name of your target database. Finally, replace *your\_password* with a secure password.

In PostgreSQL, only the schema owner or a `superuser` can drop a schema. The owner can drop a schema and all objects that this schema includes even if the owner of the schema doesn't own some of its objects.

When you use different users to convert and apply different schemas to your target database, you can get an error message when AWS SCT can't drop a schema. To avoid this error message, use the `superuser` role. 
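
As an alternative to the `superuser` role, you can transfer ownership of a converted schema to the role that applies it. The schema and role names in this sketch are hypothetical.

```
-- Make one role the owner so it can drop the schema and all of its objects.
ALTER SCHEMA converted_schema OWNER TO migration_admin;

-- The owner can then remove the schema, including objects it doesn't own.
DROP SCHEMA converted_schema CASCADE;
```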

# Connecting to Oracle Databases with the AWS Schema Conversion Tool
<a name="CHAP_Source.Oracle"></a>

You can use AWS SCT to convert schemas, database code objects, and application code from Oracle Database to the following targets: 
+ Amazon RDS for MySQL
+ Amazon Aurora MySQL-Compatible Edition
+ Amazon RDS for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition
+ Amazon RDS for Oracle
+ Amazon RDS for MariaDB

When the source is an Oracle database, comments can be converted to the appropriate format in, for example, a PostgreSQL database. AWS SCT can convert comments on tables, views, and columns. Comments can include apostrophes; AWS SCT doubles the apostrophes when converting SQL statements, just as it does for string literals.

For more information, see the following.

**Topics**
+ [Privileges for Oracle as a source](#CHAP_Source.Oracle.Permissions)
+ [Connecting to Oracle as a source](#CHAP_Source.Oracle.Connecting)
+ [Migrating from Oracle to Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL with AWS Schema Conversion Tool](CHAP_Source.Oracle.ToPostgreSQL.md)
+ [Migrating from Oracle to Amazon RDS for MySQL or Amazon Aurora MySQL with the AWS Schema Conversion Tool](CHAP_Source.Oracle.ToMySQL.md)
+ [Migrating from Oracle Database to Amazon RDS for Oracle with AWS Schema Conversion Tool](CHAP_Source.Oracle.ToRDSOracle.md)

## Privileges for Oracle as a source
<a name="CHAP_Source.Oracle.Permissions"></a>

The privileges required for Oracle as a source are as follows: 
+ CONNECT 
+ SELECT\_CATALOG\_ROLE 
+ SELECT ANY DICTIONARY 
+ SELECT ON SYS.ARGUMENT$
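
You can use the following code example to create a database user and grant these privileges. The user name and password are placeholders; replace them with your own values.

```
CREATE USER min_privs IDENTIFIED BY your_password;
GRANT CONNECT TO min_privs;
GRANT SELECT_CATALOG_ROLE TO min_privs;
GRANT SELECT ANY DICTIONARY TO min_privs;
GRANT SELECT ON SYS.ARGUMENT$ TO min_privs;
```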

## Connecting to Oracle as a source
<a name="CHAP_Source.Oracle.Connecting"></a>

Use the following procedure to connect to your Oracle source database with the AWS Schema Conversion Tool. 

**To connect to an Oracle source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Oracle**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Oracle source database connection information manually, use the following instructions:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

# Migrating from Oracle to Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL with AWS Schema Conversion Tool
<a name="CHAP_Source.Oracle.ToPostgreSQL"></a>

When you convert an Oracle database to RDS for PostgreSQL or Amazon Aurora PostgreSQL, be aware of the following.

**Topics**
+ [Privileges for PostgreSQL as a target database](#CHAP_Source.Oracle.ToPostgreSQL.ConfigureTarget)
+ [Oracle to PostgreSQL conversion settings](#CHAP_Source.Oracle.ToPostgreSQL.ConversionSettings)
+ [Converting Oracle sequences](#CHAP_Source.Oracle.ToPostgreSQL.ConvertSequences)
+ [Converting Oracle ROWID](#CHAP_Source.Oracle.ToPostgreSQL.ConvertRowID)
+ [Converting Oracle dynamic SQL](#CHAP_Source.Oracle.ToPostgreSQL.DynamicSQL)
+ [Converting Oracle partitions](#CHAP_Source.Oracle.ToPostgreSQL.PG10Partitioning)

When converting Oracle system objects to PostgreSQL, AWS SCT performs conversions as shown in the following table.


| Oracle system object | Description | Converted PostgreSQL object | 
| --- | --- | --- | 
| V$VERSION  | Displays version numbers of core library components in the Oracle Database | aws\_oracle\_ext.v$version | 
| V$INSTANCE | A view that shows the state of the current instance | aws\_oracle\_ext.v$instance | 

You can use AWS SCT to convert Oracle SQL\*Plus files to psql, which is a terminal-based front-end to PostgreSQL. For more information, see [Converting application SQL using AWS SCT](CHAP_Converting.App.md).

## Privileges for PostgreSQL as a target database
<a name="CHAP_Source.Oracle.ToPostgreSQL.ConfigureTarget"></a>

To use PostgreSQL as a target, AWS SCT requires the `CREATE ON DATABASE` privilege. Make sure that you grant this privilege for each target PostgreSQL database.

To use the converted public synonyms, change the database default search path to `"$user", public_synonyms, public`.

You can use the following code example to create a database user and grant the privileges.

```
CREATE ROLE user_name LOGIN PASSWORD 'your_password';
GRANT CREATE ON DATABASE db_name TO user_name;
ALTER DATABASE db_name SET SEARCH_PATH = "$user", public_synonyms, public;
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *db\_name* with the name of your target database. Finally, replace *your\_password* with a secure password.

To use Amazon RDS for PostgreSQL as a target, AWS SCT requires the `rds_superuser` privilege.

In PostgreSQL, only the schema owner or a `superuser` can drop a schema. The owner can drop a schema and all objects that this schema includes even if the owner of the schema doesn't own some of its objects.

When you use different users to convert and apply different schemas to your target database, you can get an error message when AWS SCT can't drop a schema. To avoid this error message, use the `superuser` role. 

## Oracle to PostgreSQL conversion settings
<a name="CHAP_Source.Oracle.ToPostgreSQL.ConversionSettings"></a>

To edit Oracle to PostgreSQL conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Oracle**, and then choose **Oracle – PostgreSQL**. AWS SCT displays all available settings for Oracle to PostgreSQL conversion.

Oracle to PostgreSQL conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To allow AWS SCT to convert Oracle materialized views to tables or materialized views on PostgreSQL. For **Materialized view conversion as**, choose how to convert your source materialized views.
+ To work with your source Oracle code when it includes the `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` functions with parameters that PostgreSQL doesn't support. By default, AWS SCT emulates the usage of these parameters in the converted code.

  When your source Oracle code includes only parameters that PostgreSQL supports, you can use native PostgreSQL `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` functions. In this case, the converted code works faster. To include only these parameters, select the following values:
  + **Function TO\_CHAR() does not use Oracle specific formatting strings**
  + **Function TO\_DATE() does not use Oracle specific formatting strings**
  + **Function TO\_NUMBER() does not use Oracle specific formatting strings**
+ To address when your source Oracle database stores only integer values in the primary or foreign key columns of the `NUMBER` data type, AWS SCT can convert these columns to the `BIGINT` data type. This approach improves the performance of your converted code. To take this approach, select **Convert NUMBER primary / foreign key columns to BIGINT ones**. Make sure that your source doesn't include floating point values in these columns to avoid data loss.
+ To skip deactivated triggers and constraints in your source code. To do so, choose **Ignore disabled triggers and constraints**.
+ To use AWS SCT to convert string variables that are called as dynamic SQL. Your database code can change the values of these string variables. To make sure that AWS SCT always converts the latest value of this string variable, select **Convert the dynamic SQL code that is created in called routines**.
+ To address that PostgreSQL version 10 and earlier don't support procedures. If you or your users aren't familiar with using procedures in PostgreSQL, AWS SCT can convert Oracle procedures to PostgreSQL functions. To do so, select **Convert procedures to functions**.
+ To see additional information about action items that occur. To do so, you can add specific functions to the extension pack by selecting **Add on exception raise block for migration issues with the next severity levels**. Then choose severity levels to raise user-defined exceptions.
+ To work with a source Oracle database that might include constraints with the automatically generated names. If your source code uses these names, make sure that you select **Convert the system generated constraint names using the source original names**. If your source code uses these constraints but doesn't use their names, clear this option to increase the conversion speed.
+ To address whether your database and applications run in different time zones. By default, AWS SCT emulates time zones in the converted code. However, you don't need this emulation when your database and applications use the same time zone. In this case, select **Time zone on the client side matches the time zone on server**.
+ To address whether your source and target databases run in different time zones. If they do, the function that emulates the `SYSDATE` built-in Oracle function returns different values compared to the source function. To make sure that your source and target functions return the same values, choose **Set default time zone for SYSDATE emulation**.
+ To use the functions from the orafce extension in your converted code. To do so, for **Use orafce implementation**, select the functions to use. For more information about orafce, see [orafce](https://github.com/orafce/orafce) on GitHub.
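
As an illustration of the **Convert NUMBER primary / foreign key columns to BIGINT ones** option, the following sketch shows a hypothetical surrogate key column before and after conversion. The table and column names are examples only.

```
-- Oracle source: an integer-only surrogate key declared as NUMBER.
-- CREATE TABLE orders (order_id NUMBER PRIMARY KEY);

-- PostgreSQL target with the option selected: BIGINT instead of NUMERIC.
CREATE TABLE orders (
    order_id BIGINT PRIMARY KEY
);
```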

## Converting Oracle sequences
<a name="CHAP_Source.Oracle.ToPostgreSQL.ConvertSequences"></a>

AWS SCT converts sequences from Oracle to PostgreSQL. If you use sequences to maintain integrity constraints, make sure that the new values of a migrated sequence don't overlap the existing values.

**To populate converted sequences with the last value from the source database**

1. Open your AWS SCT project with Oracle as the source.

1. Choose **Settings**, and then choose **Conversion settings**. 

1. From the upper list, choose **Oracle**, and then choose **Oracle – PostgreSQL**. AWS SCT displays all available settings for Oracle to PostgreSQL conversion. 

1. Choose **Populate converted sequences with the last value generated on the source side**.

1. Choose **OK** to save the settings and close the **Conversion settings** dialog box. 
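
If you need to align a sequence manually after migration, you can set it to the current maximum key value on PostgreSQL. The sequence, table, and column names in this sketch are hypothetical.

```
-- Move the sequence past the highest migrated key so new values don't overlap.
SELECT setval('orders_order_id_seq',
    (SELECT COALESCE(MAX(order_id), 1) FROM orders));
```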

## Converting Oracle ROWID
<a name="CHAP_Source.Oracle.ToPostgreSQL.ConvertRowID"></a>

 In an Oracle database, the ROWID pseudocolumn contains the address of the table row. The ROWID pseudocolumn is unique to Oracle, so AWS SCT converts the ROWID pseudocolumn to a data column on PostgreSQL. By using this conversion, you can keep the ROWID information. 

When converting the ROWID pseudocolumn, AWS SCT can create a data column with the `bigint` data type. If no primary key exists, AWS SCT sets the ROWID column as the primary key. If a primary key exists, AWS SCT sets the ROWID column with a unique constraint.

If your source database code includes operations with ROWID, which you can't run using a numeric data type, AWS SCT can create a data column with the `character varying` data type.
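
The following sketch shows the kind of column that the numeric option can produce on PostgreSQL. The table definition is a hypothetical example, not output generated by AWS SCT.

```
CREATE TABLE employees (
    -- Numeric emulation of the Oracle ROWID pseudocolumn.
    row_id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    name   CHARACTER VARYING(30)
);
```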

**To create a data column for Oracle ROWID for a project**

1. Open your AWS SCT project with Oracle as the source.

1. Choose **Settings**, and then choose **Conversion settings**. 

1. From the upper list, choose **Oracle**, and then choose **Oracle – PostgreSQL**. AWS SCT displays all available settings for Oracle to PostgreSQL conversion. 

1. For **Generate row ID**, do one of the following: 
   + Choose **Generate as identity** to create a numeric data column.
   + Choose **Generate as character domain type** to create a character data column.

1. Choose **OK** to save the settings and close the **Conversion settings** dialog box. 

## Converting Oracle dynamic SQL
<a name="CHAP_Source.Oracle.ToPostgreSQL.DynamicSQL"></a>

Oracle provides two ways to implement dynamic SQL: using an EXECUTE IMMEDIATE statement or calling procedures in the DBMS\_SQL package. If your source Oracle database includes objects with dynamic SQL, use AWS SCT to convert Oracle dynamic SQL statements to PostgreSQL.

**To convert Oracle dynamic SQL to PostgreSQL**

1. Open your AWS SCT project with Oracle as the source.

1. Choose a database object that uses dynamic SQL in the Oracle source tree view.

1. Open the context (right-click) menu for the object, choose **Convert schema**, and agree to replace the objects if they exist. The following screenshot shows the converted procedure below the Oracle procedure with dynamic SQL.  
![\[Dynamic SQL conversion\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/images/dynamicsql1.png)

## Converting Oracle partitions
<a name="CHAP_Source.Oracle.ToPostgreSQL.PG10Partitioning"></a>

AWS SCT currently supports the following partitioning methods: 
+ Range
+ List
+ Multicolumn range
+ Hash
+ Composite (list-list, range-list, list-range, list-hash, range-hash, hash-hash)

# Migrating from Oracle to Amazon RDS for MySQL or Amazon Aurora MySQL with the AWS Schema Conversion Tool
<a name="CHAP_Source.Oracle.ToMySQL"></a>

To emulate Oracle database functions in your converted MySQL code, use the Oracle to MySQL extension pack in AWS SCT. For more information about extension packs, see [Using extension packs with AWS Schema Conversion Tool](CHAP_ExtensionPack.md). 

**Topics**
+ [Privileges for MySQL as a target database](#CHAP_Source.Oracle.ToMySQL.ConfigureTarget)
+ [Oracle to MySQL conversion settings](#CHAP_Source.Oracle.ToMySQL.ConversionSettings)
+ [Migration considerations](#CHAP_Source.Oracle.ToMySQL.MigrationConsiderations)
+ [Converting the WITH statement in Oracle to RDS for MySQL or Amazon Aurora MySQL](#CHAP_Source.Oracle.ToMySQL.With)

## Privileges for MySQL as a target database
<a name="CHAP_Source.Oracle.ToMySQL.ConfigureTarget"></a>

The privileges required for MySQL as a target are as follows:
+ CREATE ON \*.\*
+ ALTER ON \*.\*
+ DROP ON \*.\*
+ INDEX ON \*.\*
+ REFERENCES ON \*.\*
+ SELECT ON \*.\*
+ CREATE VIEW ON \*.\*
+ SHOW VIEW ON \*.\*
+ TRIGGER ON \*.\*
+ CREATE ROUTINE ON \*.\*
+ ALTER ROUTINE ON \*.\*
+ EXECUTE ON \*.\*
+ CREATE TEMPORARY TABLES ON \*.\*
+ AWS\_LAMBDA\_ACCESS
+ INSERT, UPDATE ON AWS\_ORACLE\_EXT.\*
+ INSERT, UPDATE, DELETE ON AWS\_ORACLE\_EXT\_DATA.\*

If you use a MySQL database version 5.7 or lower as a target, then grant the INVOKE LAMBDA \*.\* permission instead of AWS\_LAMBDA\_ACCESS. For MySQL databases version 8.0 and higher, grant the AWS\_LAMBDA\_ACCESS permission.

You can use the following code example to create a database user and grant the privileges.

```
CREATE USER 'user_name' IDENTIFIED BY 'your_password';
GRANT CREATE ON *.* TO 'user_name';
GRANT ALTER ON *.* TO 'user_name';
GRANT DROP ON *.* TO 'user_name';
GRANT INDEX ON *.* TO 'user_name';
GRANT REFERENCES ON *.* TO 'user_name';
GRANT SELECT ON *.* TO 'user_name';
GRANT CREATE VIEW ON *.* TO 'user_name';
GRANT SHOW VIEW ON *.* TO 'user_name';
GRANT TRIGGER ON *.* TO 'user_name';
GRANT CREATE ROUTINE ON *.* TO 'user_name';
GRANT ALTER ROUTINE ON *.* TO 'user_name';
GRANT EXECUTE ON *.* TO 'user_name';
GRANT CREATE TEMPORARY TABLES ON *.* TO 'user_name';
GRANT AWS_LAMBDA_ACCESS TO 'user_name';
GRANT INSERT, UPDATE ON AWS_ORACLE_EXT.* TO 'user_name';
GRANT INSERT, UPDATE, DELETE ON AWS_ORACLE_EXT_DATA.* TO 'user_name';
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *your\_password* with a secure password.

If you use a MySQL database version 5.7 or lower as a target, then use `GRANT INVOKE LAMBDA ON *.* TO 'user_name'` instead of `GRANT AWS_LAMBDA_ACCESS TO 'user_name'`.

To use Amazon RDS for MySQL or Aurora MySQL as a target, set the `lower_case_table_names` parameter to `1`. This value means that the MySQL server treats the names of objects such as tables, indexes, triggers, and databases as case insensitive. If you have turned on binary logging in your target instance, then set the `log_bin_trust_function_creators` parameter to `1`. In this case, you don't need to use the `DETERMINISTIC`, `READS SQL DATA`, or `NO SQL` characteristics to create stored functions. To configure these parameters, create a new DB parameter group or modify an existing DB parameter group.

## Oracle to MySQL conversion settings
<a name="CHAP_Source.Oracle.ToMySQL.ConversionSettings"></a>

To edit Oracle to MySQL conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Oracle**, and then choose **Oracle – MySQL**. AWS SCT displays all available settings for Oracle to MySQL conversion.

Oracle to MySQL conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To address that your source Oracle database can use the `ROWID` pseudocolumn but MySQL doesn't support similar functionality. AWS SCT can emulate the `ROWID` pseudocolumn in the converted code. To do so, choose **Generate as identity** for **Generate row ID?**.

  If your source Oracle code doesn't use the `ROWID` pseudocolumn, choose **Don't generate** for **Generate row ID?** In this case, the converted code works faster.
+ To work with your source Oracle code when it includes the `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` functions with parameters that MySQL doesn't support. By default, AWS SCT emulates the usage of these parameters in the converted code.

  When your source Oracle code includes only parameters that MySQL supports, you can use native MySQL `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` functions. In this case, the converted code works faster. To include only these parameters, select the following values:
  + **Function TO\_CHAR() does not use Oracle specific formatting strings**
  + **Function TO\_DATE() does not use Oracle specific formatting strings**
  + **Function TO\_NUMBER() does not use Oracle specific formatting strings**
+ To address whether your database and applications run in different time zones. By default, AWS SCT emulates time zones in the converted code. However, you don't need this emulation when your database and applications use the same time zone. In this case, select **Time zone on the client side matches the time zone on server**.

## Migration considerations
<a name="CHAP_Source.Oracle.ToMySQL.MigrationConsiderations"></a>

In Oracle, you can use a `GOTO` statement and a label to change the order in which statements run. Any PL/SQL statements that follow a `GOTO` statement are skipped, and processing continues at the label. You can use `GOTO` statements and labels anywhere within a procedure, batch, or statement block. You can also nest `GOTO` statements.

MySQL doesn't use `GOTO` statements. When AWS SCT converts code that contains a `GOTO` statement, it converts the statement to use a `BEGIN…END` or `LOOP…END LOOP` statement. 

You can find examples of how AWS SCT converts `GOTO` statements in the table following.


| Oracle statement | MySQL statement | 
| --- | --- | 
|  <pre>BEGIN<br />   ....<br />   statement1;<br />   ....<br />   GOTO label1;<br />   statement2;<br />   ....<br />   label1:<br />   Statement3;<br />   ....<br />END<br /></pre>  |  <pre>BEGIN<br /> label1:<br /> BEGIN<br />   ....<br />   statement1;<br />   ....<br />   LEAVE label1;<br />   statement2;<br />   ....<br /> END;<br />   Statement3;<br />   ....<br />END<br /></pre>  | 
|  <pre>BEGIN<br />   ....<br />   statement1;<br />   ....<br />   label1:<br />   statement2;<br />   ....<br />   GOTO label1;<br />   statement3;<br />   ....<br />   statement4;<br />   ....<br />END<br /></pre>  |  <pre>BEGIN<br />   ....<br />   statement1;<br />   ....<br />   label1:<br />   LOOP<br />    statement2;<br />    ....<br />    ITERATE label1;<br />    LEAVE label1;<br />   END LOOP; <br />    statement3;<br />    ....<br />    statement4;<br />    ....<br />END<br /></pre>  | 
|  <pre>BEGIN<br />   ....<br />   statement1;<br />   ....<br />   label1:<br />   statement2;<br />   ....<br />   statement3;<br />   ....<br />   statement4;<br />   ....<br />END<br /></pre>  |  <pre>BEGIN<br />   ....<br />   statement1;<br />   ....<br />   label1:<br />   BEGIN<br />    statement2;<br />    ....    <br />    statement3;<br />    ....<br />    statement4;<br />    ....    <br />   END; <br />END<br /></pre>  | 

## Converting the WITH statement in Oracle to RDS for MySQL or Amazon Aurora MySQL
<a name="CHAP_Source.Oracle.ToMySQL.With"></a>

You use the WITH clause (subquery factoring) in Oracle to assign a name (`query_name`) to a subquery block. You can then reference the subquery block in multiple places in the query by specifying `query_name`. If a subquery block doesn't contain links or parameters (local, procedure, function, package), then AWS SCT converts the clause to a view or a temporary table. 

The advantage of converting the clause to a temporary table is that repeated references to the subquery might be more efficient. The greater efficiency is because the data is easily retrieved from the temporary table rather than being requeried by each reference. You can emulate this by using additional views or a temporary table. The view name uses the format `<procedure_name>$<subselect_alias>`.

You can find examples in the table following. 


| Oracle statement | MySQL statement | 
| --- | --- | 
|  <pre>CREATE PROCEDURE <br /> TEST_ORA_PG.P_WITH_SELECT_VARIABLE_01<br />     (p_state IN NUMBER)<br />AS<br />  l_dept_id NUMBER := 1; <br />BEGIN<br />FOR cur IN  <br />           (WITH dept_empl(id, name, surname, <br />              lastname, state, dept_id)<br />              AS<br />                  (<br />                    SELECT id, name, surname,  <br />                     lastname, state, dept_id <br />                      FROM test_ora_pg.dept_employees<br />                     WHERE state = p_state AND <br />                       dept_id = l_dept_id)<br />            SELECT id,state   <br />              FROM dept_empl<br />            ORDER BY id)  LOOP<br />  NULL;<br />END LOOP;<br /></pre>  |  <pre>CREATE PROCEDURE test_ora_pg.P_WITH_SELECT_VARIABLE_01(IN par_P_STATE DOUBLE)<br />BEGIN<br />    DECLARE var_l_dept_id DOUBLE DEFAULT 1;<br />    DECLARE var$id VARCHAR (8000);<br />    DECLARE var$state VARCHAR (8000);<br />    DECLARE done INT DEFAULT FALSE;<br />    DECLARE cur CURSOR FOR SELECT<br />        ID, STATE<br />        FROM (SELECT<br />            ID, NAME, SURNAME, LASTNAME, STATE, DEPT_ID<br />            FROM TEST_ORA_PG.DEPT_EMPLOYEES<br />            WHERE STATE = par_p_state AND DEPT_ID = var_l_dept_id) AS dept_empl<br />        ORDER BY ID;<br />    DECLARE CONTINUE HANDLER FOR NOT FOUND<br />        SET done := TRUE;<br />    OPEN cur;<br /><br />    read_label:<br />    LOOP<br />        FETCH cur INTO var$id, var$state;<br /><br />        IF done THEN<br />            LEAVE read_label;<br />        END IF;<br /><br />        BEGIN<br />        END;<br />    END LOOP;<br />    CLOSE cur;<br />END;<br /></pre>  | 
|  <pre>CREATE PROCEDURE <br /> TEST_ORA_PG.P_WITH_SELECT_REGULAR_MULT_01<br />AS    <br />BEGIN<br /><br /> FOR cur IN  (<br />               WITH dept_empl AS<br />                   (<br />                        SELECT id, name, surname, <br />                         lastname, state, dept_id <br />                          FROM test_ora_pg.dept_employees<br />                         WHERE state = 1),<br />                    dept AS <br />                   (SELECT id deptid, parent_id, <br />                      name deptname<br />                      FROM test_ora_pg.department                <br />                   )<br />                SELECT dept_empl.*,dept.*          <br />                 FROM dept_empl, dept<br />                 WHERE dept_empl.dept_id = dept.deptid<br />              ) LOOP<br />              NULL;<br />            END LOOP;<br /></pre>  |  <pre>CREATE VIEW TEST_ORA_PG.`P_WITH_SELECT_REGULAR_MULT_01$dept_empl<br /> `(id, name, surname, lastname, state, dept_id)<br />AS<br />(SELECT id, name, surname, lastname, state, dept_id <br />   FROM test_ora_pg.dept_employees<br />  WHERE state = 1);<br />  <br />CREATE VIEW TEST_ORA_PG.`P_WITH_SELECT_REGULAR_MULT_01$dept<br /> `(deptid, parent_id,deptname)<br />AS<br />(SELECT id deptid, parent_id, name deptname<br />   FROM test_ora_pg.department);  <br /><br /><br />CREATE PROCEDURE test_ora_pg.P_WITH_SELECT_REGULAR_MULT_01()<br />BEGIN<br />    DECLARE var$ID DOUBLE;<br />    DECLARE var$NAME VARCHAR (30);<br />    DECLARE var$SURNAME VARCHAR (30);<br />    DECLARE var$LASTNAME VARCHAR (30);<br />    DECLARE var$STATE DOUBLE;<br />    DECLARE var$DEPT_ID DOUBLE;<br />    DECLARE var$deptid DOUBLE;<br />    DECLARE var$PARENT_ID DOUBLE;<br />    DECLARE var$deptname VARCHAR (200);<br />    DECLARE done INT DEFAULT FALSE;<br />    DECLARE cur CURSOR FOR SELECT<br />        dept_empl.*, dept.*<br />        FROM TEST_ORA_PG.`P_WITH_SELECT_REGULAR_MULT_01$dept_empl<br />          ` AS dept_empl,<br />             TEST_ORA_PG.`P_WITH_SELECT_REGULAR_MULT_01$dept<br />          ` AS dept<br />        WHERE dept_empl.DEPT_ID = dept.DEPTID;<br />    DECLARE CONTINUE HANDLER FOR NOT FOUND<br />        SET done := TRUE;<br />    OPEN cur;<br /><br />    read_label:<br />    LOOP<br />    FETCH cur INTO var$ID, var$NAME, var$SURNAME, <br />     var$LASTNAME, var$STATE, var$DEPT_ID, var$deptid, <br />     var$PARENT_ID, var$deptname;<br /><br />        IF done THEN<br />            LEAVE read_label;<br />        END IF;<br /><br />        BEGIN<br />        END;<br />    END LOOP;<br />    CLOSE cur;<br />END;<br /><br />call test_ora_pg.P_WITH_SELECT_REGULAR_MULT_01()<br /></pre>  | 
|  <pre>CREATE PROCEDURE <br />  TEST_ORA_PG.P_WITH_SELECT_VAR_CROSS_02(p_state IN NUMBER)<br />AS    <br />   l_dept_id NUMBER := 10;<br />BEGIN<br /> FOR cur IN  (<br />               WITH emp AS              <br />                    (SELECT id, name, surname, <br />                      lastname, state, dept_id <br />                       FROM test_ora_pg.dept_employees<br />                      WHERE dept_id > 10                 <br />                    ),<br />                    active_emp AS<br />                    (<br />                      SELECT id<br />                        FROM emp<br />                       WHERE emp.state = p_state <br />                    )<br />                    <br />                SELECT *          <br />                  FROM active_emp                 <br />              ) LOOP<br />         NULL;<br />  END LOOP;<br />  <br />END;<br /></pre>  |  <pre>CREATE VIEW TEST_ORA_PG.`P_WITH_SELECT_VAR_CROSS_01$emp<br />    `(id, name, surname, lastname, state, dept_id)<br />AS<br />(SELECT<br />       id, name, surname, lastname, <br />       state, dept_id<br />  FROM TEST_ORA_PG.DEPT_EMPLOYEES<br />  WHERE DEPT_ID > 10);<br /><br /><br />CREATE PROCEDURE <br />   test_ora_pg.P_WITH_SELECT_VAR_CROSS_02(IN par_P_STATE DOUBLE)<br />BEGIN<br />    DECLARE var_l_dept_id DOUBLE DEFAULT 10;<br />    DECLARE var$ID DOUBLE;<br />    DECLARE done INT DEFAULT FALSE;<br />    DECLARE cur CURSOR FOR SELECT *<br />                             FROM (SELECT<br />                                      ID<br />                                     FROM <br />                             TEST_ORA_PG.<br />                              `P_WITH_SELECT_VAR_CROSS_01$emp` AS emp<br />                                   WHERE emp.STATE = par_p_state) <br />                                    AS active_emp;<br />    DECLARE CONTINUE HANDLER FOR NOT FOUND<br />        SET done := TRUE;<br />    OPEN cur;<br /><br />    read_label:<br />    LOOP<br />        FETCH cur INTO var$ID;<br /><br />        IF done THEN<br />            LEAVE read_label;<br />        END IF;<br /><br />        BEGIN<br />        END;<br />    END LOOP;<br />    CLOSE cur;<br />END;<br /></pre>  | 

# Migrating from Oracle Database to Amazon RDS for Oracle with AWS Schema Conversion Tool
<a name="CHAP_Source.Oracle.ToRDSOracle"></a>

Consider the following when migrating Oracle schemas and code to Amazon RDS for Oracle: 
+ AWS SCT can add directory objects to the object tree. *Directory objects* are logical structures that each represent a physical directory on the server's file system. You can use directory objects with packages such as DBMS\_LOB, UTL\_FILE, DBMS\_FILE\_TRANSFER, the DATAPUMP utility, and so on.
+ AWS SCT supports converting Oracle tablespaces to an Amazon RDS for Oracle DB instance. Oracle stores data logically in tablespaces and physically in data files associated with the corresponding tablespace. In Oracle, you can create a tablespace with data file names. Amazon RDS supports Oracle Managed Files (OMF) for data files, log files, and control files only. AWS SCT creates the needed data files during conversion.
+ AWS SCT can convert server-level roles and privileges. The Oracle database engine uses role-based security. A role is a collection of privileges that you can grant to or revoke from a user. A predefined role in Amazon RDS, called DBA, normally allows all administrative privileges on an Oracle database engine. The following privileges are not available for the DBA role on an Amazon RDS DB instance using the Oracle engine:
  + Alter database
  + Alter system
  + Create any directory
  + Grant any privilege
  + Grant any role
  + Create external job

  You can grant all other privileges to an Amazon RDS for Oracle user role, including advanced filtering and column privileges.
+ AWS SCT supports converting Oracle jobs into jobs that can run on Amazon RDS for Oracle. There are a few limitations to the conversion, including the following:
  + Executable jobs are not supported.
  + Schedule jobs that use the ANYDATA data type as an argument are not supported.
+ Oracle Real Application Clusters (RAC) One Node is an option to the Oracle Database Enterprise Edition that was introduced with Oracle Database 11g Release 2. Amazon RDS for Oracle doesn’t support the RAC feature. For high availability, use Amazon RDS Multi-AZ. 

  In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica. This functionality provides data redundancy, eliminates I/O freezes, and minimizes latency spikes during system backups.
+ Oracle Spatial provides a SQL schema and functions that facilitate the storage, retrieval, update, and query of collections of spatial data in an Oracle database. Oracle Locator provides capabilities that are typically required to support internet and wireless service-based applications and partner-based GIS solutions. Oracle Locator is a limited subset of Oracle Spatial.

  To use Oracle Spatial and Oracle Locator features, add the SPATIAL option or LOCATOR option (mutually exclusive) to the option group of your DB instance.

  There are some prerequisites to using Oracle Spatial and Oracle Locator on an Amazon RDS for Oracle DB instance:
  + The instance should use Oracle Enterprise Edition version 12.1.0.2.v6 or higher, or 11.2.0.4.v10 or higher.
  + The instance should be inside a virtual private cloud (VPC).
  + The instance should use a DB instance class that supports the Oracle feature. For example, Oracle Spatial is not supported for the db.m1.small, db.t1.micro, db.t2.micro, or db.t2.small DB instance classes. For more information, see [DB instance class support for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html#Oracle.Concepts.InstanceClasses).
  + The instance must have the Auto Minor Version Upgrade option enabled. Amazon RDS updates your DB instance to the latest Oracle PSU if there are security vulnerabilities with a CVSS score of 9\+ or other announced security vulnerabilities. For more information, see [Settings for Oracle DB instances](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ModifyInstance.Oracle.html#USER_ModifyInstance.Oracle.Settings).
  + If your DB instance is version 11.2.0.4.v10 or higher, you must install the XMLDB option. For more information, see [Oracle XML DB](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.XMLDB.html).
  + You should have an Oracle Spatial license from Oracle. For more information, see [Oracle Spatial and Graph](https://shop.oracle.com/apex/product?p1=OracleSpatialandGraph) in the Oracle documentation.
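  As a sketch, you might attach the SPATIAL option with the AWS CLI after meeting the prerequisites above. The option group name `oracle-spatial-og` and the engine version are assumptions; adjust them for your instance and engine.

  ```
  aws rds create-option-group --option-group-name oracle-spatial-og \
      --engine-name oracle-ee --major-engine-version 19 \
      --option-group-description "Option group with the SPATIAL option"
  aws rds add-option-to-option-group --option-group-name oracle-spatial-og \
      --options OptionName=SPATIAL --apply-immediately
  ```

  Then associate the option group with your DB instance. Use `OptionName=LOCATOR` instead if you need only Oracle Locator; the two options are mutually exclusive.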
+ Data Guard is included with Oracle Database Enterprise Edition. For high availability, use the Amazon RDS Multi-AZ feature. 

  In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica. This functionality provides data redundancy, eliminates I/O freezes, and minimizes latency spikes during system backups.
+ AWS SCT supports converting Oracle DBMS\_SCHEDULER objects when migrating to Amazon RDS for Oracle. The AWS SCT assessment report indicates if a schedule object can be converted. For more information on using schedule objects with Amazon RDS, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.System.html#Appendix.Oracle.CommonDBATasks.ModifyScheduler).
+ Oracle to Amazon RDS for Oracle conversions support database links. A database link is a schema object in one database that enables you to access objects on another database. The other database doesn’t need to be an Oracle database. However, to access non-Oracle databases you must use Oracle Heterogeneous Services.

  Once you create a database link, you can use the link in SQL statements to refer to tables, views, and PL/SQL objects in the other database. To use a database link, append `@dblink` to the table, view, or PL/SQL object name. You can query a table or view in the other database with the SELECT statement. For more information about using Oracle database links, see the [Oracle documentation](https://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts002.htm#ADMIN12083).

  For more information about using database links with Amazon RDS, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.CommonDBATasks.Database.html#Appendix.Oracle.CommonDBATasks.DBLinks).
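  As an illustration, the following hedged example creates a database link and queries a table through it. The link name `remote_db`, the connection details, and the table name are placeholders, not values from your environment.

  ```
  CREATE DATABASE LINK remote_db
    CONNECT TO remote_user IDENTIFIED BY remote_password
    USING '//remote-host:1521/REMOTEDB';

  -- Append @dblink to the object name to reach the other database.
  SELECT id, name
    FROM employees@remote_db;
  ```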
+ The AWS SCT assessment report provides server metrics for the conversion. These metrics about your Oracle instance include the following:
  + Computation and memory capacity of the target DB instance
  + Oracle features, such as Real Application Clusters, that Amazon RDS doesn't support
  + Disk read-write load
  + Average total disk throughput
  + Server information such as server name, OS, host name, and character set

## Privileges for RDS for Oracle as a target
<a name="CHAP_Source.Oracle.ToRDSOracle.ConfigureTarget"></a>

To migrate to Amazon RDS for Oracle, create a privileged database user. You can use the following code example.

```
CREATE USER user_name IDENTIFIED BY your_password;

-- System privileges
GRANT DROP ANY CUBE BUILD PROCESS TO user_name;
GRANT ALTER ANY CUBE TO user_name;
GRANT CREATE ANY CUBE DIMENSION TO user_name;
GRANT CREATE ANY ASSEMBLY TO user_name;
GRANT ALTER ANY RULE TO user_name;
GRANT SELECT ANY DICTIONARY TO user_name;
GRANT ALTER ANY DIMENSION TO user_name;
GRANT CREATE ANY DIMENSION TO user_name;
GRANT ALTER ANY TYPE TO user_name;
GRANT DROP ANY TRIGGER TO user_name;
GRANT CREATE ANY VIEW TO user_name;
GRANT ALTER ANY CUBE BUILD PROCESS TO user_name;
GRANT CREATE ANY CREDENTIAL TO user_name;
GRANT DROP ANY CUBE DIMENSION TO user_name;
GRANT DROP ANY ASSEMBLY TO user_name;
GRANT DROP ANY PROCEDURE TO user_name;
GRANT ALTER ANY PROCEDURE TO user_name;
GRANT ALTER ANY SQL TRANSLATION PROFILE TO user_name;
GRANT DROP ANY MEASURE FOLDER TO user_name;
GRANT CREATE ANY MEASURE FOLDER TO user_name;
GRANT DROP ANY CUBE TO user_name;
GRANT DROP ANY MINING MODEL TO user_name;
GRANT CREATE ANY MINING MODEL TO user_name;
GRANT DROP ANY EDITION TO user_name;
GRANT CREATE ANY EVALUATION CONTEXT TO user_name;
GRANT DROP ANY DIMENSION TO user_name;
GRANT ALTER ANY INDEXTYPE TO user_name;
GRANT DROP ANY TYPE TO user_name;
GRANT CREATE ANY PROCEDURE TO user_name;
GRANT CREATE ANY SQL TRANSLATION PROFILE TO user_name;
GRANT CREATE ANY CUBE TO user_name;
GRANT COMMENT ANY MINING MODEL TO user_name;
GRANT ALTER ANY MINING MODEL TO user_name;
GRANT DROP ANY SQL PROFILE TO user_name;
GRANT CREATE ANY JOB TO user_name;
GRANT DROP ANY EVALUATION CONTEXT TO user_name;
GRANT ALTER ANY EVALUATION CONTEXT TO user_name;
GRANT CREATE ANY INDEXTYPE TO user_name;
GRANT CREATE ANY OPERATOR TO user_name;
GRANT CREATE ANY TRIGGER TO user_name;
GRANT DROP ANY ROLE TO user_name;
GRANT DROP ANY SEQUENCE TO user_name;
GRANT DROP ANY CLUSTER TO user_name;
GRANT DROP ANY SQL TRANSLATION PROFILE TO user_name;
GRANT ALTER ANY ASSEMBLY TO user_name;
GRANT CREATE ANY RULE SET TO user_name;
GRANT ALTER ANY OUTLINE TO user_name;
GRANT UNDER ANY TYPE TO user_name;
GRANT CREATE ANY TYPE TO user_name;
GRANT DROP ANY MATERIALIZED VIEW TO user_name;
GRANT ALTER ANY ROLE TO user_name;
GRANT DROP ANY VIEW TO user_name;
GRANT ALTER ANY INDEX TO user_name;
GRANT COMMENT ANY TABLE TO user_name;
GRANT CREATE ANY TABLE TO user_name;
GRANT CREATE USER TO user_name;
GRANT DROP ANY RULE SET TO user_name;
GRANT CREATE ANY CONTEXT TO user_name;
GRANT DROP ANY INDEXTYPE TO user_name;
GRANT ALTER ANY OPERATOR TO user_name;
GRANT CREATE ANY MATERIALIZED VIEW TO user_name;
GRANT ALTER ANY SEQUENCE TO user_name;
GRANT DROP ANY SYNONYM TO user_name;
GRANT CREATE ANY SYNONYM TO user_name;
GRANT DROP USER TO user_name;
GRANT ALTER ANY MEASURE FOLDER TO user_name;
GRANT ALTER ANY EDITION TO user_name;
GRANT DROP ANY RULE TO user_name;
GRANT CREATE ANY RULE TO user_name;
GRANT ALTER ANY RULE SET TO user_name;
GRANT CREATE ANY OUTLINE TO user_name;
GRANT UNDER ANY TABLE TO user_name;
GRANT UNDER ANY VIEW TO user_name;
GRANT DROP ANY DIRECTORY TO user_name;
GRANT ALTER ANY CLUSTER TO user_name;
GRANT CREATE ANY CLUSTER TO user_name;
GRANT ALTER ANY TABLE TO user_name;
GRANT CREATE ANY CUBE BUILD PROCESS TO user_name;
GRANT ALTER ANY CUBE DIMENSION TO user_name;
GRANT CREATE ANY EDITION TO user_name;
GRANT CREATE ANY SQL PROFILE TO user_name;
GRANT ALTER ANY SQL PROFILE TO user_name;
GRANT DROP ANY OUTLINE TO user_name;
GRANT DROP ANY CONTEXT TO user_name;
GRANT DROP ANY OPERATOR TO user_name;
GRANT DROP ANY LIBRARY TO user_name;
GRANT ALTER ANY LIBRARY TO user_name;
GRANT CREATE ANY LIBRARY TO user_name;
GRANT ALTER ANY MATERIALIZED VIEW TO user_name;
GRANT ALTER ANY TRIGGER TO user_name;
GRANT CREATE ANY SEQUENCE TO user_name;
GRANT DROP ANY INDEX TO user_name;
GRANT CREATE ANY INDEX TO user_name;
GRANT DROP ANY TABLE TO user_name;
GRANT SELECT_CATALOG_ROLE TO user_name;
GRANT SELECT ANY SEQUENCE TO user_name;

-- Database Links
GRANT CREATE DATABASE LINK TO user_name;
GRANT CREATE PUBLIC DATABASE LINK TO user_name;
GRANT DROP PUBLIC DATABASE LINK TO user_name;


-- Server Level Objects (directory)
GRANT CREATE ANY DIRECTORY TO user_name;
GRANT DROP ANY DIRECTORY TO user_name;
-- (for RDS only)
GRANT EXECUTE ON RDSADMIN.RDSADMIN_UTIL TO user_name;

-- Server Level Objects (tablespace)
GRANT CREATE TABLESPACE TO user_name;
GRANT DROP TABLESPACE TO user_name;

-- Server Level Objects (user roles)
/* (grant source privileges with admin option or convert roles/privs as DBA) */

-- Queues
grant execute on DBMS_AQADM to user_name;
grant aq_administrator_role to user_name;

-- for Materialized View Logs creation
GRANT SELECT ANY TABLE TO user_name;

-- Roles
GRANT RESOURCE TO user_name;
GRANT CONNECT TO user_name;
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *your\_password* with a secure password.

## Limitations when converting Oracle to Amazon RDS for Oracle
<a name="CHAP_Source.Oracle.ToRDSOracle.Limitations"></a>

Consider the following limitations when migrating Oracle schemas and code to Amazon RDS for Oracle: 
+  A predefined role in Amazon RDS, called DBA, normally allows all administrative privileges on an Oracle database engine. The following privileges are not available for the DBA role on an Amazon RDS DB instance using the Oracle engine:
  + Alter database
  + Alter system
  + Create any directory
  + Grant any privilege
  + Grant any role
  + Create external job

  You can grant all other privileges to an Oracle RDS user role.
+ Amazon RDS for Oracle supports traditional auditing, fine-grained auditing using the DBMS\_FGA package, and Oracle Unified Auditing.
+ Amazon RDS for Oracle doesn’t support change data capture (CDC). To do CDC during and after a database migration, use AWS Database Migration Service.

# Connecting to PostgreSQL databases with the AWS Schema Conversion Tool
<a name="CHAP_Source.PostgreSQL"></a>

You can use AWS SCT to convert schemas, database code objects, and application code from PostgreSQL to the following targets: 
+ Amazon RDS for MySQL
+ Amazon Aurora MySQL-Compatible Edition
+ Amazon RDS for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition

For more information, see the following sections:

**Topics**
+ [Privileges for PostgreSQL as a source database](#CHAP_Source.PostgreSQL.Permissions)
+ [Connecting to PostgreSQL as a source](#CHAP_Source.PostgreSQL.Connecting)
+ [Privileges for MySQL as a target database](#CHAP_Source.PostgreSQL.ConfigureMySQL)

## Privileges for PostgreSQL as a source database
<a name="CHAP_Source.PostgreSQL.Permissions"></a>

The privileges required for PostgreSQL as a source are as follows: 
+ CONNECT ON DATABASE *<database\_name>* 
+ USAGE ON SCHEMA *<schema\_name>* 
+ SELECT ON ALL TABLES IN SCHEMA *<schema\_name>* 
+ SELECT ON ALL SEQUENCES IN SCHEMA *<schema\_name>* 
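For example, you might grant these privileges as follows. The names `source_db`, `source_schema`, and `sct_user` are placeholders; replace them with your database, schema, and user names.

```
GRANT CONNECT ON DATABASE source_db TO sct_user;
GRANT USAGE ON SCHEMA source_schema TO sct_user;
GRANT SELECT ON ALL TABLES IN SCHEMA source_schema TO sct_user;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA source_schema TO sct_user;
```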

## Connecting to PostgreSQL as a source
<a name="CHAP_Source.PostgreSQL.Connecting"></a>

Use the following procedure to connect to your PostgreSQL source database with the AWS Schema Conversion Tool. 

**To connect to a PostgreSQL source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **PostgreSQL**, then choose **Next**.

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the PostgreSQL source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.PostgreSQL.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## Privileges for MySQL as a target database
<a name="CHAP_Source.PostgreSQL.ConfigureMySQL"></a>

The privileges required for MySQL as a target when you migrate from PostgreSQL are as follows:
+ CREATE ON \*.\*
+ ALTER ON \*.\*
+ DROP ON \*.\*
+ INDEX ON \*.\*
+ REFERENCES ON \*.\*
+ SELECT ON \*.\*
+ CREATE VIEW ON \*.\*
+ SHOW VIEW ON \*.\*
+ TRIGGER ON \*.\*
+ CREATE ROUTINE ON \*.\*
+ ALTER ROUTINE ON \*.\*
+ EXECUTE ON \*.\*
+ INSERT, UPDATE ON AWS\_POSTGRESQL\_EXT.\*
+ INSERT, UPDATE, DELETE ON AWS\_POSTGRESQL\_EXT\_DATA.\*
+ CREATE TEMPORARY TABLES ON AWS\_POSTGRESQL\_EXT\_DATA.\*

You can use the following code example to create a database user and grant the privileges.

```
CREATE USER 'user_name' IDENTIFIED BY 'your_password';
GRANT CREATE ON *.* TO 'user_name';
GRANT ALTER ON *.* TO 'user_name';
GRANT DROP ON *.* TO 'user_name';
GRANT INDEX ON *.* TO 'user_name';
GRANT REFERENCES ON *.* TO 'user_name';
GRANT SELECT ON *.* TO 'user_name';
GRANT CREATE VIEW ON *.* TO 'user_name';
GRANT SHOW VIEW ON *.* TO 'user_name';
GRANT TRIGGER ON *.* TO 'user_name';
GRANT CREATE ROUTINE ON *.* TO 'user_name';
GRANT ALTER ROUTINE ON *.* TO 'user_name';
GRANT EXECUTE ON *.* TO 'user_name';
GRANT INSERT, UPDATE ON AWS_POSTGRESQL_EXT.* TO 'user_name';
GRANT INSERT, UPDATE, DELETE ON AWS_POSTGRESQL_EXT_DATA.* TO 'user_name';
GRANT CREATE TEMPORARY TABLES ON AWS_POSTGRESQL_EXT_DATA.* TO 'user_name';
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *your\_password* with a secure password.

To use Amazon RDS for MySQL or Aurora MySQL as a target, set the `lower_case_table_names` parameter to `1`. With this value, the MySQL server treats identifiers for object names such as tables, indexes, triggers, and databases as case insensitive. If you have turned on binary logging in your target instance, then set the `log_bin_trust_function_creators` parameter to `1`. In this case, you don't need to use the `DETERMINISTIC`, `READS SQL DATA`, or `NO SQL` characteristics to create stored functions. To configure these parameters, create a new DB parameter group or modify an existing DB parameter group.
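As a sketch, you can set both parameters with the AWS CLI. The parameter group name `sct-target-mysql` and the family `mysql8.0` are assumptions; note that `lower_case_table_names` is a static parameter, so it needs the pending-reboot apply method and takes effect only after the instance restarts.

```
aws rds create-db-parameter-group --db-parameter-group-name sct-target-mysql \
    --db-parameter-group-family mysql8.0 \
    --description "Parameter group for an AWS SCT migration target"
aws rds modify-db-parameter-group --db-parameter-group-name sct-target-mysql \
    --parameters "ParameterName=lower_case_table_names,ParameterValue=1,ApplyMethod=pending-reboot" \
                 "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"
```

Then associate the parameter group with your target DB instance and reboot it.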

# Connecting to SAP databases with the AWS Schema Conversion Tool
<a name="CHAP_Source.SAP"></a>

You can use AWS SCT to convert schemas, database code objects, and application code from SAP (Sybase) Adaptive Server Enterprise (ASE) to the following targets: 
+ Amazon RDS for MySQL
+ Amazon Aurora MySQL-Compatible Edition
+ Amazon RDS for MariaDB
+ Amazon RDS for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition

For more information, see the following sections:

**Topics**
+ [Privileges for SAP ASE as a source database](#CHAP_Source.SAP.Permissions)
+ [Connecting to SAP ASE (Sybase) as a source](#CHAP_Source.SAP.Connecting)
+ [Privileges for MySQL as a target database](#CHAP_Source.SAP.ConfigureMySQL)
+ [SAP ASE to MySQL conversion settings](#CHAP_Source.SAP.MySQLConversionSettings)
+ [Privileges for PostgreSQL as a target database](#CHAP_Source.SAP.ConfigurePostgreSQL)
+ [SAP ASE to PostgreSQL conversion settings](#CHAP_Source.SAP.PostgreSQLConversionSettings)

## Privileges for SAP ASE as a source database
<a name="CHAP_Source.SAP.Permissions"></a>

To use an SAP ASE database as a source, you create a database user and grant permissions. To do this, take the following steps.

**Create and configure a database user**

1. Connect to the source database.

1. Create a database user with the following commands. Provide a password for the new user.

   ```
   USE master
   CREATE LOGIN min_privs WITH PASSWORD <password>
   sp_adduser min_privs
   grant select on dbo.spt_values to min_privs
   grant select on asehostname to min_privs
   ```

1. For every database you are going to migrate, grant the following privileges.

   ```
   USE <database_name>
   sp_adduser min_privs
   grant select on dbo.sysusers to min_privs
   grant select on dbo.sysobjects to min_privs
   grant select on dbo.sysindexes to min_privs
   grant select on dbo.syscolumns to min_privs
   grant select on dbo.sysreferences to min_privs
   grant select on dbo.syscomments to min_privs
   grant select on dbo.syspartitions to min_privs
   grant select on dbo.syspartitionkeys to min_privs
   grant select on dbo.sysconstraints to min_privs
   grant select on dbo.systypes to min_privs
   grant select on dbo.sysqueryplans to min_privs
   ```

## Connecting to SAP ASE (Sybase) as a source
<a name="CHAP_Source.SAP.Connecting"></a>

Use the following procedure to connect to your SAP ASE source database with the AWS Schema Conversion Tool. 

**To connect to a SAP ASE source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **SAP ASE**, then choose **Next**.

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the SAP ASE source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.SAP.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## Privileges for MySQL as a target database
<a name="CHAP_Source.SAP.ConfigureMySQL"></a>

The privileges required for MySQL as a target are as follows:
+ CREATE ON \*.\*
+ ALTER ON \*.\*
+ DROP ON \*.\*
+ INDEX ON \*.\*
+ REFERENCES ON \*.\*
+ SELECT ON \*.\*
+ CREATE VIEW ON \*.\*
+ SHOW VIEW ON \*.\*
+ TRIGGER ON \*.\*
+ CREATE ROUTINE ON \*.\*
+ ALTER ROUTINE ON \*.\*
+ EXECUTE ON \*.\*
+ INSERT, UPDATE ON AWS\_SAPASE\_EXT.\*
+ CREATE TEMPORARY TABLES ON AWS\_SAPASE\_EXT.\*

You can use the following code example to create a database user and grant the privileges.

```
CREATE USER 'user_name' IDENTIFIED BY 'your_password';
GRANT CREATE ON *.* TO 'user_name';
GRANT ALTER ON *.* TO 'user_name';
GRANT DROP ON *.* TO 'user_name';
GRANT INDEX ON *.* TO 'user_name';
GRANT REFERENCES ON *.* TO 'user_name';
GRANT SELECT ON *.* TO 'user_name';
GRANT CREATE VIEW ON *.* TO 'user_name';
GRANT SHOW VIEW ON *.* TO 'user_name';
GRANT TRIGGER ON *.* TO 'user_name';
GRANT CREATE ROUTINE ON *.* TO 'user_name';
GRANT ALTER ROUTINE ON *.* TO 'user_name';
GRANT EXECUTE ON *.* TO 'user_name';
GRANT INSERT, UPDATE ON AWS_SAPASE_EXT.* TO 'user_name';
GRANT CREATE TEMPORARY TABLES ON AWS_SAPASE_EXT.* TO 'user_name';
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *your\_password* with a secure password.

To use Amazon RDS for MySQL or Aurora MySQL as a target, set the `lower_case_table_names` parameter to `1`. With this value, the MySQL server treats identifiers for object names such as tables, indexes, triggers, and databases as case insensitive. If you have turned on binary logging in your target instance, then set the `log_bin_trust_function_creators` parameter to `1`. In this case, you don't need to use the `DETERMINISTIC`, `READS SQL DATA`, or `NO SQL` characteristics to create stored functions. To configure these parameters, create a new DB parameter group or modify an existing DB parameter group.

## SAP ASE to MySQL conversion settings
<a name="CHAP_Source.SAP.MySQLConversionSettings"></a>

To edit SAP ASE to MySQL conversion settings, choose **Settings**, and then choose **Conversion settings**. From the upper list, choose **SAP ASE**, and then choose **SAP ASE – MySQL** or **SAP ASE – Amazon Aurora (MySQL compatible)**. AWS SCT displays all available settings for SAP ASE to MySQL conversion.

SAP ASE to MySQL conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To use the exact names of the source database objects in the converted code.

  By default, AWS SCT converts the names of database objects, variables, and parameters to lowercase. To keep the original case for these names, select **Treat source database object names as case sensitive**. Choose this option if you use case-sensitive object names in your source SAP ASE database server.

## Privileges for PostgreSQL as a target database
<a name="CHAP_Source.SAP.ConfigurePostgreSQL"></a>

To use PostgreSQL as a target, AWS SCT requires the `CREATE ON DATABASE` privilege. Make sure that you grant this privilege for each target PostgreSQL database.

To use the converted public synonyms, change the database default search path to `"$user", public_synonyms, public`.

You can use the following code example to create a database user and grant the privileges.

```
CREATE ROLE user_name LOGIN PASSWORD 'your_password';
GRANT CREATE ON DATABASE db_name TO user_name;
ALTER DATABASE db_name SET SEARCH_PATH = "$user", public_synonyms, public;
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *db\_name* with the name of your target database. Finally, replace *your\_password* with a secure password.

In PostgreSQL, only the schema owner or a `superuser` can drop a schema. The owner can drop a schema and all objects that this schema includes even if the owner of the schema doesn't own some of its objects.

When you use different users to convert and apply different schemas to your target database, you can get an error message when AWS SCT can't drop a schema. To avoid this error message, use the `superuser` role. 

## SAP ASE to PostgreSQL conversion settings
<a name="CHAP_Source.SAP.PostgreSQLConversionSettings"></a>

To edit SAP ASE to PostgreSQL conversion settings, choose **Settings**, and then choose **Conversion settings**. From the upper list, choose **SAP ASE**, and then choose **SAP ASE – PostgreSQL** or **SAP ASE – Amazon Aurora (PostgreSQL compatible)**. AWS SCT displays all available settings for SAP ASE to PostgreSQL conversion.

SAP ASE to PostgreSQL conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To define the template to use for the schema names in the converted code. For **Schema name generation template**, choose one of the following options:
  + **<source\_db>** – Uses the SAP ASE database name as a schema name in PostgreSQL.
  + **<source\_schema>** – Uses the SAP ASE schema name as a schema name in PostgreSQL.
  + **<source\_db>\_<schema>** – Uses a combination of the SAP ASE database and schema names as a schema name in PostgreSQL.
+ To use the exact names of the source database objects in the converted code.

  By default, AWS SCT converts the names of database objects, variables, and parameters to lowercase. To keep the original case for these names, select **Treat source database object names as case sensitive**. Choose this option if you use case-sensitive object names in your source SAP ASE database server.

  For case-sensitive operations, AWS SCT can avoid the conversion of database object names to lowercase. To do so, select **Avoid casting to lowercase for case sensitive operations**.
+ To allow the use of indexes with the same name in different tables in SAP ASE.

  In PostgreSQL, all index names that you use in the schema must be unique. To make sure that AWS SCT generates unique names for all your indexes, select **Generate unique names for indexes**.

# Connecting to Microsoft SQL Server with the AWS Schema Conversion Tool
<a name="CHAP_Source.SQLServer"></a>

You can use AWS SCT to convert schemas, database code objects, and application code from SQL Server to the following targets: 
+ Amazon RDS for MySQL
+ Amazon Aurora MySQL-Compatible Edition
+ Amazon RDS for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition
+ Amazon RDS for SQL Server
+ Amazon RDS for MariaDB

**Note**  
AWS SCT does not support using Amazon RDS for SQL Server as a source.

You can use AWS SCT to create an assessment report for the migration of schemas, database code objects, and application code from SQL Server to Babelfish for Aurora PostgreSQL, as described following.

**Topics**
+ [Privileges for Microsoft SQL Server as a source](#CHAP_Source.SQLServer.Permissions)
+ [Using Windows Authentication when using Microsoft SQL Server as a source](#CHAP_Source.SQLServer.Permissions.WinAuth)
+ [Connecting to SQL Server as a source](#CHAP_Source.SQLServer.Connecting)
+ [Converting SQL Server to MySQL](CHAP_Source.SQLServer.ToMySQL.md)
+ [Migrating from SQL Server to PostgreSQL with AWS Schema Conversion Tool](CHAP_Source.SQLServer.ToPostgreSQL.md)
+ [Migrating from SQL Server to Amazon RDS for SQL Server with AWS Schema Conversion Tool](CHAP_Source.SQLServer.ToRDSSQLServer.md)

## Privileges for Microsoft SQL Server as a source
<a name="CHAP_Source.SQLServer.Permissions"></a>

The privileges required for Microsoft SQL Server as a source are as follows: 
+ VIEW DEFINITION
+ VIEW DATABASE STATE

The `VIEW DEFINITION` privilege enables users that have public access to see object definitions. AWS SCT uses the `VIEW DATABASE STATE` privilege to check the features of the SQL Server Enterprise edition.

Repeat the grant for each database whose schema you are converting.

In addition, grant the following privileges on the `master` database:
+ VIEW SERVER STATE
+ VIEW ANY DEFINITION

AWS SCT uses the `VIEW SERVER STATE` privilege to collect server settings and configuration. Make sure that you grant the `VIEW ANY DEFINITION` privilege to view endpoints.
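For example, the grants might look like the following. The user name `sct_user` and the database name `your_database` are placeholders; repeat the database-level grants for each database whose schema you are converting.

```
USE your_database;
GRANT VIEW DEFINITION TO sct_user;
GRANT VIEW DATABASE STATE TO sct_user;

USE master;
GRANT VIEW SERVER STATE TO sct_user;
GRANT VIEW ANY DEFINITION TO sct_user;
```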

To read information about Microsoft Analysis Services, run the following command on the `master` database.

```
EXEC master..sp_addsrvrolemember @loginame = N'<user_name>', @rolename = N'sysadmin'
```

In the preceding example, replace the `<user_name>` placeholder with the name of the user to whom you granted the preceding privileges.

To read information about SQL Server Agent, add your user to the `SQLAgentUser` role. Run the following command on the `msdb` database.

```
EXEC sp_addrolemember <SQLAgentRole>, <user_name>;
```

In the preceding example, replace the `<SQLAgentRole>` placeholder with the name of the SQL Server Agent role. Then replace the `<user_name>` placeholder with the name of the user to whom you granted the preceding privileges. For more information, see [Adding a user to the SQLAgentUser role](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.CommonDBATasks.Agent.html#SQLServerAgent.AddUser) in the *Amazon RDS User Guide*.

To detect log shipping, grant the `SELECT on dbo.log_shipping_primary_databases` privilege on the `msdb` database.

To use the notification approach of the DDL replication, grant the `RECEIVE ON <schema_name>.<queue_name>` privilege on your source databases. In this example, replace the `<schema_name>` placeholder with the schema name of your database. Then, replace the `<queue_name>` placeholder with the name of a queue table.
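The two grants above can be sketched as follows; the user name, database name, schema name, and queue name are placeholders.

```
USE msdb;
GRANT SELECT ON dbo.log_shipping_primary_databases TO [<user_name>];

USE <your_database>;
GRANT RECEIVE ON <schema_name>.<queue_name> TO [<user_name>];
```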

## Using Windows Authentication when using Microsoft SQL Server as a source
<a name="CHAP_Source.SQLServer.Permissions.WinAuth"></a>

If your application runs on a Windows-based intranet, you might be able to use Windows Authentication for database access. Windows Authentication uses the current Windows identity established on the operating system thread to access the SQL Server database. You can then map the Windows identity to a SQL Server database and permissions. To connect to SQL Server using Windows Authentication, you must specify the Windows identity that your application is using. You must also grant the Windows identity access to the SQL Server database.

SQL Server has two modes of access: Windows Authentication mode and Mixed Mode. Windows Authentication mode enables Windows Authentication and disables SQL Server Authentication. Mixed Mode enables both Windows Authentication and SQL Server Authentication. Windows Authentication is always available and cannot be disabled. For more information about Windows Authentication, see the Microsoft Windows documentation. 

The following example creates a user in the `TEST_DB` database and grants the required privileges.

```
USE [TEST_DB]
CREATE USER [TestUser] FOR LOGIN [TestDomain\TestUser]
GRANT VIEW DEFINITION TO [TestUser]
GRANT VIEW DATABASE STATE TO [TestUser]
```

### Using Windows Authentication with a JDBC connection
<a name="CHAP_Source.SQLServer.Permissions.WinAuth.JDBC"></a>

The JDBC driver does not support Windows Authentication when the driver is used on non-Windows operating systems. Windows Authentication credentials, such as user name and password, are not automatically specified when connecting to SQL Server from non-Windows operating systems. In such cases, the applications must use SQL Server Authentication instead.

In the JDBC connection string, specify the `integratedSecurity` parameter to connect using Windows Authentication. The JDBC driver supports Integrated Windows Authentication on Windows operating systems through this connection string parameter.
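For example, a JDBC URL that uses Windows Authentication might look like the following; the server and database names are placeholders.

```
jdbc:sqlserver://<server_name>:1433;databaseName=<database_name>;integratedSecurity=true;
```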

**To use integrated authentication**

1. Install the JDBC driver.

1. Copy the `sqljdbc_auth.dll` file to a directory on the Windows system path on the computer where the JDBC driver is installed.

   The `sqljdbc_auth.dll` files are installed in the following location:

   <*installation directory*>\sqljdbc_<*version*>\<*language*>\auth\

When you try to establish a connection to a SQL Server database using Windows Authentication, you might get the following error: "This driver is not configured for integrated authentication." To solve this problem, perform the following actions:
+ Declare two variables that point to the installed path of your JDBC driver:

   `variable name: SQLJDBC_HOME; variable value: D:\lib\JDBC4.1\enu` (where your sqljdbc4.jar exists);

  `variable name: SQLJDBC_AUTH_HOME; variable value: D:\lib\JDBC4.1\enu\auth\x86` (if you are running a 32-bit OS) or `D:\lib\JDBC4.1\enu\auth\x64` (if you are running a 64-bit OS). This is where your `sqljdbc_auth.dll` is located. 
+ Copy `sqljdbc_auth.dll` to the folder where your JDK/JRE is running. You can copy it to the lib folder, the bin folder, and so on. For example, you might copy it to one of the following folders.

  ```
  [JDK_INSTALLED_PATH]\bin;
  [JDK_INSTALLED_PATH]\jre\bin;
  [JDK_INSTALLED_PATH]\jre\lib;
  [JDK_INSTALLED_PATH]\lib;
  ```
+ Ensure that your JDBC library folder contains only the SQLJDBC4.jar file. Remove any other sqljdbc\*.jar files from that folder (or copy them to another folder). If you are adding the driver as part of your program, ensure that you add only SQLJDBC4.jar as the driver to use.
+ Copy the `sqljdbc_auth.dll` file to the folder with your application.

**Note**  
If you are running a 32-bit Java Virtual Machine (JVM), use the `sqljdbc_auth.dll` file in the x86 folder, even if the operating system is the x64 version. If you are running a 64-bit JVM on an x64 processor, use the `sqljdbc_auth.dll` file in the x64 folder.

When you connect to a SQL Server database, you can choose either **Windows Authentication** or **SQL Server Authentication** for the **Authentication** option.

## Connecting to SQL Server as a source
<a name="CHAP_Source.SQLServer.Connecting"></a>

Use the following procedure to connect to your Microsoft SQL Server source database with the AWS Schema Conversion Tool. 

**To connect to a Microsoft SQL Server source database**

1. In the AWS Schema Conversion Tool, choose **Add source**.

1. Choose **Microsoft SQL Server**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Microsoft SQL Server source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.SQLServer.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

# Converting SQL Server to MySQL
<a name="CHAP_Source.SQLServer.ToMySQL"></a>

To emulate Microsoft SQL Server database functions in your converted MySQL code, use the SQL Server to MySQL extension pack in AWS SCT. For more information about extension packs, see [Using extension packs with AWS Schema Conversion Tool](CHAP_ExtensionPack.md). 

**Topics**
+ [Privileges for MySQL as a target database](#CHAP_Source.SQLServer.ToMySQL.ConfigureTarget)
+ [SQL Server to MySQL conversion settings](#CHAP_Source.SQLServer.ToMySQL.ConversionSettings)
+ [Migration considerations](#CHAP_Source.SQLServer.ToMySQL.MigrationConsiderations)

## Privileges for MySQL as a target database
<a name="CHAP_Source.SQLServer.ToMySQL.ConfigureTarget"></a>

The privileges required for MySQL as a target are as follows:
+ CREATE ON \*.\*
+ ALTER ON \*.\*
+ DROP ON \*.\*
+ INDEX ON \*.\*
+ REFERENCES ON \*.\*
+ SELECT ON \*.\*
+ CREATE VIEW ON \*.\*
+ SHOW VIEW ON \*.\*
+ TRIGGER ON \*.\*
+ CREATE ROUTINE ON \*.\*
+ ALTER ROUTINE ON \*.\*
+ EXECUTE ON \*.\*
+ INSERT, UPDATE ON AWS\_SQLSERVER\_EXT.\*
+ INSERT, UPDATE, DELETE ON AWS\_SQLSERVER\_EXT\_DATA.\*
+ CREATE TEMPORARY TABLES ON AWS\_SQLSERVER\_EXT\_DATA.\*

You can use the following code example to create a database user and grant the privileges.

```
CREATE USER 'user_name' IDENTIFIED BY 'your_password';
GRANT CREATE ON *.* TO 'user_name';
GRANT ALTER ON *.* TO 'user_name';
GRANT DROP ON *.* TO 'user_name';
GRANT INDEX ON *.* TO 'user_name';
GRANT REFERENCES ON *.* TO 'user_name';
GRANT SELECT ON *.* TO 'user_name';
GRANT CREATE VIEW ON *.* TO 'user_name';
GRANT SHOW VIEW ON *.* TO 'user_name';
GRANT TRIGGER ON *.* TO 'user_name';
GRANT CREATE ROUTINE ON *.* TO 'user_name';
GRANT ALTER ROUTINE ON *.* TO 'user_name';
GRANT EXECUTE ON *.* TO 'user_name';
GRANT INSERT, UPDATE ON AWS_SQLSERVER_EXT.* TO 'user_name';
GRANT INSERT, UPDATE, DELETE ON AWS_SQLSERVER_EXT_DATA.* TO 'user_name';
GRANT CREATE TEMPORARY TABLES ON AWS_SQLSERVER_EXT_DATA.* TO 'user_name';
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *your\_password* with a secure password.

If you use a MySQL database version 5.7 or lower as a target, then run the following command. For MySQL databases version 8.0 and higher, this command is deprecated.

```
GRANT SELECT ON mysql.proc TO 'user_name';
```

To use Amazon RDS for MySQL or Aurora MySQL as a target, set the `lower_case_table_names` parameter to `1`. This value means that the MySQL server handles the names of objects such as tables, indexes, triggers, and databases as case insensitive. If you have turned on binary logging in your target instance, then set the `log_bin_trust_function_creators` parameter to `1`. In this case, you don't need to use the `DETERMINISTIC`, `READS SQL DATA`, or `NO SQL` characteristics to create stored functions. To configure these parameters, create a new DB parameter group or modify an existing DB parameter group.
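In the DB parameter group, the two settings described above look like the following. Apply the parameter group to your target instance, and reboot the instance if a parameter requires it.

```
lower_case_table_names = 1
log_bin_trust_function_creators = 1
```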

## SQL Server to MySQL conversion settings
<a name="CHAP_Source.SQLServer.ToMySQL.ConversionSettings"></a>

To edit SQL Server to MySQL conversion settings, in AWS SCT choose **Settings**, and then choose **Conversion settings**. From the upper list, choose **SQL Server**, and then choose **SQL Server – MySQL**. AWS SCT displays all available settings for SQL Server to MySQL conversion.

SQL Server to MySQL conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To allow your source SQL Server database to store the output of `EXEC` in a table. AWS SCT creates temporary tables and an additional procedure to emulate this feature. To use this emulation, select **Create additional routines for handling open datasets**.

## Migration considerations
<a name="CHAP_Source.SQLServer.ToMySQL.MigrationConsiderations"></a>

Consider these things when migrating a SQL Server schema to MySQL:
+ MySQL doesn’t support the `MERGE` statement. However, AWS SCT can emulate the `MERGE` statement during conversion by using the `INSERT ON DUPLICATE KEY` clause and the `UPDATE FROM` and `DELETE FROM` statements.

  For correct emulation using `INSERT ON DUPLICATE KEY`, make sure that a unique constraint or primary key exists on the target MySQL database.
+ You can use a `GOTO` statement and a label to change the order that statements are run in. Any Transact-SQL statements that follow a `GOTO` statement are skipped, and processing continues at the label. You can use `GOTO` statements and labels anywhere within a procedure, batch, or statement block. You can also nest `GOTO` statements.

  MySQL doesn’t use `GOTO` statements. When AWS SCT converts code that contains a `GOTO` statement, it converts the statement to use a `BEGIN…END` or `LOOP…END LOOP` statement. You can find examples of how AWS SCT converts `GOTO` statements in the table following.  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.SQLServer.ToMySQL.html)
+ MySQL doesn't support multistatement table-valued functions. AWS SCT simulates table-valued functions during a conversion by creating temporary tables and rewriting statements to use these temporary tables.
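As an illustration of the `MERGE` emulation (the table and column names are hypothetical, and this is a hand-written sketch rather than AWS SCT's exact output), a simple SQL Server upsert maps to `INSERT ... ON DUPLICATE KEY UPDATE`, provided that `id` is a primary or unique key on the target table:

```
-- SQL Server source (simplified)
MERGE INTO target t USING source s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

-- MySQL emulation (sketch)
INSERT INTO target (id, val)
SELECT id, val FROM source
ON DUPLICATE KEY UPDATE val = VALUES(val);
```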

# Migrating from SQL Server to PostgreSQL with AWS Schema Conversion Tool
<a name="CHAP_Source.SQLServer.ToPostgreSQL"></a>

You can use the SQL Server to PostgreSQL extension pack in AWS SCT. This extension pack emulates SQL Server database functions in the converted PostgreSQL code. Use the SQL Server to PostgreSQL extension pack to emulate SQL Server Agent and SQL Server Database Mail. For more information about extension packs, see [Using extension packs with AWS Schema Conversion Tool](CHAP_ExtensionPack.md). 

**Topics**
+ [Privileges for PostgreSQL as a target database](#CHAP_Source.SQLServer.ToPostgreSQL.ConfigurePostgreSQL)
+ [SQL Server to PostgreSQL conversion settings](#CHAP_Source.SQLServer.ToPostgreSQL.ConversionSettings)
+ [Converting SQL Server partitions to PostgreSQL version 10 partitions](#CHAP_Source.SQLServer.ToPostgreSQL.PG10Partitions)
+ [Migration considerations](#CHAP_Source.SQLServer.ToPostgreSQL.MigrationConsiderations)
+ [Using an AWS SCT extension pack to emulate SQL Server Agent in PostgreSQL](CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Agent.md)
+ [Using an AWS SCT extension pack to emulate SQL Server Database Mail in PostgreSQL](CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Mail.md)

## Privileges for PostgreSQL as a target database
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ConfigurePostgreSQL"></a>

To use PostgreSQL as a target, AWS SCT requires the `CREATE ON DATABASE` privilege. Make sure that you grant this privilege for each target PostgreSQL database.

To use the converted public synonyms, change the database default search path to `"$user", public_synonyms, public`.

You can use the following code example to create a database user and grant the privileges.

```
CREATE ROLE user_name LOGIN PASSWORD 'your_password';
GRANT CREATE ON DATABASE db_name TO user_name;
ALTER DATABASE db_name SET SEARCH_PATH = "$user", public_synonyms, public;
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *db\_name* with the name of your target database. Finally, replace *your\_password* with a secure password.

In PostgreSQL, only the schema owner or a `superuser` can drop a schema. The owner can drop a schema and all objects that this schema includes even if the owner of the schema doesn't own some of its objects.

When you use different users to convert and apply different schemas to your target database, you can get an error message when AWS SCT can't drop a schema. To avoid this error message, use the `superuser` role. 

## SQL Server to PostgreSQL conversion settings
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ConversionSettings"></a>

To edit SQL Server to PostgreSQL conversion settings, choose **Settings**, and then choose **Conversion settings**. From the upper list, choose **SQL Server**, and then choose **SQL Server – PostgreSQL**. AWS SCT displays all available settings for SQL Server to PostgreSQL conversion.

SQL Server to PostgreSQL conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To allow the use of indexes with the same name in different tables, as in SQL Server.

  In PostgreSQL, all index names that you use in the schema must be unique. To make sure that AWS SCT generates unique names for all your indexes, select **Generate unique names for indexes**.
+ To convert SQL Server procedures to PostgreSQL functions.

  PostgreSQL versions 10 and earlier don't support procedures. For customers who aren't familiar with using procedures in PostgreSQL, AWS SCT can convert procedures to functions. To do so, select **Convert procedures to functions**.
+ To emulate the output of `EXEC` in a table.

  Your source SQL Server database can store the output of `EXEC` in a table. AWS SCT creates temporary tables and an additional procedure to emulate this feature. To use this emulation, select **Create additional routines for handling open datasets**.
+ To define the template to use for the schema names in the converted code. For **Schema name generation template**, choose one of the following options:
  + **<source\_db>** – Uses the SQL Server database name as a schema name in PostgreSQL.
  + **<source\_schema>** – Uses the SQL Server schema name as a schema name in PostgreSQL.
  + **<source\_db>\_<schema>** – Uses a combination of the SQL Server database and schema names as a schema name in PostgreSQL.
+ To keep the letter case of your source object names.

  To avoid conversion of object names to lower case, select **Avoid casting to lower case for case sensitive operations**. This option applies only when you turn on the case sensitivity option in your target database.
+ To keep the parameter names from your source database.

  To add double quotation marks to the names of parameters in the converted code, select **Keep original parameter names**.
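For example, with **Convert procedures to functions** selected, a simple SQL Server procedure might be converted along these lines. This is a hand-written sketch with hypothetical object names, not AWS SCT's exact output.

```
-- SQL Server source (simplified)
CREATE PROCEDURE dbo.reset_counters
AS
UPDATE counters SET value = 0;

-- PostgreSQL function (sketch)
CREATE FUNCTION reset_counters()
RETURNS void AS $$
UPDATE counters SET value = 0;
$$ LANGUAGE SQL;
```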

## Converting SQL Server partitions to PostgreSQL version 10 partitions
<a name="CHAP_Source.SQLServer.ToPostgreSQL.PG10Partitions"></a>

When you convert a Microsoft SQL Server database to Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) or Amazon Relational Database Service for PostgreSQL (Amazon RDS for PostgreSQL), be aware of the following.

In SQL Server, you create partitions with partition functions. When converting from a SQL Server partitioned table to a PostgreSQL version 10 partitioned table, be aware of several potential issues:
+ SQL Server allows you to partition a table using a column without a NOT NULL constraint. In that case, all NULL values go to the leftmost partition. PostgreSQL doesn’t support NULL values for RANGE partitioning.
+ SQL Server allows you to create primary and unique keys for partitioned tables. For PostgreSQL, you create primary or unique keys for each partition directly. Thus, PRIMARY KEY or UNIQUE constraints must be removed from their parent table when migrating to PostgreSQL. The resulting key names take the format `<original_key_name>_<partition_number>`.
+ SQL Server allows you to create foreign key constraints from and to partitioned tables. PostgreSQL doesn’t support foreign keys referencing partitioned tables. Also, PostgreSQL doesn’t support foreign key references from a partitioned table to another table.
+ SQL Server allows you to create indexes for partitioned tables. For PostgreSQL, an index should be created for each partition directly. Thus, indexes must be removed from their parent tables when migrating to PostgreSQL. The resulting index names take the format `<original_index_name>_<partition_number>`.
+ PostgreSQL doesn’t support partitioned indexes.

## Migration considerations
<a name="CHAP_Source.SQLServer.ToPostgreSQL.MigrationConsiderations"></a>

Some things to consider when migrating a SQL Server schema to PostgreSQL: 
+ In PostgreSQL, all object names in a schema must be unique, including indexes. Index names must be unique in the schema of the base table. In SQL Server, an index name can be the same for different tables.

  To ensure the uniqueness of index names, AWS SCT gives you the option to generate unique index names if your index names are not unique. To do this, choose the option **Generate unique index names** in the project properties. By default, this option is enabled. If this option is enabled, unique index names are created using the format IX\_table\_name\_index\_name. If this option is disabled, index names aren’t changed.
+ A GOTO statement and a label can be used to change the order that statements are run in. Any Transact-SQL statements that follow a GOTO statement are skipped and processing continues at the label. GOTO statements and labels can be used anywhere within a procedure, batch, or statement block. GOTO statements can also be nested.

  PostgreSQL doesn’t use GOTO statements. When AWS SCT converts code that contains a GOTO statement, it converts the statement to use a BEGIN…END or LOOP…END LOOP statement. You can find examples of how AWS SCT converts GOTO statements in the table following.  
**SQL Server GOTO statements and the converted PostgreSQL statements**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.SQLServer.ToPostgreSQL.html)
+ PostgreSQL doesn't support a MERGE statement. AWS SCT emulates the behavior of a MERGE statement in the following ways:
  + By using the INSERT ON CONFLICT construction.
  + By using the UPDATE FROM DML statement, as for a MERGE without a WHEN NOT MATCHED clause.
  + By using a CURSOR, as for a MERGE with a DELETE clause or a complex MERGE ON condition statement.
+ AWS SCT can add database triggers to the object tree when Amazon RDS is the target.
+ AWS SCT can add server-level triggers to the object tree when Amazon RDS is the target.
+ SQL Server automatically creates and manages `deleted` and `inserted` tables. You can use these temporary, memory-resident tables to test the effects of certain data modifications and to set conditions for DML trigger actions. AWS SCT can convert the usage of these tables inside DML trigger statements.
+ AWS SCT can add linked servers to the object tree when Amazon RDS is the target.
+ When migrating from Microsoft SQL Server to PostgreSQL, the built-in SUSER\_SNAME function is converted as follows:
  + SUSER\_SNAME – Returns the login name associated with a security identification number (SID).
  + SUSER\_SNAME(<server\_user\_sid>) – Not supported.
  + SUSER\_SNAME() CURRENT\_USER – Returns the user name of the current execution context.
  + SUSER\_SNAME(NULL) – Returns NULL.
+ Converting table-valued functions is supported. Table-valued functions return a table and can take the place of a table in a query.
+ PATINDEX returns the starting position of the first occurrence of a pattern in a specified expression on all valid text and character data types. It returns zero if the pattern is not found. When converting from SQL Server to Amazon RDS for PostgreSQL, AWS SCT replaces application code that uses PATINDEX with `aws_sqlserver_ext.patindex(<pattern character>, <expression character varying>)`.
+ In SQL Server, a user-defined table type is a type that represents the definition of a table structure. You use a user-defined table type to declare table-value parameters for stored procedures or functions. You can also use a user-defined table type to declare table variables that you want to use in a batch or in the body of a stored procedure or function. AWS SCT emulates this type in PostgreSQL by creating a temporary table.
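As an illustration of the first MERGE emulation approach (the table and column names are hypothetical, and this is a hand-written sketch rather than AWS SCT's exact output), a MERGE without additional clauses maps to `INSERT ... ON CONFLICT`:

```
-- SQL Server source (simplified)
MERGE INTO target t USING source s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

-- PostgreSQL emulation (sketch); requires a unique constraint on id
INSERT INTO target (id, val)
SELECT id, val FROM source
ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val;
```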

When converting from SQL Server to PostgreSQL, AWS SCT converts SQL Server system objects into recognizable objects in PostgreSQL. The following table shows how the system objects are converted. 

 


| MS SQL Server use cases | PostgreSQL substitution | 
| --- | --- | 
| SYS.SCHEMAS | AWS\_SQLSERVER\_EXT.SYS\_SCHEMAS | 
| SYS.TABLES | AWS\_SQLSERVER\_EXT.SYS\_TABLES | 
| SYS.VIEWS | AWS\_SQLSERVER\_EXT.SYS\_VIEWS | 
| SYS.ALL\_VIEWS | AWS\_SQLSERVER\_EXT.SYS\_ALL\_VIEWS | 
| SYS.TYPES | AWS\_SQLSERVER\_EXT.SYS\_TYPES | 
| SYS.COLUMNS | AWS\_SQLSERVER\_EXT.SYS\_COLUMNS | 
| SYS.ALL\_COLUMNS | AWS\_SQLSERVER\_EXT.SYS\_ALL\_COLUMNS | 
| SYS.FOREIGN\_KEYS | AWS\_SQLSERVER\_EXT.SYS\_FOREIGN\_KEYS | 
| SYS.SYSFOREIGNKEYS | AWS\_SQLSERVER\_EXT.SYS\_SYSFOREIGNKEYS | 
| SYS.FOREIGN\_KEY\_COLUMNS | AWS\_SQLSERVER\_EXT.SYS\_FOREIGN\_KEY\_COLUMNS | 
| SYS.KEY\_CONSTRAINTS | AWS\_SQLSERVER\_EXT.SYS\_KEY\_CONSTRAINTS | 
| SYS.IDENTITY\_COLUMNS | AWS\_SQLSERVER\_EXT.SYS\_IDENTITY\_COLUMNS | 
| SYS.PROCEDURES | AWS\_SQLSERVER\_EXT.SYS\_PROCEDURES | 
| SYS.INDEXES | AWS\_SQLSERVER\_EXT.SYS\_INDEXES | 
| SYS.SYSINDEXES | AWS\_SQLSERVER\_EXT.SYS\_SYSINDEXES | 
| SYS.OBJECTS | AWS\_SQLSERVER\_EXT.SYS\_OBJECTS | 
| SYS.ALL\_OBJECTS | AWS\_SQLSERVER\_EXT.SYS\_ALL\_OBJECTS | 
| SYS.SYSOBJECTS | AWS\_SQLSERVER\_EXT.SYS\_SYSOBJECTS | 
| SYS.SQL\_MODULES | AWS\_SQLSERVER\_EXT.SYS\_SQL\_MODULES | 
| SYS.DATABASES | AWS\_SQLSERVER\_EXT.SYS\_DATABASES | 
| INFORMATION\_SCHEMA.SCHEMATA | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_SCHEMATA | 
| INFORMATION\_SCHEMA.VIEWS | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_VIEWS | 
| INFORMATION\_SCHEMA.TABLES | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_TABLES | 
| INFORMATION\_SCHEMA.COLUMNS | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_COLUMNS | 
| INFORMATION\_SCHEMA.CHECK\_CONSTRAINTS | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_CHECK\_CONSTRAINTS | 
| INFORMATION\_SCHEMA.REFERENTIAL\_CONSTRAINTS | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_REFERENTIAL\_CONSTRAINTS | 
| INFORMATION\_SCHEMA.TABLE\_CONSTRAINTS | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_TABLE\_CONSTRAINTS | 
| INFORMATION\_SCHEMA.KEY\_COLUMN\_USAGE | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_KEY\_COLUMN\_USAGE | 
| INFORMATION\_SCHEMA.CONSTRAINT\_TABLE\_USAGE | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_CONSTRAINT\_TABLE\_USAGE | 
| INFORMATION\_SCHEMA.CONSTRAINT\_COLUMN\_USAGE | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_CONSTRAINT\_COLUMN\_USAGE | 
| INFORMATION\_SCHEMA.ROUTINES | AWS\_SQLSERVER\_EXT.INFORMATION\_SCHEMA\_ROUTINES | 
| SYS.SYSPROCESSES | AWS\_SQLSERVER\_EXT.SYS\_SYSPROCESSES | 
| sys.system\_objects | AWS\_SQLSERVER\_EXT.SYS\_SYSTEM\_OBJECTS | 

# Using an AWS SCT extension pack to emulate SQL Server Agent in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Agent"></a>

SQL Server Agent is a Microsoft Windows service that runs SQL Server jobs. SQL Server Agent runs jobs on a schedule, in response to a specific event, or on demand. For more information about SQL Server Agent, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/ssms/agent/sql-server-agent?view=sql-server-ver15).

PostgreSQL doesn't have an equivalent for SQL Server Agent. To emulate the SQL Server Agent features, AWS SCT creates an extension pack. This extension pack uses AWS Lambda and Amazon CloudWatch. AWS Lambda implements the interface that you use to manage schedules and run jobs. Amazon CloudWatch maintains the schedule rules.

AWS Lambda and Amazon CloudWatch use a JSON parameter to interact. This JSON parameter has the following structure.

```
{
    "mode": mode,
    "parameters": {
        list of parameters
    },
    "callback": procedure name
}
```

In the preceding example, *`mode`* is the type of the task, `list of parameters` is a set of parameters that depend on the type of the task, and `procedure name` is the name of the procedure that runs after the task is completed.
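The structure above can be sketched in Python. Here, `build_task_payload` is a hypothetical helper name used for illustration; the field names follow the JSON structure shown in this section.

```python
import json

def build_task_payload(mode, parameters, callback=None):
    """Assemble the JSON parameter that the Lambda function and the
    CloudWatch rule exchange: a task type, task-specific parameters,
    and an optional callback procedure name."""
    payload = {"mode": mode, "parameters": parameters}
    if callback is not None:
        payload["callback"] = callback
    return json.dumps(payload)

# A simple job that calls a procedure.
payload = build_task_payload(
    "run_job",
    {"vendor": "mysql", "cmd": "lambda_db.nightly_job"},
)
```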

AWS SCT uses one Lambda function to control and run jobs. The CloudWatch rule starts the run of the job and provides the necessary information to start the job. When the CloudWatch rule triggers, it starts the Lambda function using the parameters from the rule.

To create a simple job that calls a procedure, use the following format.

```
{
    "mode": "run_job",
    "parameters": {
        "vendor": "mysql",
        "cmd": "lambda_db.nightly_job"
    }
}
```

To create a job with several steps, use the following format.

```
{
    "mode": "run_job",
    "parameters": {
        "job_name": "Job1",
        "enabled": "true",
        "start_step_id": 1,
        "notify_level_email": [0|1|2|3],
        "notify_email": email,
        "delete_level": [0|1|2|3],
        "job_callback": "ProcCallBackJob(job_name, code, message)",
        "step_callback": "ProcCallBackStep(job_name, step_id, code, message)"
    },
    "steps": [
        {
            "id":1,
            "cmd": "ProcStep1",
            "cmdexec_success_code": 0,
            "on_success_action": [1|2|3|4],
            "on_success_step_id": 1,
            "on_fail_action": 0,
            "on_fail_step_id": 0,
            "retry_attempts": number,
            "retry_interval": number
        },
        {
            "id":2,
            "cmd": "ProcStep2",
            "cmdexec_success_code": 0,
            "on_success_action": [1|2|3|4],
            "on_success_step_id": 0,
            "on_fail_action": 0,
            "on_fail_step_id": 0,
            "retry_attempts": number,
            "retry_interval": number
        },
        ...
]
}
```

To emulate the SQL Server Agent behavior in PostgreSQL, the AWS SCT extension pack also creates the following tables and procedures.

## Tables that emulate SQL Server Agent in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Agent.Tables"></a>

To emulate SQL Server Agent, the extension pack uses the following tables:

**sysjobs**  
Stores the information about the jobs.

**sysjobsteps**  
Stores the information about the steps of a job.

**sysschedules**  
Stores the information about the job schedules.

**sysjobschedules**  
Stores the schedule information for individual jobs. 

**sysjobhistory**  
Stores the information about the runs of scheduled jobs.

## Procedures that emulate SQL Server Agent in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Agent.Procedures"></a>

To emulate SQL Server Agent, the extension pack uses the following procedures:

**sp\_add\_job**  
Adds a new job.

**sp\_add\_jobstep**  
Adds a step to a job.

**sp\_add\_schedule**  
Creates a new schedule rule in Amazon CloudWatch. You can use this schedule with any number of jobs.

**sp\_attach\_schedule**  
Sets a schedule for the selected job.

**sp\_add\_jobschedule**  
Creates a schedule rule for a job in Amazon CloudWatch and sets the target for this rule.

**sp\_update\_job**  
Updates the attributes of the previously created job.

**sp\_update\_jobstep**  
Updates the attributes of the step in a job.

**sp\_update\_schedule**  
Updates the attributes of a schedule rule in Amazon CloudWatch.

**sp\_update\_jobschedule**  
Updates the attributes of the schedule for the specified job.

**sp\_delete\_job**  
Deletes a job.

**sp\_delete\_jobstep**  
Deletes a job step from a job.

**sp\_delete\_schedule**  
Deletes a schedule.

**sp\_delete\_jobschedule**  
Deletes the schedule rule for the specified job from Amazon CloudWatch.

**sp\_detach\_schedule**  
Removes an association between a schedule and a job.

**get\_jobs, update\_job**  
Internal procedures that interact with AWS Elastic Beanstalk.

**sp\_verify\_job\_date, sp\_verify\_job\_time, sp\_verify\_job, sp\_verify\_jobstep, sp\_verify\_schedule, sp\_verify\_job\_identifiers, sp\_verify\_schedule\_identifiers**  
Internal procedures that check settings.

## Syntax for procedures that emulate SQL Server Agent in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Agent.Syntax"></a>

The `aws_sqlserver_ext.sp_add_job` procedure in the extension pack emulates the `msdb.dbo.sp_add_job` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-add-job-transact-sql?view=sql-server-ver15). 

```
par_job_name varchar,
par_enabled smallint = 1,
par_description varchar = NULL::character varying,
par_start_step_id integer = 1,
par_category_name varchar = NULL::character varying,
par_category_id integer = NULL::integer,
par_owner_login_name varchar = NULL::character varying,
par_notify_level_eventlog integer = 2,
par_notify_level_email integer = 0,
par_notify_level_netsend integer = 0,
par_notify_level_page integer = 0,
par_notify_email_operator_name varchar = NULL::character varying,
par_notify_netsend_operator_name varchar = NULL::character varying,
par_notify_page_operator_name varchar = NULL::character varying,
par_delete_level integer = 0,
inout par_job_id integer = NULL::integer,
par_originating_server varchar = NULL::character varying,
out returncode integer
```

The `aws_sqlserver_ext.sp_add_jobstep` procedure in the extension pack emulates the `msdb.dbo.sp_add_jobstep` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-add-jobstep-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer,
par_job_name varchar = NULL::character varying,
par_step_id integer = NULL::integer,
par_step_name varchar = NULL::character varying,
par_subsystem varchar = 'TSQL'::bpchar,
par_command text = NULL::text,
par_additional_parameters text = NULL::text,
par_cmdexec_success_code integer = 0,
par_on_success_action smallint = 1,
par_on_success_step_id integer = 0,
par_on_fail_action smallint = 2,
par_on_fail_step_id integer = 0,
par_server varchar = NULL::character varying,
par_database_name varchar = NULL::character varying,
par_database_user_name varchar = NULL::character varying,
par_retry_attempts integer = 0,
par_retry_interval integer = 0,
par_os_run_priority integer = 0,
par_output_file_name varchar = NULL::character varying,
par_flags integer = 0,
par_proxy_id integer = NULL::integer,
par_proxy_name varchar = NULL::character varying,
inout par_step_uid char = NULL::bpchar,
out returncode integer
```

The `aws_sqlserver_ext.sp_add_schedule` procedure in the extension pack emulates the `msdb.dbo.sp_add_schedule` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-add-schedule-transact-sql?view=sql-server-ver15). 

```
par_schedule_name varchar,
par_enabled smallint = 1,
par_freq_type integer = 0,
par_freq_interval integer = 0,
par_freq_subday_type integer = 0,
par_freq_subday_interval integer = 0,
par_freq_relative_interval integer = 0,
par_freq_recurrence_factor integer = 0,
par_active_start_date integer = NULL::integer,
par_active_end_date integer = 99991231,
par_active_start_time integer = 0,
par_active_end_time integer = 235959,
par_owner_login_name varchar = NULL::character varying,
inout par_schedule_uid char = NULL::bpchar,
inout par_schedule_id integer = NULL::integer,
par_originating_server varchar = NULL::character varying,
out returncode integer
```

The `aws_sqlserver_ext.sp_attach_schedule` procedure in the extension pack emulates the `msdb.dbo.sp_attach_schedule` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-attach-schedule-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer,
par_job_name varchar = NULL::character varying,
par_schedule_id integer = NULL::integer,
par_schedule_name varchar = NULL::character varying,
par_automatic_post smallint = 1,
out returncode integer
```

The `aws_sqlserver_ext.sp_add_jobschedule` procedure in the extension pack emulates the `msdb.dbo.sp_add_jobschedule` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-add-jobschedule-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer,
par_job_name varchar = NULL::character varying,
par_name varchar = NULL::character varying,
par_enabled smallint = 1,
par_freq_type integer = 1,
par_freq_interval integer = 0,
par_freq_subday_type integer = 0,
par_freq_subday_interval integer = 0,
par_freq_relative_interval integer = 0,
par_freq_recurrence_factor integer = 0,
par_active_start_date integer = NULL::integer,
par_active_end_date integer = 99991231,
par_active_start_time integer = 0,
par_active_end_time integer = 235959,
inout par_schedule_id integer = NULL::integer,
par_automatic_post smallint = 1,
inout par_schedule_uid char = NULL::bpchar,
out returncode integer
```

The `aws_sqlserver_ext.sp_delete_job` procedure in the extension pack emulates the `msdb.dbo.sp_delete_job` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-delete-job-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer,
par_job_name varchar = NULL::character varying,
par_originating_server varchar = NULL::character varying,
par_delete_history smallint = 1,
par_delete_unused_schedule smallint = 1,
out returncode integer
```

The `aws_sqlserver_ext.sp_delete_jobstep` procedure in the extension pack emulates the `msdb.dbo.sp_delete_jobstep` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-delete-jobsteplog-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer,
par_job_name varchar = NULL::character varying,
par_step_id integer = NULL::integer,
out returncode integer
```

The `aws_sqlserver_ext.sp_delete_jobschedule` procedure in the extension pack emulates the `msdb.dbo.sp_delete_jobschedule` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-delete-jobschedule-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer,
par_job_name varchar = NULL::character varying,
par_name varchar = NULL::character varying,
par_keep_schedule integer = 0,
par_automatic_post smallint = 1,
out returncode integer
```

The `aws_sqlserver_ext.sp_delete_schedule` procedure in the extension pack emulates the `msdb.dbo.sp_delete_schedule` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-delete-schedule-transact-sql?view=sql-server-ver15). 

```
par_schedule_id integer = NULL::integer,
par_schedule_name varchar = NULL::character varying,
par_force_delete smallint = 0,
par_automatic_post smallint = 1,
out returncode integer
```

The `aws_sqlserver_ext.sp_detach_schedule` procedure in the extension pack emulates the `msdb.dbo.sp_detach_schedule` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-detach-schedule-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer,
par_job_name varchar = NULL::character varying,
par_schedule_id integer = NULL::integer,
par_schedule_name varchar = NULL::character varying,
par_delete_unused_schedule smallint = 0,
par_automatic_post smallint = 1,
out returncode integer
```

The `aws_sqlserver_ext.sp_update_job` procedure in the extension pack emulates the `msdb.dbo.sp_update_job` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-update-job-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer
par_job_name varchar = NULL::character varying
par_new_name varchar = NULL::character varying
par_enabled smallint = NULL::smallint
par_description varchar = NULL::character varying
par_start_step_id integer = NULL::integer
par_category_name varchar = NULL::character varying
par_owner_login_name varchar = NULL::character varying
par_notify_level_eventlog integer = NULL::integer
par_notify_level_email integer = NULL::integer
par_notify_level_netsend integer = NULL::integer
par_notify_level_page integer = NULL::integer
par_notify_email_operator_name varchar = NULL::character varying
par_notify_netsend_operator_name varchar = NULL::character varying
par_notify_page_operator_name varchar = NULL::character varying
par_delete_level integer = NULL::integer
par_automatic_post smallint = 1
out returncode integer
```

The `aws_sqlserver_ext.sp_update_jobschedule` procedure in the extension pack emulates the `msdb.dbo.sp_update_jobschedule` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-update-jobschedule-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer
par_job_name varchar = NULL::character varying
par_name varchar = NULL::character varying
par_new_name varchar = NULL::character varying
par_enabled smallint = NULL::smallint
par_freq_type integer = NULL::integer
par_freq_interval integer = NULL::integer
par_freq_subday_type integer = NULL::integer
par_freq_subday_interval integer = NULL::integer
par_freq_relative_interval integer = NULL::integer
par_freq_recurrence_factor integer = NULL::integer
par_active_start_date integer = NULL::integer
par_active_end_date integer = NULL::integer
par_active_start_time integer = NULL::integer
par_active_end_time integer = NULL::integer
par_automatic_post smallint = 1
out returncode integer
```

The `aws_sqlserver_ext.sp_update_jobstep` procedure in the extension pack emulates the `msdb.dbo.sp_update_jobstep` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-update-jobstep-transact-sql?view=sql-server-ver15). 

```
par_job_id integer = NULL::integer
par_job_name varchar = NULL::character varying
par_step_id integer = NULL::integer
par_step_name varchar = NULL::character varying
par_subsystem varchar = NULL::character varying
par_command text = NULL::text
par_additional_parameters text = NULL::text
par_cmdexec_success_code integer = NULL::integer
par_on_success_action smallint = NULL::smallint
par_on_success_step_id integer = NULL::integer
par_on_fail_action smallint = NULL::smallint
par_on_fail_step_id integer = NULL::integer
par_server varchar = NULL::character varying
par_database_name varchar = NULL::character varying
par_database_user_name varchar = NULL::character varying
par_retry_attempts integer = NULL::integer
par_retry_interval integer = NULL::integer
par_os_run_priority integer = NULL::integer
par_output_file_name varchar = NULL::character varying
par_flags integer = NULL::integer
par_proxy_id integer = NULL::integer
par_proxy_name varchar = NULL::character varying
out returncode integer
```

The `aws_sqlserver_ext.sp_update_schedule` procedure in the extension pack emulates the `msdb.dbo.sp_update_schedule` procedure. For more information about the source SQL Server Agent procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-update-schedule-transact-sql?view=sql-server-ver15). 

```
par_schedule_id integer = NULL::integer
par_name varchar = NULL::character varying
par_new_name varchar = NULL::character varying
par_enabled smallint = NULL::smallint
par_freq_type integer = NULL::integer
par_freq_interval integer = NULL::integer
par_freq_subday_type integer = NULL::integer
par_freq_subday_interval integer = NULL::integer
par_freq_relative_interval integer = NULL::integer
par_freq_recurrence_factor integer = NULL::integer
par_active_start_date integer = NULL::integer
par_active_end_date integer = NULL::integer
par_active_start_time integer = NULL::integer
par_active_end_time integer = NULL::integer
par_owner_login_name varchar = NULL::character varying
par_automatic_post smallint = 1
out returncode integer
```

## Examples for using procedures that emulate SQL Server Agent in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Agent.Examples"></a>

To add a new job, use the `aws_sqlserver_ext.sp_add_job` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sp_add_job (
    par_job_name := 'test_job',
    par_enabled := 1::smallint,
    par_start_step_id := 1::integer,
    par_category_name := '[Uncategorized (Local)]',
    par_owner_login_name := 'sa');
```

To add a new job step, use the `aws_sqlserver_ext.sp_add_jobstep` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sp_add_jobstep (
    par_job_name := 'test_job',
    par_step_id := 1::smallint,
    par_step_name := 'test_job_step1',
    par_subsystem := 'TSQL',
    par_command := 'EXECUTE [dbo].[PROC_TEST_JOB_STEP1];',
    par_server := NULL,
    par_database_name := 'GOLD_TEST_SS');
```

To add a simple schedule, use the `aws_sqlserver_ext.sp_add_schedule` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sp_add_schedule(
    par_schedule_name := 'RunOnce',
    par_freq_type := 1,
    par_active_start_time := 233000);
```

To set a schedule for a job, use the `aws_sqlserver_ext.sp_attach_schedule` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sp_attach_schedule (
    par_job_name := 'test_job',
    par_schedule_name := 'NightlyJobs');
```

To create a schedule for a job, use the `aws_sqlserver_ext.sp_add_jobschedule` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sp_add_jobschedule (
    par_job_name := 'test_job2',
    par_name := 'test_schedule2',
    par_enabled := 1::smallint,
    par_freq_type := 4,
    par_freq_interval := 1,
    par_freq_subday_type := 4,
    par_freq_subday_interval := 1,
    par_freq_relative_interval := 0,
    par_freq_recurrence_factor := 0,
    par_active_start_date := 20100801,
    par_active_end_date := 99991231,
    par_active_start_time := 0,
    par_active_end_time := 0);
```
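The update procedures accept the same named parameters as their add counterparts. For example, to switch an existing schedule to a daily recurrence, you might call `aws_sqlserver_ext.sp_update_schedule` as shown following. This is a sketch based on the parameter list in the preceding syntax section; the schedule name is a hypothetical example.

```
SELECT * FROM aws_sqlserver_ext.sp_update_schedule (
    par_name := 'test_schedule2',  /* existing schedule name (hypothetical example) */
    par_enabled := 1::smallint,
    par_freq_type := 4,            /* Daily */
    par_freq_interval := 1);
```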

## Use case examples for emulating SQL Server Agent in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Agent.UseCases"></a>

If your source database code uses SQL Server Agent to run jobs, you can use the SQL Server to PostgreSQL extension pack for AWS SCT to convert this code to PostgreSQL. The extension pack uses AWS Lambda functions to emulate the behavior of SQL Server Agent.

You can create a new AWS Lambda function or register an existing function.

**To create a new AWS Lambda function**

1. In AWS SCT, in the target database tree, open the context (right-click) menu, choose **Apply extension pack for**, and then choose **PostgreSQL**. 

   The extension pack wizard appears. 

1. On the **SQL Server Agent emulation service** tab, do the following: 
   + Choose **Create an AWS Lambda function**.
   + For **Database login**, enter the name of the target database user.
   + For **Database password**, enter the password for the user name that you entered in the preceding step.
   + For **Python library folder**, enter the path to your Python library folder.
   + Choose **Create AWS Lambda function**, and then choose **Next**.

**To register an AWS Lambda function that you deployed earlier**
+ Run the following script on your target database.

  ```
  SELECT
      FROM aws_sqlserver_ext.set_service_setting(
          p_service := 'JOB', 
          p_setting := 'LAMBDA_ARN', 
          p_value := ARN)
  ```

  In the preceding example, *`ARN`* is the Amazon Resource Name (ARN) of the deployed AWS Lambda function.

The following example creates a simple task that consists of one step. Every five minutes, this task runs the previously created `job_example` function. This function inserts records into the `job_example_table` table.

**To create this simple task**

1. Create a job using the `aws_sqlserver_ext.sp_add_job` function as shown following.

   ```
   SELECT
       FROM aws_sqlserver_ext.sp_add_job (
           par_job_name := 'test_simple_job');
   ```

1. Create a job step using the `aws_sqlserver_ext.sp_add_jobstep` function as shown following.

   ```
   SELECT
       FROM aws_sqlserver_ext.sp_add_jobstep (
           par_job_name := 'test_simple_job', 
           par_step_name := 'test_simple_job_step1', 
           par_command := 'PERFORM job_simple_example;');
   ```

   The job step specifies what the function does.

1. Create a schedule for the job using the `aws_sqlserver_ext.sp_add_jobschedule` function as shown following.

   ```
   SELECT
       FROM aws_sqlserver_ext.sp_add_jobschedule (
           par_job_name := 'test_simple_job', 
           par_name := 'test_schedule', 
           par_freq_type := 4, /* Daily */
           par_freq_interval := 1, /* frequency_interval is unused */
           par_freq_subday_type := 4, /* Minutes */
           par_freq_subday_interval := 5 /* 5 minutes */);
   ```

   The schedule specifies when and how often the job runs.

To delete this job, use the `aws_sqlserver_ext.sp_delete_job` function as shown following.

```
PERFORM aws_sqlserver_ext.sp_delete_job(
    par_job_name := 'PeriodicJob1'::character varying,
    par_delete_history := 1::smallint,
    par_delete_unused_schedule := 1::smallint);
```

# Using an AWS SCT extension pack to emulate SQL Server Database Mail in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Mail"></a>

You can use SQL Server Database Mail to send email messages to users from the SQL Server Database Engine or Azure SQL Managed Instance. These email messages can contain query results or include files from any resource on your network. For more information about SQL Server Database Mail, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/database-mail/database-mail?view=sql-server-ver15).

PostgreSQL doesn't have an equivalent for SQL Server Database Mail. To emulate the SQL Server Database Mail features, AWS SCT creates an extension pack that uses AWS Lambda and Amazon Simple Email Service (Amazon SES). The Lambda function provides an interface to the Amazon SES email sending service. To set up this interaction, add the Amazon Resource Name (ARN) of your Lambda function to your email account. 

For a new email account, use the following command.

```
do
$$
begin
PERFORM sysmail_add_account_sp (
    par_account_name := 'your_account_name',
    par_email_address := 'your_account_email',
    par_display_name := 'your_account_display_name',
    par_mailserver_type := 'AWSLAMBDA',
    par_mailserver_name := 'ARN'
);
end;
$$ language plpgsql;
```

To add the ARN of your Lambda function to the existing email account, use the following command.

```
do
$$
begin
PERFORM sysmail_update_account_sp (
    par_account_name := 'existing_account_name',
    par_mailserver_type := 'AWSLAMBDA',
    par_mailserver_name := 'ARN'
);
end;
$$ language plpgsql;
```

In the preceding examples, *`ARN`* is the ARN of your Lambda function.
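After you create an account, you typically also create a user profile and attach the account to it before sending mail. The following sketch uses the extension pack's `sysmail_add_profile_sp` and `sysmail_add_profileaccount_sp` procedures, described later in this section; the profile name and account name are hypothetical examples.

```
do
$$
begin
PERFORM sysmail_add_profile_sp (
    par_profile_name := 'Administrator',
    par_description := 'Default mail profile'
);
PERFORM sysmail_add_profileaccount_sp (
    par_profile_name := 'Administrator',
    par_account_name := 'your_account_name',
    par_sequence_number := 1
);
end;
$$ language plpgsql;
```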

To emulate the SQL Server Database Mail behavior in PostgreSQL, the AWS SCT extension pack uses the following tables, views, and procedures.

## Tables that emulate SQL Server Database Mail in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Mail.Tables"></a>

To emulate SQL Server Database Mail, the extension pack uses the following tables:

**sysmail\_account**  
Stores the information about the email accounts.

**sysmail\_profile**  
Stores the information about the user profiles.

**sysmail\_server**  
Stores the information about the email servers.

**sysmail\_mailitems**  
Stores the list of the email messages.

**sysmail\_attachments**  
Contains one row for each email attachment.

**sysmail\_log**  
Stores the service information about sending email messages.

**sysmail\_profileaccount**  
Stores the information about the user profiles and email accounts.

## Views that emulate SQL Server Database Mail in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Mail.Views"></a>

To emulate SQL Server Database Mail, AWS SCT creates the following views in the PostgreSQL database to ensure compatibility. The extension pack doesn't use them, but your converted code can query these views.

**sysmail\_allitems**  
Includes a list of all emails.

**sysmail\_faileditems**  
Includes a list of emails that couldn't be sent.

**sysmail\_sentitems**  
Includes a list of sent emails.

**sysmail\_unsentitems**  
Includes a list of emails that aren't sent yet.

**sysmail\_mailattachments**  
Includes a list of attached files.
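Because converted code can query these views, you can use them to monitor mail delivery. For example, the following query lists emails that couldn't be sent. This sketch assumes the views live in the `aws_sqlserver_ext` schema, as the procedures in this extension pack do.

```
SELECT *
FROM aws_sqlserver_ext.sysmail_faileditems;
```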

## Procedures that emulate SQL Server Database Mail in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Mail.Procedures"></a>

To emulate SQL Server Database Mail, the extension pack uses the following procedures:

**sp\_send\_dbmail**  
Sends an email to the specified recipients.

**sysmail\_add\_profile\_sp**  
Creates a new user profile.

**sysmail\_add\_account\_sp**  
Creates a new email account that stores information such as Simple Mail Transfer Protocol (SMTP) credentials.

**sysmail\_add\_profileaccount\_sp**  
Adds an email account to the specified user profile.

**sysmail\_update\_profile\_sp**  
Changes the attributes of the user profile, such as its description and name.

**sysmail\_update\_account\_sp**  
Changes the information in the existing email account.

**sysmail\_update\_profileaccount\_sp**  
Updates the email account information in the specified user profile.

**sysmail\_delete\_profileaccount\_sp**  
Removes an email account from the specified user profile.

**sysmail\_delete\_account\_sp**  
Deletes the email account.

**sysmail\_delete\_profile\_sp**  
Deletes the user profile.

**sysmail\_delete\_mailitems\_sp**  
Deletes emails from internal tables.

**sysmail\_help\_profile\_sp**  
Displays information about the user profile.

**sysmail\_help\_account\_sp**  
Displays information about the email account.

**sysmail\_help\_profileaccount\_sp**  
Displays information about email accounts associated with the user profile.

**sysmail\_dbmail\_json**  
An internal procedure that generates JSON requests for AWS Lambda functions.

**sysmail\_verify\_profile\_sp, sysmail\_verify\_account\_sp, sysmail\_verify\_addressparams\_sp**  
Internal procedures that check settings.

**sp\_get\_dbmail, sp\_set\_dbmail, sysmail\_dbmail\_xml**  
Deprecated internal procedures.

## Syntax for procedures that emulate SQL Server Database Mail in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Mail.Syntax"></a>

The `aws_sqlserver_ext.sp_send_dbmail` procedure in the extension pack emulates the `msdb.dbo.sp_send_dbmail` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql?view=sql-server-ver15).

```
par_profile_name varchar = NULL::character varying,
par_recipients text = NULL::text,
par_copy_recipients text = NULL::text,
par_blind_copy_recipients text = NULL::text,
par_subject varchar = NULL::character varying,
par_body text = NULL::text,
par_body_format varchar = NULL::character varying,
par_importance varchar = 'NORMAL'::character varying,
par_sensitivity varchar = 'NORMAL'::character varying,
par_file_attachments text = NULL::text,
par_query text = NULL::text,
par_execute_query_database varchar = NULL::character varying,
par_attach_query_result_as_file smallint = 0,
par_query_attachment_filename varchar = NULL::character varying,
par_query_result_header smallint = 1,
par_query_result_width integer = 256,
par_query_result_separator VARCHAR = ' '::character varying,
par_exclude_query_output smallint = 0,
par_append_query_error smallint = 0,
par_query_no_truncate smallint = 0,
par_query_result_no_padding smallint = 0,
out par_mailitem_id integer,
par_from_address text = NULL::text,
par_reply_to text = NULL::text,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_delete_mailitems_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_delete_mailitems_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-delete-mailitems-sp-transact-sql?view=sql-server-ver15).

```
par_sent_before timestamp = NULL::timestamp without time zone,
par_sent_status varchar = NULL::character varying,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_add_profile_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_add_profile_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-profile-sp-transact-sql?view=sql-server-ver15).

```
par_profile_name varchar,
par_description varchar = NULL::character varying,
out par_profile_id integer,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_add_account_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_add_account_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-account-sp-transact-sql?view=sql-server-ver15).

```
par_account_name varchar
par_email_address varchar
par_display_name varchar = NULL::character varying
par_replyto_address varchar = NULL::character varying
par_description varchar = NULL::character varying
par_mailserver_name varchar = NULL::character varying
par_mailserver_type varchar = 'SMTP'::bpchar
par_port integer = 25
par_username varchar = NULL::character varying
par_password varchar = NULL::character varying
par_use_default_credentials smallint = 0
par_enable_ssl smallint = 0
out par_account_id integer
out returncode integer
```

The `aws_sqlserver_ext.sysmail_add_profileaccount_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_add_profileaccount_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-profileaccount-sp-transact-sql?view=sql-server-ver15).

```
par_profile_id integer = NULL::integer,
par_profile_name varchar = NULL::character varying,
par_account_id integer = NULL::integer,
par_account_name varchar = NULL::character varying,
par_sequence_number integer = NULL::integer,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_help_profile_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_help_profile_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-help-profile-sp-transact-sql?view=sql-server-ver15).

```
par_profile_id integer = NULL::integer,
par_profile_name varchar = NULL::character varying,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_update_profile_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_update_profile_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-update-profile-sp-transact-sql?view=sql-server-ver15).

```
par_profile_id integer = NULL::integer,
par_profile_name varchar = NULL::character varying,
par_description varchar = NULL::character varying,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_delete_profile_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_delete_profile_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-delete-profile-sp-transact-sql?view=sql-server-ver15).

```
par_profile_id integer = NULL::integer,
par_profile_name varchar = NULL::character varying,
par_force_delete smallint = 1,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_help_account_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_help_account_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-help-account-sp-transact-sql?view=sql-server-ver15).

```
par_account_id integer = NULL::integer,
par_account_name varchar = NULL::character varying,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_update_account_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_update_account_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-update-account-sp-transact-sql?view=sql-server-ver15).

```
par_account_id integer = NULL::integer,
par_account_name varchar = NULL::character varying,
par_email_address varchar = NULL::character varying,
par_display_name varchar = NULL::character varying,
par_replyto_address varchar = NULL::character varying,
par_description varchar = NULL::character varying,
par_mailserver_name varchar = NULL::character varying,
par_mailserver_type varchar = NULL::character varying,
par_port integer = NULL::integer,
par_username varchar = NULL::character varying,
par_password varchar = NULL::character varying,
par_use_default_credentials smallint = NULL::smallint,
par_enable_ssl smallint = NULL::smallint,
par_timeout integer = NULL::integer,
par_no_credential_change smallint = NULL::smallint,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_delete_account_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_delete_account_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-delete-account-sp-transact-sql?view=sql-server-ver15).

```
par_account_id integer = NULL::integer,
par_account_name varchar = NULL::character varying,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_help_profileaccount_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_help_profileaccount_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-help-profileaccount-sp-transact-sql?view=sql-server-ver15).

```
par_profile_id integer = NULL::integer,
par_profile_name varchar = NULL::character varying,
par_account_id integer = NULL::integer,
par_account_name varchar = NULL::character varying,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_update_profileaccount_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_update_profileaccount_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-update-profileaccount-sp-transact-sql?view=sql-server-ver15).

```
par_profile_id integer = NULL::integer,
par_profile_name varchar = NULL::character varying,
par_account_id integer = NULL::integer,
par_account_name varchar = NULL::character varying,
par_sequence_number integer = NULL::integer,
out returncode integer
```

The `aws_sqlserver_ext.sysmail_delete_profileaccount_sp` procedure in the extension pack emulates the `msdb.dbo.sysmail_delete_profileaccount_sp` procedure. For more information about the source SQL Server Database Mail procedure, see [Microsoft technical documentation](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-delete-profileaccount-sp-transact-sql?view=sql-server-ver15).

```
par_profile_id integer = NULL::integer,
par_profile_name varchar = NULL::character varying,
par_account_id integer = NULL::integer,
par_account_name varchar = NULL::character varying,
out returncode integer
```
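
For example, the following sketch removes the account named 'Audit Account' from the 'Administrator' profile. Both names are illustrative.

```
PERFORM aws_sqlserver_ext.sysmail_delete_profileaccount_sp (
    par_profile_name := 'Administrator',
    par_account_name := 'Audit Account'
);
```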

## Examples for using procedures that emulate SQL Server Database Mail in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Mail.Examples"></a>

To send an email, use the `aws_sqlserver_ext.sp_send_dbmail` procedure as shown following.

```
PERFORM sp_send_dbmail (
    par_profile_name := 'Administrator',
    par_recipients := 'hello@rusgl.info',
    par_subject := 'Automated Success Message',
    par_body := 'The stored procedure finished'
);
```

The following example shows how to send an email with query results.

```
PERFORM sp_send_dbmail (
    par_profile_name := 'Administrator',
    par_recipients := 'hello@rusgl.info',
    par_subject := 'Account with id = 1',
    par_query := 'SELECT COUNT(*) FROM Account WHERE id = 1'
);
```

The following example shows how to send an email with HTML code.

```
DECLARE var_tableHTML TEXT;
var_tableHTML := CONCAT(
    '<H1>Work Order Report</H1>',
    '<table border="1">',
    '<tr><th>Work Order ID</th><th>Product ID</th>',
    '<th>Name</th><th>Order Qty</th><th>Due Date</th>',
    '<th>Expected Revenue</th></tr>',
    '</table>'
);
PERFORM sp_send_dbmail (
    par_recipients := 'hello@rusgl.info',
    par_subject := 'Work Order List',
    par_body := var_tableHTML,
    par_body_format := 'HTML'
);
```

To delete emails, use the `aws_sqlserver_ext.sysmail_delete_mailitems_sp` procedure as shown following.

```
DECLARE var_GETDATE timestamp;
var_GETDATE := NOW();
PERFORM sysmail_delete_mailitems_sp (
    par_sent_before := var_GETDATE
);
```

The following example shows how to delete the oldest emails.

```
PERFORM sysmail_delete_mailitems_sp (
    par_sent_before := '2015-12-31'
);
```

The following example shows how to delete all emails that can't be sent.

```
PERFORM sysmail_delete_mailitems_sp (
    par_sent_status := 'failed'
);
```

To create a new user profile, use the `aws_sqlserver_ext.sysmail_add_profile_sp` procedure as shown following.

```
PERFORM sysmail_add_profile_sp (
    par_profile_name := 'Administrator',
    par_description := 'administrative mail'
);
```

The following example shows how to create a new profile and save the unique profile identifier in a variable.

```
DECLARE var_profileId INT;
SELECT par_profile_id
    FROM sysmail_add_profile_sp (
        par_profile_name := 'Administrator',
        par_description := 'Profile used for administrative mail.')
    INTO var_profileId;
    
SELECT var_profileId;
```

To create a new email account, use the `aws_sqlserver_ext.sysmail_add_account_sp` procedure as shown following.

```
PERFORM sysmail_add_account_sp (
    par_account_name :='Audit Account',
    par_email_address := 'dba@rusgl.info',
    par_display_name := 'Test Automated Mailer',
    par_description := 'Account for administrative e-mail.',
    par_mailserver_type := 'AWSLAMBDA',
    par_mailserver_name := 'arn:aws:lambda:us-west-2:555555555555:function:pg_v3'
);
```

To add an email account to the user profile, use the `aws_sqlserver_ext.sysmail_add_profileaccount_sp` procedure as shown following.

```
PERFORM sysmail_add_profileaccount_sp (
    par_profile_name := 'Administrator',
    par_account_name := 'Audit Account',
    par_sequence_number := 1
);
```

## Use case examples for emulating SQL Server Database Mail in PostgreSQL
<a name="CHAP_Source.SQLServer.ToPostgreSQL.ExtensionPack.Mail.UseCases"></a>

If your source database code uses SQL Server Database Mail to send emails, you can use the AWS SCT extension pack to convert this code to PostgreSQL.

**To send an email from your PostgreSQL database**

1. Create and configure your AWS Lambda function.

1. Apply the AWS SCT extension pack.

1. Create a user profile using the `sysmail_add_profile_sp` function.

1. Create an email account using the `sysmail_add_account_sp` function.

1. Add this email account to your user profile using the `sysmail_add_profileaccount_sp` function. The following example combines these three steps.

   ```
   CREATE OR REPLACE FUNCTION aws_sqlserver_ext.
   proc_dbmail_settings_msdb()
   RETURNS void
   AS
   $BODY$
   BEGIN
   PERFORM aws_sqlserver_ext.sysmail_add_profile_sp(
       par_profile_name := 'Administrator',
       par_description := 'administrative mail'
   );
   PERFORM aws_sqlserver_ext.sysmail_add_account_sp(
       par_account_name := 'Audit Account',
       par_description := 'Account for administrative e-mail.',
       par_email_address := 'dba@rusgl.info',
       par_display_name := 'Test Automated Mailer',
       par_mailserver_type := 'AWSLAMBDA',
       par_mailserver_name := 'your_ARN'
   );
   PERFORM aws_sqlserver_ext.sysmail_add_profileaccount_sp(
       par_profile_name := 'Administrator',
       par_account_name := 'Audit Account',
       par_sequence_number := 1
   );
   END;
   $BODY$
   LANGUAGE plpgsql;
   ```

1. Send an email using the `sp_send_dbmail` function as shown following.

   ```
   CREATE OR REPLACE FUNCTION aws_sqlserver_ext.
   proc_dbmail_send_msdb()
   RETURNS void
   AS
   $BODY$
   BEGIN
   PERFORM aws_sqlserver_ext.sp_send_dbmail(
       par_profile_name := 'Administrator',
       par_recipients := 'hello@rusgl.info',
       par_body := 'The stored procedure finished',
       par_subject := 'Automated Success Message'
   );
   END;
   $BODY$
   LANGUAGE plpgsql;
   ```

To view the information about all user profiles, use the `sysmail_help_profile_sp` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sysmail_help_profile_sp();
```

The following example displays the information about the specific user profile.

```
SELECT * FROM aws_sqlserver_ext.sysmail_help_profile_sp(par_profile_id := 1);
SELECT * FROM aws_sqlserver_ext.sysmail_help_profile_sp(par_profile_name := 'Administrator');
```

To view the information about all email accounts, use the `sysmail_help_account_sp` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sysmail_help_account_sp();
```

The following example displays the information about the specific email account.

```
SELECT * FROM aws_sqlserver_ext.sysmail_help_account_sp(par_account_id := 1);
SELECT * FROM aws_sqlserver_ext.sysmail_help_account_sp(par_account_name := 'Audit Account');
```

To view the information about all email accounts that are associated with the user profiles, use the `sysmail_help_profileaccount_sp` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sysmail_help_profileaccount_sp();
```

The following example filters the records by identifier, profile name, or account name.

```
SELECT * FROM aws_sqlserver_ext.sysmail_help_profileaccount_sp(par_profile_id := 1);
SELECT * FROM aws_sqlserver_ext.sysmail_help_profileaccount_sp(par_profile_id := 1, par_account_id := 1);
SELECT * FROM aws_sqlserver_ext.sysmail_help_profileaccount_sp(par_profile_name := 'Administrator');
SELECT * FROM aws_sqlserver_ext.sysmail_help_profileaccount_sp(par_account_name := 'Audit Account');
```

To change the user profile name or description, use the `sysmail_update_profile_sp` procedure as shown following.

```
SELECT aws_sqlserver_ext.sysmail_update_profile_sp(
    par_profile_id := 2,
    par_profile_name := 'New profile name'
);
```

To change the email account settings, use the `sysmail_update_account_sp` procedure as shown following.

```
SELECT * FROM aws_sqlserver_ext.sysmail_update_account_sp (
    par_account_name := 'Audit Account',
    par_mailserver_name := 'arn:aws:lambda:region:XXXXXXXXXXXX:function:func_test',
    par_mailserver_type := 'AWSLAMBDA'
);
```

# Migrating from SQL Server to Amazon RDS for SQL Server with AWS Schema Conversion Tool
<a name="CHAP_Source.SQLServer.ToRDSSQLServer"></a>

Consider the following when migrating a SQL Server schema and code to Amazon RDS for SQL Server: 
+ AWS SCT can convert SQL Server Agent to provide schedules, alerts, and jobs on an Amazon RDS for SQL Server DB instance. After conversion, you can use an Amazon RDS for SQL Server DB instance with SQL Server Reporting Services (SSRS), SQL Server Analysis Services (SSAS), and SQL Server Integration Services (SSIS).
+ Amazon RDS currently doesn’t support SQL Server Service Broker or additional T-SQL endpoints that require you to run the CREATE ENDPOINT command.
+ Amazon RDS has limited support for linked servers. When converting SQL Server application code that uses linked servers, AWS SCT converts the application code. However, make sure to review the behavior of objects that use linked servers before you run the converted code.
+ The AWS SCT assessment report provides server metrics for the conversion. These metrics about your SQL Server instance include the following:
  + Always On availability groups are used.
  + Data mirroring is used.
  + SQL Server Log Shipping is configured.
  + Failover cluster is used.
  + Database Mail is configured. 
  + Full Text Search Service is used. Amazon RDS for SQL Server has limited full-text search, and it doesn't support semantic search.
  + Data Quality Services (DQS) is installed. Amazon RDS doesn't support DQS, so we recommend that you install SQL Server on an Amazon EC2 instance.

## Privileges for RDS for SQL Server as a target
<a name="CHAP_Source.SQLServer.ToRDSSQLServer.ConfigureTarget"></a>

To migrate to RDS for SQL Server, create a database user and then grant the required privileges for each database. You can use the following code example.

```
CREATE LOGIN user_name WITH PASSWORD = 'your_password';

USE db_name;
CREATE USER user_name FOR LOGIN user_name;
GRANT VIEW DEFINITION TO user_name;
GRANT VIEW DATABASE STATE TO user_name;
GRANT CREATE SCHEMA TO user_name;
GRANT CREATE TABLE TO user_name;
GRANT CREATE VIEW TO user_name;
GRANT CREATE TYPE TO user_name;
GRANT CREATE DEFAULT TO user_name;
GRANT CREATE FUNCTION TO user_name;
GRANT CREATE PROCEDURE TO user_name;
GRANT CREATE ASSEMBLY TO user_name;
GRANT CREATE AGGREGATE TO user_name;
GRANT CREATE FULLTEXT CATALOG TO user_name;
GRANT CREATE SYNONYM TO user_name;
GRANT CREATE XML SCHEMA COLLECTION TO user_name;
```

In the preceding example, replace *user\_name* with the name of your user. Then, replace *db\_name* with the name of your target database. Finally, replace *your\_password* with a secure password.

# Data warehouse sources for AWS Schema Conversion Tool
<a name="CHAP_Source-Data-Warehouses"></a>

AWS SCT can convert schemas for the following source data warehouses to a supported target. For information about permissions, connections, and what AWS SCT can convert for use with the target database or data warehouse, see details in the following.

**Topics**
+ [Connecting Amazon Redshift with the AWS Schema Conversion Tool](CHAP_Source.Redshift.md)
+ [Connecting Azure Synapse Analytics with AWS Schema Conversion Tool](CHAP_Source.AzureSynapse.md)
+ [Connecting to Google BigQuery with AWS Schema Conversion Tool](CHAP_Source.BigQuery.md)
+ [Connecting Greenplum Database with AWS Schema Conversion Tool](CHAP_Source.Greenplum.md)
+ [Connecting to Netezza with AWS Schema Conversion Tool](CHAP_Source.Netezza.md)
+ [Connecting Oracle Data Warehouse with AWS SCT](CHAP_Source.OracleDW.md)
+ [Connecting to a Snowflake data warehouse with AWS Schema Conversion Tool](CHAP_Source.Snowflake.md)
+ [Connecting to a SQL Server Data Warehouse with the AWS Schema Conversion Tool](CHAP_Source.SQLServerDW.md)
+ [Connecting to a Teradata Data Warehouse with the AWS Schema Conversion Tool](CHAP_Source.Teradata.md)
+ [Connecting the AWS Schema Conversion Tool to Vertica databases](CHAP_Source.Vertica.md)

# Connecting Amazon Redshift with the AWS Schema Conversion Tool
<a name="CHAP_Source.Redshift"></a>

You can use AWS SCT to optimize your Amazon Redshift cluster. AWS SCT provides recommendations on the selection of distribution and sort keys for your Amazon Redshift cluster. An Amazon Redshift optimization project is an AWS SCT project in which the source and target point to different Amazon Redshift clusters.

## Privileges for Amazon Redshift as a source database
<a name="CHAP_Source.Redshift.Permissions"></a>

The following privileges are required for using Amazon Redshift as a source:
+ USAGE ON SCHEMA *<schema\_name>*
+ SELECT ON ALL TABLES IN SCHEMA *<schema\_name>*
+ SELECT ON PG\_CATALOG.PG\_STATISTIC
+ SELECT ON SVV\_TABLE\_INFO
+ SELECT ON TABLE STV\_BLOCKLIST
+ SELECT ON TABLE STV\_TBL\_PERM
+ SELECT ON SYS\_SERVERLESS\_USAGE
+ SELECT ON PG\_DATABASE\_INFO
+ SELECT ON PG\_STATISTIC

In the preceding examples, replace the *<schema\_name>* placeholder with the name of the source schema.
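
As a sketch, you can grant these privileges to a database user as follows. Here, `sct_user` and `schema_name` are placeholders for your migration user and source schema.

```
GRANT USAGE ON SCHEMA schema_name TO sct_user;
GRANT SELECT ON ALL TABLES IN SCHEMA schema_name TO sct_user;
GRANT SELECT ON pg_catalog.pg_statistic TO sct_user;
GRANT SELECT ON svv_table_info TO sct_user;
GRANT SELECT ON TABLE stv_blocklist TO sct_user;
GRANT SELECT ON TABLE stv_tbl_perm TO sct_user;
GRANT SELECT ON sys_serverless_usage TO sct_user;
GRANT SELECT ON pg_database_info TO sct_user;
GRANT SELECT ON pg_statistic TO sct_user;
```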

For the privileges required for Amazon Redshift as a target, see [Permissions for Amazon Redshift as a target](CHAP_Converting.DW.md#CHAP_Converting.DW.ConfigureTarget).

## Connecting to Amazon Redshift as a source
<a name="CHAP_Source.Redshift.Connecting"></a>

Use the following procedure to connect to your Amazon Redshift source database with the AWS Schema Conversion Tool. 

**To connect to an Amazon Redshift source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Amazon Redshift**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the connection information for the Amazon Redshift source database, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Redshift.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## Amazon Redshift optimization settings
<a name="CHAP_Source.Redshift.ConversionSettings"></a>

To edit Amazon Redshift optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Amazon Redshift**, and then choose **Amazon Redshift – Amazon Redshift**. AWS SCT displays all available settings for Amazon Redshift optimization.

Amazon Redshift optimization settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the use of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if the number of tables is more than your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for the tables when you apply the converted code, then AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To choose the migration strategy.

  AWS recommends using different clusters as a source and target for your optimization project. Before the start of the Amazon Redshift optimization process, you create a copy of your source Amazon Redshift cluster. You can include your source data into this copy or create an empty cluster.

  For **Migration strategy**, choose **Migration to a copy** to include data from your source cluster in the target cluster.

  For **Migration strategy**, choose **Migration to a clean slate** to review the optimization suggestions. After you accept these suggestions, migrate your source data to the target cluster.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.
+ To work with automatic table optimization.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on automatic table optimization, choose **Optimization strategies** in the left pane. Then select **Use Amazon Redshift automatic table tuning**, and choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter a value (0–100). These values define the weight of each strategy. Using these weights, AWS SCT defines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**.
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with a skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skewed value for a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user whose query statistics you want to analyze.
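
The skew definition above can be checked manually. The following query sketch computes the skew percentage for a hypothetical column `col` in table `t`, as the share of rows that hold the most common value.

```
SELECT 100.0 * MAX(cnt) / SUM(cnt) AS skew_percent
FROM (
    SELECT col, COUNT(*) AS cnt
    FROM t
    GROUP BY col
) AS value_counts;
```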

# Connecting Azure Synapse Analytics with AWS Schema Conversion Tool
<a name="CHAP_Source.AzureSynapse"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Azure Synapse Analytics to Amazon Redshift. 

## Privileges for Azure Synapse Analytics as a source database
<a name="CHAP_Source.AzureSynapse.Permissions"></a>

The following privileges are required for using an Azure Synapse Analytics data warehouse as a source:
+ VIEW DEFINITION 
+ VIEW DATABASE STATE 

Apply the privileges for each database whose schema you are converting.
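
For example, you can run the following sketch in each source database, replacing `user_name` with your migration user.

```
GRANT VIEW DEFINITION TO user_name;
GRANT VIEW DATABASE STATE TO user_name;
```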

## Connecting to Azure Synapse Analytics as a source
<a name="CHAP_Source.AzureSynapse.Connecting"></a>

Use the following procedure to connect to your Azure Synapse Analytics data warehouse with the AWS Schema Conversion Tool. 

**To connect to an Azure Synapse Analytics data warehouse as a source**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Azure Synapse Analytics**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the connection information for the Azure Synapse Analytics data warehouse manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.AzureSynapse.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database.

1. Choose **Connect** to connect to your source database.

## Azure Synapse Analytics to Amazon Redshift conversion settings
<a name="CHAP_Source.AzureSynapse.ConversionSettings"></a>

To edit Azure Synapse Analytics to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Azure Synapse**, and then choose **Azure Synapse – Amazon Redshift**. AWS SCT displays all available settings for Azure Synapse Analytics to Amazon Redshift conversion.

Azure Synapse Analytics to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the use of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if the number of tables is more than your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for the tables when you apply the converted code, then AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To migrate partitions of the source table to separate tables in Amazon Redshift. To do so, select **Use the UNION ALL view** and enter the maximum number of target tables that AWS SCT can create for a single source table.

  Amazon Redshift doesn't support table partitioning. To emulate this behavior and make queries run faster, AWS SCT can migrate each partition of your source table to a separate table in Amazon Redshift. Then, AWS SCT creates a view that includes data from all these tables.

  AWS SCT automatically determines the number of partitions in your source table. Depending on the type of source table partitioning, this number can exceed the quota for the tables that you can apply to your Amazon Redshift cluster. To avoid reaching this quota, enter the maximum number of target tables that AWS SCT can create for partitions of a single source table. The default option is 368 tables, which represents a partition for 366 days of a year and two tables for `NO RANGE` and `UNKNOWN` partitions.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.
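
The UNION ALL view emulation described above can be sketched as follows. The table and view names are illustrative.

```
-- Each source partition becomes its own Amazon Redshift table.
CREATE TABLE sales_2023_q1 (sale_id integer, sale_date date, amount numeric(12,2));
CREATE TABLE sales_2023_q2 (sale_id integer, sale_date date, amount numeric(12,2));

-- A view combines the partition tables so queries see one logical table.
CREATE VIEW sales AS
SELECT * FROM sales_2023_q1
UNION ALL
SELECT * FROM sales_2023_q2;
```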

## Azure Synapse Analytics to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.AzureSynapse.ConversionOptimizationSettings"></a>

To edit Azure Synapse Analytics to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Azure Synapse**, and then choose **Azure Synapse – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for Azure Synapse Analytics to Amazon Redshift conversion.

Azure Synapse Analytics to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter a value (0–100). These values define the weight of each strategy. Using these weights, AWS SCT defines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with a skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skewed value for a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

# Connecting to Google BigQuery with AWS Schema Conversion Tool
<a name="CHAP_Source.BigQuery"></a>

You can use AWS SCT to convert schemas, code objects, and application code from BigQuery to Amazon Redshift. 

## Privileges for BigQuery as a source
<a name="CHAP_Source.BigQuery.Permissions"></a>

To use a BigQuery data warehouse as a source in AWS SCT, create a service account. In Google Cloud, applications use service accounts to make authorized API calls. Service accounts differ from user accounts. For more information, see [Service accounts](https://cloud.google.com/iam/docs/service-accounts) in the Google Cloud Identity and Access Management documentation.

Make sure that you grant the following roles to your service account:
+ `BigQuery Admin`
+ `Storage Admin`

The `BigQuery Admin` role provides permissions to manage all resources within the project. AWS SCT uses this role to load your BigQuery metadata in the migration project.

The `Storage Admin` role grants full control of data objects and buckets. You can find this role under `Cloud Storage`. AWS SCT uses this role to extract your data from BigQuery and then load it into Amazon Redshift.

**To create a service account key file**

1. Sign in to the Google Cloud management console at [https://console.cloud.google.com/](https://console.cloud.google.com/).

1. On the [BigQuery API](https://console.cloud.google.com/apis/library/bigquery.googleapis.com) page, choose **Enable**. Skip this step if you see **API Enabled**.

1. On the [Service accounts](https://console.cloud.google.com/iam-admin/serviceaccounts) page, choose your project, and then choose **Create service account**.

1. On the **Service account details** page, enter a descriptive value for **Service account name**. Choose **Create and continue**. The **Grant this service account access to the project** page opens. 

1. For **Select a role**, choose **BigQuery**, and then choose **BigQuery Admin**. 

1. Choose **Add another role**. For **Select a role**, choose **Cloud Storage**, and then choose **Storage Admin**. 

1. Choose **Continue**, and then choose **Done**. 

1. On the [Service accounts](https://console.cloud.google.com/iam-admin/serviceaccounts) page, choose the service account that you created.

1. Choose **Keys**. Then, for **Add key**, choose **Create new key**.

1. Choose **JSON**, and then choose **Create**. Choose the folder to save your private key or select the default folder for downloads in your browser.

To extract data from a BigQuery data warehouse, AWS SCT uses the Google Cloud Storage bucket folder. Create this bucket before you start data migration. Enter the path to your Google Cloud Storage bucket folder in the **Create Local task** dialog box. For more information, see [Creating, running, and monitoring an AWS SCT task](agents.md#agents.Tasks).

## Connecting to BigQuery as a source
<a name="CHAP_Source.BigQuery.Connecting"></a>

Use the following procedure to connect to your source BigQuery project with the AWS Schema Conversion Tool.

**To connect to a BigQuery source data warehouse**

1. In the AWS Schema Conversion Tool, choose **Add source**.

1. Choose **BigQuery**, then choose **Next**.

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your BigQuery project. AWS SCT displays this name in the tree in the left panel.

1. For **Key path**, enter the path to the service account key file. For more information about creating this file, see [Privileges for BigQuery as a source](#CHAP_Source.BigQuery.Permissions). 

1. Choose **Test Connection** to verify that AWS SCT can connect to your source BigQuery project. 

1. Choose **Connect** to connect to your source BigQuery project.

## Limitations on using BigQuery as a source for AWS SCT
<a name="CHAP_Source.BigQuery.Limitations"></a>

The following limitations apply when using BigQuery as a source for AWS SCT:
+ AWS SCT doesn't support the conversion of subqueries in analytic functions.
+ You can't use AWS SCT to convert BigQuery `SELECT AS STRUCT` and `SELECT AS VALUE` statements.
+ AWS SCT doesn't support the conversion of the following types of functions:
  + Approximate aggregate
  + Bit
  + Debugging
  + Federated query
  + Geography
  + Hash
  + Mathematical
  + Net
  + Statistical aggregate
  + UUID
+ AWS SCT provides limited support for the conversion of string functions. 
+ AWS SCT doesn't support the conversion of `UNNEST` operators.
+ You can't convert correlated join operations in AWS SCT.
+ AWS SCT doesn't support the conversion of `QUALIFY`, `WINDOW`, `LIMIT`, and `OFFSET` clauses.
+ You can't use AWS SCT to convert recursive common table expressions.
+ AWS SCT doesn't support the conversion of `INSERT` statements with subqueries inside `VALUES` clauses.
+ AWS SCT doesn't support the conversion of `UPDATE` statements for nested fields and repeated records.
+ You can't use AWS SCT to convert `STRUCT` and `ARRAY` data types.

## BigQuery to Amazon Redshift conversion settings
<a name="CHAP_Source.BigQuery.ConversionSettings"></a>

To edit BigQuery to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Google BigQuery**, and then choose **Google BigQuery – Amazon Redshift**. AWS SCT displays all available settings for BigQuery to Amazon Redshift conversion.

BigQuery to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the number of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if this is more than your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for the tables when you apply the converted code, then AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.
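As an illustration, a converted table might carry explicit column encodings like the following. The table and column names are hypothetical; `az64`, `lzo`, and `raw` are standard Amazon Redshift encodings. The sort key column stays `raw` unless you select **Use compression encoding for KEY columns**.

```
CREATE TABLE orders (
    order_id   INTEGER      ENCODE az64,
    order_note VARCHAR(256) ENCODE lzo,
    order_date DATE         ENCODE raw SORTKEY
);
```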

## BigQuery to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.BigQuery.ConversionOptimizationSettings"></a>

To edit BigQuery to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Google BigQuery**, and then choose **Google BigQuery – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for BigQuery to Amazon Redshift conversion.

BigQuery to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter the value (0–100). These values define the weight of each strategy. Using these weight values, AWS SCT defines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with the skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skewed value for a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

  Also, on the **Optimization strategies** tab, you can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to consider it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
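The skew metric that the **Skewed threshold value** setting uses can be sketched in Python. This is a hypothetical helper for illustration, not part of AWS SCT; the function and variable names are invented.

```python
from collections import Counter

def skew_percentage(column_values):
    """Percentage ratio of the most common value's count to the
    total number of records, as the skew metric is described above."""
    if not column_values:
        return 0.0
    most_common_count = Counter(column_values).most_common(1)[0][1]
    return 100.0 * most_common_count / len(column_values)

def distribution_key_candidates(columns, threshold):
    """Keep only columns whose skew does not exceed the threshold;
    the rest are excluded as distribution key candidates."""
    return [name for name, values in columns.items()
            if skew_percentage(values) <= threshold]
```

For example, a column in which one value fills 90 of 100 rows has a skew of 90 percent and is excluded under a threshold of 80.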

# Connecting Greenplum Database with AWS Schema Conversion Tool
<a name="CHAP_Source.Greenplum"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Greenplum Database to Amazon Redshift. 

## Privileges for Greenplum Database as a source
<a name="CHAP_Source.Greenplum.Permissions"></a>

The following privileges are required for using Greenplum Database as a source:
+ CONNECT ON DATABASE *<database\_name>* 
+ USAGE ON SCHEMA *<schema\_name>* 
+ SELECT ON *<schema\_name>.<table\_name>* 
+ SELECT ON SEQUENCE *<schema\_name>.<sequence\_name>* 

In the preceding example, replace the placeholders as follows:
+ Replace *database\_name* with the name of the source database.
+ Replace *schema\_name* with the name of the source schema.
+ Replace *table\_name* with the name of the source table.
+ Replace *sequence\_name* with the name of the source sequence.
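For example, the following statements grant these privileges to a user. The names `sales_db`, `sales`, `orders`, `orders_id_seq`, and `sct_user` are hypothetical placeholders; substitute your own object and user names.

```
GRANT CONNECT ON DATABASE sales_db TO sct_user;
GRANT USAGE ON SCHEMA sales TO sct_user;
GRANT SELECT ON sales.orders TO sct_user;
GRANT SELECT ON SEQUENCE sales.orders_id_seq TO sct_user;
```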

## Connecting to Greenplum Database as a source
<a name="CHAP_Source.Greenplum.Connecting"></a>

Use the following procedure to connect to your Greenplum source database with AWS SCT.

**To connect to a Greenplum source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Greenplum**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Greenplum source database credentials manually, use the following instructions:  
****    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Greenplum.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## Greenplum to Amazon Redshift conversion settings
<a name="CHAP_Source.Greenplum.ConversionSettings"></a>

To edit Greenplum to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Greenplum**, and then choose **Greenplum – Amazon Redshift**. AWS SCT displays all available settings for Greenplum to Amazon Redshift conversion.

Greenplum to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the number of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if this is more than your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for the tables when you apply the converted code, then AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To migrate partitions of the source table to separate tables in Amazon Redshift. To do so, select **Use the UNION ALL view** and enter the maximum number of target tables that AWS SCT can create for a single source table.

  Amazon Redshift doesn't support table partitioning. To emulate this behavior and make queries run faster, AWS SCT can migrate each partition of your source table to a separate table in Amazon Redshift. Then, AWS SCT creates a view that includes data from all these tables.

  AWS SCT automatically determines the number of partitions in your source table. Depending on the type of source table partitioning, this number can exceed the quota for the tables that you can apply to your Amazon Redshift cluster. To avoid reaching this quota, enter the maximum number of target tables that AWS SCT can create for partitions of a single source table. The default option is 368 tables: one table for each of the 366 days of a leap year, plus two tables for `NO RANGE` and `UNKNOWN` partitions.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.
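To illustrate the **Use the UNION ALL view** option, the following is a minimal sketch of the kind of structure this emulation produces for a source table with two range partitions. The table and column names are hypothetical.

```
CREATE TABLE sales_2023 (id INTEGER, sale_date DATE, amount NUMERIC(10,2));
CREATE TABLE sales_2024 (id INTEGER, sale_date DATE, amount NUMERIC(10,2));

CREATE VIEW sales AS
SELECT * FROM sales_2023
UNION ALL
SELECT * FROM sales_2024;
```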

## Greenplum to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.Greenplum.ConversionOptimizationSettings"></a>

To edit Greenplum to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Greenplum**, and then choose **Greenplum – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for Greenplum to Amazon Redshift conversion.

Greenplum to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter the value (0–100). These values define the weight of each strategy. Using these weight values, AWS SCT defines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with the skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skewed value for a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

  Also, on the **Optimization strategies** tab, you can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to consider it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.

# Connecting to Netezza with AWS Schema Conversion Tool
<a name="CHAP_Source.Netezza"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Netezza to Amazon Redshift. 

## Privileges for Netezza as a source
<a name="CHAP_Source.Netezza.Permissions"></a>

The following privileges are required for using Netezza as a source:
+ select on system.definition\_schema.system view
+ select on system.definition\_schema.system table
+ select on system.definition\_schema.management table
+ list on *<database\_name>*
+ list on *<schema\_name>*
+ list on *<database\_name>*.all.table
+ list on *<database\_name>*.all.external table
+ list on *<database\_name>*.all.view
+ list on *<database\_name>*.all.materialized view
+ list on *<database\_name>*.all.procedure
+ list on *<database\_name>*.all.sequence
+ list on *<database\_name>*.all.function
+ list on *<database\_name>*.all.aggregate

In the preceding example, replace the placeholders as follows:
+ Replace *database\_name* with the name of the source database.
+ Replace *schema\_name* with the name of the source schema.
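As a sketch, you might grant the list privileges with statements like the following, where `sales_db` and `sct_user` are hypothetical names. Netezza grants object-class privileges with the `database.ALL.class` form, as the `GRANT SELECT` example later in this chapter also shows.

```
GRANT LIST ON sales_db TO sct_user;
GRANT LIST ON sales_db.ALL.TABLE TO sct_user;
GRANT LIST ON sales_db.ALL.VIEW TO sct_user;
GRANT LIST ON sales_db.ALL.PROCEDURE TO sct_user;
```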

AWS SCT requires access to the following system tables and views. You can grant access to these objects instead of granting access to `system.definition_schema.system view` and `system.definition_schema.system tables` in the preceding list.
+ select on system.definition\_schema.\_t\_aggregate
+ select on system.definition\_schema.\_t\_class
+ select on system.definition\_schema.\_t\_constraint
+ select on system.definition\_schema.\_t\_const\_relattr
+ select on system.definition\_schema.\_t\_database
+ select on system.definition\_schema.\_t\_grpobj\_priv
+ select on system.definition\_schema.\_t\_grpusr
+ select on system.definition\_schema.\_t\_hist\_config
+ select on system.definition\_schema.\_t\_object
+ select on system.definition\_schema.\_t\_object\_classes
+ select on system.definition\_schema.\_t\_proc
+ select on system.definition\_schema.\_t\_type
+ select on system.definition\_schema.\_t\_user
+ select on system.definition\_schema.\_t\_usrobj\_priv
+ select on system.definition\_schema.\_vt\_sequence
+ select on system.definition\_schema.\_v\_aggregate
+ select on system.definition\_schema.\_v\_constraint\_depends
+ select on system.definition\_schema.\_v\_database
+ select on system.definition\_schema.\_v\_datatype
+ select on system.definition\_schema.\_v\_dslice
+ select on system.definition\_schema.\_v\_function
+ select on system.definition\_schema.\_v\_group
+ select on system.definition\_schema.\_v\_obj\_relation
+ select on system.definition\_schema.\_v\_obj\_relation\_xdb
+ select on system.definition\_schema.\_v\_procedure
+ select on system.definition\_schema.\_v\_relation\_column
+ select on system.definition\_schema.\_v\_relation\_keydata
+ select on system.definition\_schema.\_v\_relobjclasses
+ select on system.definition\_schema.\_v\_schema\_xdb
+ select on system.definition\_schema.\_v\_sequence
+ select on system.definition\_schema.\_v\_synonym
+ select on system.definition\_schema.\_v\_system\_info
+ select on system.definition\_schema.\_v\_sys\_constraint
+ select on system.definition\_schema.\_v\_sys\_object\_dslice\_info
+ select on system.definition\_schema.\_v\_sys\_user
+ select on system.definition\_schema.\_v\_table
+ select on system.definition\_schema.\_v\_table\_constraint
+ select on system.definition\_schema.\_v\_table\_dist\_map
+ select on system.definition\_schema.\_v\_table\_organize\_column
+ select on system.definition\_schema.\_v\_table\_storage\_stat
+ select on system.definition\_schema.\_v\_user
+ select on system.definition\_schema.\_v\_view
+ select on system.information\_schema.\_v\_relation\_column
+ select on system.information\_schema.\_v\_table
+ select on $hist\_column\_access\_N

## Connecting to Netezza as a source
<a name="CHAP_Source.Netezza.Connecting"></a>

Use the following procedure to connect to your Netezza source database with the AWS Schema Conversion Tool. 

**To connect to a Netezza source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Netezza**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Netezza source database connection information manually, use the following instructions:  
****    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Netezza.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## Configuring ongoing data replication
<a name="CHAP_Source.Netezza.CDC"></a>

After you convert your Netezza database schemas and apply them to your Amazon Redshift database, you can migrate data with AWS SCT data extraction agents. The agent extracts your data and uploads it to your Amazon S3 bucket. You can then use AWS SCT to copy the data from Amazon S3 to Amazon Redshift.

If data in your source database changes during the migration process, you can capture ongoing changes with your AWS SCT data extraction agents. Then you can replicate these ongoing changes in your target database after you complete the initial data migration. This process is called ongoing data replication or *change data capture* (CDC).

**To configure ongoing data replication for migrations from Netezza to Amazon Redshift**

1. In your source database, create a history database. You can use the following code example in the Netezza command line interface (CLI).

   ```
   nzhistcreatedb -d history_database_name -t query -v 1 -u load_user -o histdb_owner -p your_password
   ```

   In the preceding example, replace *history\_database\_name* with the name of your history database. Next, replace *load\_user* with the name of the user that you have defined to load history data to the database. Then, replace *histdb\_owner* with the name of the user that you have defined as the owner of the history database. Make sure that you have already created this user and granted the `CREATE DATABASE` permission. Finally, replace *your\_password* with a secure password.

1. Configure the history logging. To do so, use the following code example.

   ```
   CREATE HISTORY CONFIGURATION history_configuration_name HISTTYPE QUERY
       DATABASE history_database_name USER load_user PASSWORD your_password COLLECT PLAN, COLUMN
       LOADINTERVAL 1 LOADMINTHRESHOLD 0 LOADMAXTHRESHOLD 0 STORAGELIMIT 25
       LOADRETRY 2 VERSION 1;
   ```

   In the preceding example, replace *history\_configuration\_name* and *history\_database\_name* with the names of your history configuration and your history database. Next, replace *load\_user* with the name of the user that you have defined to load history data to the database. Then, replace *your\_password* with a secure password.

1. Grant read permissions for all tables in the history database. You can use the following code example to grant the `SELECT` permission.

   ```
   GRANT SELECT ON history_database_name.ALL.TABLE TO your_user;
   ```

   In the preceding example, replace *history\_database\_name* with the name of your history database. Next, replace *your\_user* with the name of the user with minimal permissions to work with your Netezza database. You use the credentials of this database user in AWS SCT.

1. Collect statistics for each table in your source schema to get the information about the cardinality of columns. You can use the following command to generate statistics in your history database.

   ```
   GENERATE STATISTICS on "schema_name"."table_name";
   ```

   In the preceding example, replace *schema\_name* and *table\_name* with the name of your database schema and table.

1. Make sure that you completed the prerequisites by running the following query:

   ```
   SELECT COUNT(*) FROM history_database_name.history_schema_name."$hist_column_access_N";
   ```

   In the preceding example, replace *history\_database\_name* and *history\_schema\_name* with the name of your history database and schema. Next, replace *N* with the version number of your history database. For more information about history database versions, see the [IBM Netezza Documentation](https://www.ibm.com/docs/en/netezza?topic=history-database-versions).

1. Install your data extraction agents. For more information, see [Installing extraction agents](agents.md#agents.Installing).

   Make sure that the `{working.folder}` parameter in the `settings.properties` file for all extractor instances points to the same folder. In this case, your extractors can coordinate the CDC session and use a single transaction point for all subtasks.
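   For example, a `settings.properties` fragment on each extractor instance might look like the following. The folder path is a hypothetical example; use the shared folder that you created for your extractors.

   ```
   # Must be identical in settings.properties on every extractor instance
   # so that the extractors can coordinate the CDC session
   working.folder=/usr/local/sct_extractor/shared
   ```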

1. Register your data extraction agent. For more information, see [Registering extraction agents with the AWS Schema Conversion Tool](agents.md#agents.Using).

1. Create your CDC task. For more information, see [Creating, running, and monitoring an AWS SCT task](agents.md#agents.Tasks).

   1. Open your project in AWS SCT. In the left pane, choose your source table. Open the context (right-click) menu, and choose **Create local task**.

   1. For **Task name**, enter a descriptive name for your data migration task.

   1. For **Migration mode**, choose **Extract, upload, and copy**.

   1. Select **Enable CDC**.

   1. Choose the **CDC settings** tab and define the scope and the schedule of CDC sessions.

   1. Choose **Test task** to verify that you can connect to your working folder, Amazon S3 bucket, and Amazon Redshift data warehouse.

   1. Choose **Create** to create your task.

   1. Choose the **Tasks** tab, choose your task from the list, and choose **Start**.

1. The AWS SCT task maintains transactional consistency on the target database. The data extraction agent replicates transactions from the source in transaction ID order. 

   If you stop any of the migration sessions, or if a session fails, then CDC processing also stops.

## Netezza to Amazon Redshift conversion settings
<a name="CHAP_Source.Netezza.ConversionSettings"></a>

To edit Netezza to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Netezza**, and then choose **Netezza – Amazon Redshift**. AWS SCT displays all available settings for Netezza to Amazon Redshift conversion.

Netezza to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the number of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if this is more than your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for the tables when you apply the converted code, then AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.

## Netezza to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.Netezza.ConversionOptimizationSettings"></a>

To edit Netezza to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Netezza**, and then choose **Netezza – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for Netezza to Amazon Redshift conversion.

Netezza to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter the value (0–100). These values define the weight of each strategy. Using these weight values, AWS SCT defines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with a skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skew value for a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

  Also, on the **Optimization strategies** tab, you can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to consider it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
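
The skew definition above is simple enough to sketch in a few lines. The following Python snippet is an illustrative calculation of the skew percentage (the most common value's share of all records), not AWS SCT's internal implementation:

```python
from collections import Counter

def skew_percent(values):
    # Skew of a column: occurrences of the most common value as a
    # percentage of the total number of records.
    counts = Counter(values)
    most_common_count = counts.most_common(1)[0][1]
    return 100.0 * most_common_count / len(values)

# A column dominated by one value is a poor distribution key candidate:
# with a threshold of, say, 50, this hypothetical column is excluded.
region = ["us-east-1"] * 80 + ["eu-west-1"] * 15 + ["ap-south-1"] * 5
print(skew_percent(region))  # 80.0
```

A heavily skewed distribution key would send most rows to a single node, which is why such columns are filtered out of the candidate list.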

# Connecting to Oracle Data Warehouse with AWS SCT
<a name="CHAP_Source.OracleDW"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Oracle Data Warehouse to Amazon Redshift or Amazon Redshift and AWS Glue used in combination. 

## Privileges for Oracle Data Warehouse as a source
<a name="CHAP_Source.OracleDW.Permissions"></a>

The following privileges are required for using Oracle Data Warehouse as a source:
+ connect 
+ select_catalog_role 
+ select any dictionary 

## Connecting to Oracle Data Warehouse as a source
<a name="CHAP_Source.OracleDW.Connecting"></a>

Use the following procedure to connect to your Oracle data warehouse source database with the AWS Schema Conversion Tool. 

**To connect to an Oracle Data Warehouse source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Oracle**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Oracle source data warehouse connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.OracleDW.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## Oracle Data Warehouse to Amazon Redshift conversion settings
<a name="CHAP_Source.OracleDW.ConversionSettings"></a>

To edit Oracle Data Warehouse to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Oracle**, and then choose **Oracle – Amazon Redshift**. AWS SCT displays all available settings for Oracle Data Warehouse to Amazon Redshift conversion.

Oracle Data Warehouse to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the number of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if this is more than your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for the tables when you apply the converted code, then AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To migrate partitions of the source table to separate tables in Amazon Redshift. To do so, select **Use the UNION ALL view** and enter the maximum number of target tables that AWS SCT can create for a single source table.

  Amazon Redshift doesn't support table partitioning. To emulate this behavior and make queries run faster, AWS SCT can migrate each partition of your source table to a separate table in Amazon Redshift. Then, AWS SCT creates a view that includes data from all these tables.

  AWS SCT automatically determines the number of partitions in your source table. Depending on the type of source table partitioning, this number can exceed the quota for the tables that you can apply to your Amazon Redshift cluster. To avoid reaching this quota, enter the maximum number of target tables that AWS SCT can create for partitions of a single source table. The default option is 368 tables, which represents daily partitions for a 366-day year plus two tables for `NO RANGE` and `UNKNOWN` partitions.
+ To convert the data type formatting functions such as `TO_CHAR`, `TO_DATE`, and `TO_NUMBER` with datetime format elements that Amazon Redshift doesn't support. By default, AWS SCT uses the extension pack functions to emulate the usage of these unsupported format elements in the converted code.

  The datetime format model in Oracle includes more elements compared to datetime format strings in Amazon Redshift. When your source code includes only datetime format elements that Amazon Redshift supports, you don't need the extension pack functions in the converted code. To avoid using the extension pack functions in the converted code, select **Datetime format elements that you use in the Oracle code are similar to datetime format strings in Amazon Redshift**. In this case, the converted code works faster.

  The numeric format model in Oracle includes more elements compared to numeric format strings in Amazon Redshift. When your source code includes only numeric format elements that Amazon Redshift supports, you don't need the extension pack functions in the converted code. To avoid using the extension pack functions in the converted code, select **Numeric format elements that you use in the Oracle code are similar to numeric format strings in Amazon Redshift**. In this case, the converted code works faster.
+ To convert Oracle `LEAD` and `LAG` analytic functions. By default, AWS SCT raises an action item for each `LEAD` and `LAG` function.

  When your source code doesn't use the default values for offset in these functions, AWS SCT can emulate the usage of these functions with the `NVL` function. To do so, select **Use the NVL function to emulate the behavior of Oracle LEAD and LAG functions**.
+ To emulate the behavior of primary and unique keys in your Amazon Redshift cluster, select **Emulate the behavior of primary and unique keys**.

  Amazon Redshift doesn't enforce unique and primary keys and uses them for informational purposes only. If you use these constraints in your code, then make sure that AWS SCT emulates their behavior in the converted code.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.
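
As an illustration of the **Use the UNION ALL view** setting described above, the following Python sketch builds the view DDL that stitches the per-partition target tables back into a single relation. All table and view names here are hypothetical; AWS SCT uses its own naming scheme:

```python
def union_all_view_ddl(view_name, partition_tables):
    # Expose the per-partition tables as a single relation,
    # emulating source table partitioning on Amazon Redshift.
    selects = "\nUNION ALL\n".join(
        f"SELECT * FROM {table}" for table in partition_tables
    )
    return f"CREATE VIEW {view_name} AS\n{selects};"

# Hypothetical names: one table per source partition, plus a
# catch-all table for rows outside the defined ranges.
ddl = union_all_view_ddl(
    "sales", ["sales_2023_q1", "sales_2023_q2", "sales_no_range"]
)
print(ddl)
```

Queries against the view behave like queries against the original partitioned table, while each underlying table stays small enough to scan quickly.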

## Oracle Data Warehouse to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.OracleDW.ConversionOptimizationSettings"></a>

To edit Oracle Data Warehouse to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Oracle**, and then choose **Oracle – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for Oracle Data Warehouse to Amazon Redshift conversion.

Oracle Data Warehouse to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter the value (0–100). These values define the weight of each strategy. Using these weight values, AWS SCT defines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with a skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skew value for a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

  Also, on the **Optimization strategies** tab, you can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to consider it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.

# Connecting to a Snowflake data warehouse with AWS Schema Conversion Tool
<a name="CHAP_Source.Snowflake"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Snowflake to Amazon Redshift.

## Privileges for Snowflake as a source database
<a name="CHAP_Source.Snowflake.Permissions"></a>

You can create a role with the required privileges and grant this role to a user by using the `SECURITYADMIN` role and the `SECURITYADMIN` session context.

The following example creates a role with minimal privileges and grants it to the `min_privs` user. 

```
create role role_name;
grant role role_name to role sysadmin;
grant usage on database db_name to role role_name;
grant usage on schema db_name.schema_name to role role_name;             
grant usage on warehouse datawarehouse_name to role role_name;
grant monitor on database db_name to role role_name;
grant monitor on warehouse datawarehouse_name to role role_name;
grant select on all tables in schema db_name.schema_name to role role_name;
grant select on future tables in schema db_name.schema_name to role role_name;
grant select on all views in schema db_name.schema_name to role role_name;
grant select on future views in schema db_name.schema_name to role role_name;
grant select on all external tables in schema db_name.schema_name to role role_name;
grant select on future external tables in schema db_name.schema_name to role role_name;
grant usage on all sequences in schema db_name.schema_name to role role_name;
grant usage on future sequences in schema db_name.schema_name to role role_name;
grant usage on all functions in schema db_name.schema_name to role role_name;
grant usage on future functions in schema db_name.schema_name to role role_name;
grant usage on all procedures in schema db_name.schema_name to role role_name;
grant usage on future procedures in schema db_name.schema_name to role role_name;
create user min_privs password='real_user_password'  
DEFAULT_ROLE = role_name DEFAULT_WAREHOUSE = 'datawarehouse_name';
grant role role_name to user min_privs;
```

In the preceding example, replace the placeholders as follows:
+ Replace `role_name` with the name of a role with read-only privileges.
+ Replace `db_name` with the name of the source database.
+ Replace `schema_name` with the name of the source schema.
+ Replace `datawarehouse_name` with the name of the required data warehouse.
+ Replace `min_privs` with the name of a user that has minimal privileges.

The `DEFAULT_ROLE` and `DEFAULT_WAREHOUSE` parameters are case-sensitive.
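
If you script this setup, the placeholder substitution can be done with simple templating. The following Python sketch renders a shortened version of the grant script above; all names are examples, not required values:

```python
GRANT_TEMPLATE = """\
create role {role_name};
grant usage on database {db_name} to role {role_name};
grant usage on schema {db_name}.{schema_name} to role {role_name};
grant usage on warehouse {warehouse_name} to role {role_name};
grant role {role_name} to user {user_name};"""

def render_grants(role_name, db_name, schema_name, warehouse_name, user_name):
    # Substitute the placeholder names into the grant script.
    return GRANT_TEMPLATE.format(
        role_name=role_name, db_name=db_name, schema_name=schema_name,
        warehouse_name=warehouse_name, user_name=user_name,
    )

print(render_grants("sct_reader", "sales_db", "public",
                    "compute_wh", "min_privs"))
```

Running the rendered script in a `SECURITYADMIN` session produces the role and grants described above.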

## Configuring secure access to Amazon S3
<a name="CHAP_Source.Snowflake.IAM"></a>

Security and access management policies for an Amazon S3 bucket allow Snowflake to access, read data from, and write data to the S3 bucket. You can configure secure access to a private Amazon S3 bucket using the Snowflake `STORAGE INTEGRATION` object type. A Snowflake storage integration object delegates authentication responsibility to a Snowflake identity and access management entity.

For more information, see [Configuring a Snowflake Storage Integration to Access Amazon S3](https://docs.snowflake.com/en/user-guide/data-load-s3-config-storage-integration.html) in the Snowflake documentation.

## Connecting to Snowflake as a source
<a name="CHAP_Source.Snowflake.Connecting"></a>

Use the following procedure to connect to your source database with the AWS Schema Conversion Tool. 

**To connect to a Snowflake source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Snowflake**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Snowflake source data warehouse connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Snowflake.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## Limitations for Snowflake as a source
<a name="CHAP_Source.Snowflake.Limitations"></a>

The following are limitations when using Snowflake as a source for AWS SCT:
+ Object identifiers must be unique within the context of the object type and the parent object:  
**Database**  
Schema identifiers must be unique within a database.  
**Schemas**  
Object identifiers, such as those for tables and views, must be unique within a schema.  
**Tables/Views**  
Column identifiers must be unique within a table.
+ The maximum number of tables for large and xlarge cluster node types is 9,900. For 8xlarge cluster node types, the maximum number of tables is 100,000. The limit includes temporary tables, both user-defined and created by Amazon Redshift during query processing or system maintenance. For more information, see [Amazon Redshift quotas](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Cluster Management Guide*.
+ For stored procedures, the maximum number of input and output arguments is 32.

## Source data types for Snowflake
<a name="CHAP_Source.Snowflake.DataTypes"></a>

Following, you can find the Snowflake source data types that are supported when using AWS SCT and the default mapping to an Amazon Redshift target.


| Snowflake data types | Amazon Redshift data types | 
| --- | --- | 
|  NUMBER  |  NUMERIC(38)  | 
|  NUMBER(p)  |  If p <= 4, then SMALLINT. If 5 <= p <= 9, then INTEGER. If 10 <= p <= 18, then BIGINT. If p >= 19, then NUMERIC(p).  | 
|  NUMBER(p, 0)  |  If p <= 4, then SMALLINT. If 5 <= p <= 9, then INTEGER. If 10 <= p <= 18, then BIGINT. If p >= 19, then NUMERIC(p,0).  | 
|  NUMBER(p, s)  |  If 1 <= p <= 38 and 1 <= s <= 37, then NUMERIC(p,s)  | 
|  FLOAT  | FLOAT | 
|  TEXT Unicode characters up to 16,777,216 bytes; up to 4 bytes per character.  |  VARCHAR(MAX)  | 
|  TEXT(p) Unicode characters up to 65,535 bytes; up to 4 bytes per character.  |  If p <= 65,535, then VARCHAR(p)  | 
|  TEXT(p) Unicode characters up to 16,777,216 bytes; up to 4 bytes per character.  |  If p >= 65,535 and <= 16,777,216, then VARCHAR(MAX)  | 
|  BINARY Single-byte characters up to 8,388,608 bytes; 1 byte per character.  | VARCHAR(MAX) | 
|  BINARY(p) Single-byte characters up to 65,535 bytes; 1 byte per character.  | VARCHAR(p) | 
|  BINARY(p) Single-byte characters up to 8,388,608 bytes; 1 byte per character.  | VARCHAR(MAX) | 
|  BOOLEAN  | BOOLEAN | 
|  DATE  | DATE | 
|  TIME Time values between 00:00:00 and 23:59:59.999999999.  | VARCHAR(18) | 
|  TIME(f) Time values between 00:00:00 and 23:59:59.9(f).  | VARCHAR(n), where 9 <= n <= 18 depending on the precision f | 
|  TIMESTAMP_NTZ  | TIMESTAMP | 
|  TIMESTAMP_TZ  | TIMESTAMPTZ | 
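
The NUMBER rows of the table can be expressed as a small helper function. This is an illustrative sketch of the mapping logic shown above, not AWS SCT code:

```python
def redshift_type_for_number(p=None, s=None):
    # Map a Snowflake NUMBER type to the default Amazon Redshift type,
    # following the table above. NUMBER(p) and NUMBER(p, 0) map the
    # same way for the integer cases.
    if p is None:                # bare NUMBER
        return "NUMERIC(38)"
    if s:                        # nonzero scale
        return f"NUMERIC({p},{s})"
    if p <= 4:
        return "SMALLINT"
    if p <= 9:
        return "INTEGER"
    if p <= 18:
        return "BIGINT"
    return f"NUMERIC({p})"

print(redshift_type_for_number())       # NUMERIC(38)
print(redshift_type_for_number(9))      # INTEGER
print(redshift_type_for_number(12, 2))  # NUMERIC(12,2)
```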

## Snowflake to Amazon Redshift conversion settings
<a name="CHAP_Source.Snowflake.ConversionSettings"></a>

To edit Snowflake to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Snowflake**, and then choose **Snowflake – Amazon Redshift**. AWS SCT displays all available settings for Snowflake to Amazon Redshift conversion.

Snowflake to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the number of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if this is more than your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for the tables when you apply the converted code, then AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.

## Snowflake to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.Snowflake.ConversionOptimizationSettings"></a>

To edit Snowflake to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Snowflake**, and then choose **Snowflake – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for Snowflake to Amazon Redshift conversion.

Snowflake to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter the value (0–100). These values define the weight of each strategy. Using these weight values, AWS SCT defines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with a skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skew value for a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

  Also, on the **Optimization strategies** tab, you can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to consider it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.

# Connecting to a SQL Server Data Warehouse with the AWS Schema Conversion Tool
<a name="CHAP_Source.SQLServerDW"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Microsoft SQL Server DW to Amazon Redshift or Amazon Redshift and AWS Glue used in combination. 

## Privileges for Microsoft SQL Server Data Warehouse as a source
<a name="CHAP_Source.SQLServerDW.Permissions"></a>

The following privileges are required for using Microsoft SQL Server data warehouse as a source:
+ VIEW DEFINITION 
+ VIEW DATABASE STATE 
+ SELECT ON SCHEMA :: *<schema\_name>* 

In the preceding example, replace the *<schema\_name>* placeholder with the name of the source schema.

Repeat the grant for each database whose schema you are converting. 

In addition, grant the following, and run the grant on the master database: 
+ VIEW SERVER STATE 

## Limitations for SQL Server Data Warehouse as a source
<a name="CHAP_Source.SQLServerDW.Limitations"></a>

Using Microsoft SQL Server Parallel Data Warehouse (PDW) as a source isn't currently supported.

## Connecting to SQL Server Data Warehouse as a source
<a name="CHAP_Source.SQLServerDW.Connecting"></a>

Use the following procedure to connect to your SQL Server Data Warehouse source database with the AWS Schema Conversion Tool. 

**To connect to a SQL Server Data Warehouse source database**

1. In the AWS Schema Conversion Tool, choose **Add source**.

1. Choose **Microsoft SQL Server**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Microsoft SQL Server source data warehouse connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.SQLServerDW.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

## SQL Server Data Warehouse to Amazon Redshift conversion settings
<a name="CHAP_Source.SQLServerDW.ConversionSettings"></a>

To edit SQL Server Data Warehouse to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Microsoft SQL Server**, and then choose **Microsoft SQL Server – Amazon Redshift**. AWS SCT displays all available settings for SQL Server Data Warehouse to Amazon Redshift conversion.

SQL Server Data Warehouse to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the use of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if the total exceeds what your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for tables when you apply the converted code, AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To migrate partitions of the source table to separate tables in Amazon Redshift. To do so, select **Use the UNION ALL view** and enter the maximum number of target tables that AWS SCT can create for a single source table.

  Amazon Redshift doesn't support table partitioning. To emulate this behavior and make queries run faster, AWS SCT can migrate each partition of your source table to a separate table in Amazon Redshift. Then, AWS SCT creates a view that includes data from all these tables.

  AWS SCT automatically determines the number of partitions in your source table. Depending on the type of source table partitioning, this number can exceed the quota for the tables that you can apply to your Amazon Redshift cluster. To avoid reaching this quota, enter the maximum number of target tables that AWS SCT can create for partitions of a single source table. The default option is 368 tables, which represents a partition for 366 days of a year and two tables for `NO RANGE` and `UNKNOWN` partitions.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.
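The **Use the UNION ALL view** option described above replaces each source partition with its own target table and combines them with a view. The following sketch illustrates the resulting DDL shape; the table, columns, and partitions are hypothetical, and actual AWS SCT output differs.

```python
# Illustrative sketch of the DDL shape produced by the UNION ALL view
# option: one target table per source partition, plus a view that
# combines them. Names and columns here are invented for illustration.

def union_all_ddl(table, col_defs, col_names, partitions):
    """Return one CREATE TABLE per partition and a combining CREATE VIEW."""
    ddl = [f"CREATE TABLE {table}_{p} ({col_defs});" for p in partitions]
    union = " UNION ALL ".join(
        f"SELECT {', '.join(col_names)} FROM {table}_{p}" for p in partitions
    )
    ddl.append(f"CREATE VIEW {table} AS {union};")
    return ddl

for stmt in union_all_ddl("sales", "id INT, sold_at DATE",
                          ["id", "sold_at"], ["2023", "2024"]):
    print(stmt)
```

Queries against the view see the data from every per-partition table, which is how the converted code emulates partition pruning that Amazon Redshift doesn't provide natively.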

## SQL Server Data Warehouse to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.SQLServerDW.ConversionOptimizationSettings"></a>

To edit SQL Server Data Warehouse to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Microsoft SQL Server**, and then choose **Microsoft SQL Server – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for SQL Server Data Warehouse to Amazon Redshift conversion.

SQL Server Data Warehouse to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter a value (0–100). These values define the weight of each strategy. Using these weights, AWS SCT defines how each rule influences the choice of distribution and sort keys. The default values are based on the AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the skew threshold for a column as a percentage (0–100). AWS SCT excludes columns with a skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skew value of a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

  Also, on the **Optimization strategies** tab, you can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to consider it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
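The skew definition used by the **Skewed threshold value** setting (the share of rows held by a column's most common value) can be computed directly. A minimal illustration:

```python
from collections import Counter

# Skew of a column as defined above: the percentage of rows that hold
# the column's most common value.

def column_skew(values):
    most_common_count = Counter(values).most_common(1)[0][1]
    return 100.0 * most_common_count / len(values)

# A column dominated by one value is a poor distribution-key candidate;
# with a threshold below its skew, it is excluded from the candidates.
column_skew(["US", "US", "US", "DE"])  # 75.0
```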

# Connecting to a Teradata Data Warehouse with the AWS Schema Conversion Tool
<a name="CHAP_Source.Teradata"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Teradata to Amazon Redshift or Amazon Redshift and AWS Glue used in combination.

## Privileges for Teradata as a source
<a name="CHAP_Source.Teradata.Permissions"></a>

The following privileges are required for using Teradata as a source:
+ SELECT ON DBC 
+ SELECT ON SYSUDTLIB 
+ SELECT ON SYSLIB 
+ SELECT ON *<source\_database>* 
+ CREATE PROCEDURE ON *<source\_database>* 

In the preceding example, replace the *<source\_database>* placeholder with the name of the source database.

AWS SCT requires the CREATE PROCEDURE privilege to perform HELP PROCEDURE against all procedures in the source database. AWS SCT doesn't use this privilege to create any new objects in your source Teradata database.

## Connecting to Teradata as a source
<a name="CHAP_Source.Teradata.Connecting"></a>

Use the following procedure to connect to your Teradata source database with the AWS Schema Conversion Tool. 

**To connect to a Teradata source database**

1. In the AWS Schema Conversion Tool, choose **Add source**. 

1. Choose **Teradata**, then choose **Next**. 

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Teradata source database connection information manually, use the following instructions:  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Teradata.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database. 

1. Choose **Connect** to connect to your source database.

### Using LDAP authentication with a Teradata source
<a name="CHAP_Source.Teradata.Connecting.LDAP"></a>

To set up Lightweight Directory Access Protocol (LDAP) authentication for Teradata users who run Microsoft Active Directory in Windows, use the following procedure. 

In the following procedure, the Active Directory domain is `test.local.com`. The Windows server is `DC`, and it's configured with default settings. The following script creates the `test_ldap` Active Directory account, and this account uses the `test_ldap` password.

**To set up LDAP authentication for Teradata users who run Microsoft Active Directory in Windows**

1. In the `/opt/teradata/tdat/tdgss/site` directory, edit the `TdgssUserConfigFile.xml` file. Change the LDAP section to the following.

   ```
   AuthorizationSupported="no"
   
   LdapServerName="DC.test.local.com"
   LdapServerPort="389"
   LdapServerRealm="test.local.com"
   LdapSystemFQDN="dc= test, dc= local, dc=com"
   LdapBaseFQDN="dc=test, dc=local, dc=com"
   ```

1. Apply the changes by running the configuration as follows.

   ```
   #cd /opt/teradata/tdgss/bin
   #./run_tdgssconfig
   ```

1. Test the configuration by running the following command.

   ```
   # /opt/teradata/tdat/tdgss/14.10.03.01/bin/tdsbind -u test_ldap -w test_ldap
   ```

   The output should be similar to the following.

   ```
   LdapGroupBaseFQDN: dc=Test, dc=local, dc=com
   LdapUserBaseFQDN: dc=Test, dc=local, dc=com
   LdapSystemFQDN: dc= test, dc= local, dc=com
   LdapServerName: DC.test.local.com
   LdapServerPort: 389
   LdapServerRealm: test.local.com
   LdapClientUseTls: no
   LdapClientTlsReqCert: never
   LdapClientMechanism: SASL/DIGEST-MD5
   LdapServiceBindRequired: no
   LdapClientTlsCRLCheck: none
   LdapAllowUnsafeServerConnect: yes
   UseLdapConfig: no
   AuthorizationSupported: no
   FQDN: CN=test, CN=Users, DC=Anthem, DC=local, DC=com
   AuthUser: ldap://DC.test.local.com:389/CN=test1,CN=Users,DC=test,DC=local,DC=com
   DatabaseName: test
   Service: tdsbind
   ```

1. Restart TPA using the following command.

   ```
   #tpareset -f "use updated TDGSSCONFIG GDO"
   ```

1. Create the same user in the Teradata database as in Active Directory, as shown following.

   ```
   CREATE USER test_ldap AS PERM=1000, PASSWORD=test_ldap;
   GRANT LOGON ON ALL TO test_ldap WITH NULL PASSWORD;
   ```

If you change the user password in Active Directory for your LDAP user, specify this new password during connection to Teradata in LDAP mode. In DEFAULT mode, you connect to Teradata by using the LDAP user name and any password.

## Configuring statistics collection in your source Teradata data warehouse
<a name="CHAP_Source.Teradata.ConfigureStatistics"></a>

To convert your source Teradata data warehouse, AWS SCT uses statistics to optimize your converted Amazon Redshift data warehouse. You can collect statistics in AWS SCT or upload the statistics file. For more information, see [Collecting or uploading statistics](CHAP_Converting.DW.md#CHAP_Converting.DW.Statistics).

To make sure that AWS SCT can collect statistics from your data warehouse, complete the following prerequisite tasks.

**To collect statistics from your Teradata data warehouse**

1. Run the following query to recollect statistics for all tables in your data warehouse.

   ```
   collect summary statistics on table_name;
   ```

   In the preceding example, replace *table\_name* with the name of your source table. Repeat the query for each table that you convert.

1. Run the following query to determine the account string for the user, which you use to convert your data warehouse.

   ```
   select * from dbc.accountinfo where username = 'user_name';
   ```

1. Turn on query logging for a specific user using the account string from the previous example.

   ```
   BEGIN QUERY LOGGING WITH OBJECTS, SQL ON ALL ACCOUNT=('$M$BUSI$S$D$H');
   ```

   Alternatively, turn on query logging for all database users.

   ```
   BEGIN QUERY LOGGING WITH SQL, OBJECTS LIMIT SQLTEXT=0 ON ALL;
   ```

After you finish collecting data warehouse statistics, turn off query logging. To do so, you can use the following code example.

```
END QUERY LOGGING WITH EXPLAIN, OBJECTS, SQL ON ALL ACCOUNT=('$M$BUSI$S$D$H');
```
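Because the statistics query must be repeated for every converted table, the statements can be generated in bulk. A minimal sketch; the table names are placeholders for your own list:

```python
# Generate one COLLECT SUMMARY STATISTICS statement per table so the
# query doesn't have to be typed by hand for each table you convert.
# The table names below are hypothetical examples.

def collect_statements(tables):
    return [f"COLLECT SUMMARY STATISTICS ON {table};" for table in tables]

for statement in collect_statements(["orders", "customers"]):
    print(statement)
```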

## Collecting statistics in an offline mode from your source Teradata data warehouse
<a name="CHAP_Source.Teradata.CollectStatistics"></a>

After you configure the statistics collection in your Teradata data warehouse, you can collect statistics in your AWS SCT project. Alternatively, you can use Basic Teradata Query (BTEQ) scripts to collect statistics in an offline mode. Then, you can upload the files with collected statistics to your AWS SCT project. For more information, see [Collecting or uploading statistics](CHAP_Converting.DW.md#CHAP_Converting.DW.Statistics).

**To collect statistics from your Teradata data warehouse in an offline mode**

1. Create the `off-line_stats.bteq` script with the following content.

   ```
   .OS IF EXIST column-stats-tera.csv del /F column-stats-tera.csv
   .OS IF EXIST table-stats-tera.csv del /F table-stats-tera.csv
   .OS IF EXIST column-skew-script-tera.csv del /F column-skew-script-tera.csv
   .OS IF EXIST column-skew-stats-tera.csv del /F column-skew-stats-tera.csv
   .OS IF EXIST query-stats-tera.csv  del /F query-stats-tera.csv
   .LOGON your_teradata_server/your_login, your_password
   .EXPORT REPORT FILE = table-stats-tera.csv
   .SET TITLEDASHES OFF
   .SET WIDTH 10000
   
   SELECT
       '"' || OREPLACE(COALESCE(c.DatabaseName, ''), '"', '""') || '";' ||
       '"' || OREPLACE(COALESCE(c.TableName, ''), '"', '""') || '";' ||
       '"' || TRIM(COALESCE(s.reference_count, '0')) || '";' ||
       '"' || TRIM(COALESCE(CAST(p.RowCount AS BIGINT), '0')) || '";' ||
       '"' || CAST(CAST(w.size_in_mb AS DECIMAL (38,1) FORMAT 'Z9.9') AS VARCHAR(38)) || '";' ||
       '"' || TRIM(COALESCE(r.stat_fk_dep_count, '0')) || '";' ||
       '"' || CAST(CAST(current_timestamp(0) as timestamp(0) format 'YYYY-MM-DDBHH:MI:SS') as VARCHAR(19)) || '"'
   (TITLE '"database_name";"table_name";"reference_count";"row_count";"size_in_mb";"stat_fk_dep_count";"current_ts"')
   FROM (select databasename, tablename
           from DBC.tablesv
           where tablekind IN ('T','O')
           and databasename = 'your_database_name'
            ) c
   left join
           (select DatabaseName, TableName, max(RowCount) RowCount
           from dbc.tableStatsv
           group by 1,2)p
   on p.databasename = c.databasename
   and p.tablename = c.tablename
   left join
           (SELECT r.ChildDB as DatabaseName,
           r.ChildTable as TableName,
           COUNT(DISTINCT r.ParentTable) reference_count
           FROM DBC.All_RI_ChildrenV r
           GROUP BY r.ChildDB, r.ChildTable) s
   on s.databasename = c.databasename
   and s.tablename = c.tablename
   left join
           (SELECT r.ParentDB as DatabaseName,
           r.ParentTable as TableName,
           COUNT(DISTINCT r.ChildTable) stat_fk_dep_count
           FROM DBC.All_RI_ParentsV r
           GROUP BY r.ParentDB, r.ParentTable) r
   on r.databasename = c.databasename
   and r.tablename = c.tablename
   left join
           (select databasename, tablename,
           sum(currentperm)/1024/1024 as size_in_mb
           from dbc.TableSizeV
           group by 1,2) w
   on w.databasename = c.databasename
   and w.tablename = c.tablename
   WHERE COALESCE(r.stat_fk_dep_count,0) + COALESCE(CAST(p.RowCount AS BIGINT),0) + COALESCE(s.reference_count,0) > 0;
   
   .EXPORT RESET
   
   .EXPORT REPORT FILE = column-stats-tera.csv
   .SET TITLEDASHES OFF
   .SET WIDTH 10000
   
   SELECT
       '"' || OREPLACE(COALESCE(TRIM(tv.DatabaseName), ''), '"', '""') || '";' ||
       '"' || OREPLACE(COALESCE(TRIM(tv.TableName), ''), '"', '""') || '";' ||
       '"' || OREPLACE(COALESCE(TRIM(tv.columnname), ''), '"', '""') || '";' ||
       '"' || TRIM(COALESCE(CAST(t2.card AS BIGINT), '0')) || '";' ||
       '"' || CAST(current_timestamp AS VARCHAR(19)) || '"' (TITLE '"database_name";"table_name";"column_name";"cardinality";"current_ts"')
   FROM dbc.columnsv tv
   LEFT JOIN
   (
   	SELECT
   		c.DatabaseName	AS DATABASE_NAME,
   		c.TABLENAME 	AS TABLE_NAME,
   		c.ColumnName	AS COLUMN_NAME,
   		c.UniqueValueCount	AS CARD
   	FROM dbc.tablestatsv c
   	WHERE c.DatabaseName = 'your_database_name'
   	AND c.RowCount <> 0
   ) t2
   ON tv.DATABASENAME = t2.DATABASE_NAME
   AND tv.TABLENAME = t2.TABLE_NAME
   AND tv.COLUMNNAME = t2.COLUMN_NAME
   WHERE t2.card > 0;
   
   .EXPORT RESET
   
   .EXPORT REPORT FILE = column-skew-script-tera.csv
   .SET TITLEDASHES OFF
   .SET WIDTH 10000
   
   SELECT
   'SELECT CAST(''"' || TRIM(c.DatabaseName) || '";"' || TRIM(c.TABLENAME)  || '";"' || TRIM(c.COLUMNNAME) || '";"'' ||
   TRIM(CAST(COALESCE(MAX(cnt) * 1.0 / SUM(cnt), 0) AS NUMBER FORMAT ''9.9999'')) || ''";"'' ||
   CAST(CURRENT_TIMESTAMP(0) AS VARCHAR(19)) || ''"'' AS VARCHAR(512))
   AS """DATABASE_NAME"";""TABLE_NAME"";""COLUMN_NAME"";""SKEWED"";""CURRENT_TS"""
   FROM(
   SELECT	COUNT(*) AS cnt
   FROM "' || c.DATABASENAME || '"."' || c.TABLENAME ||
   '" GROUP BY "' || c.COLUMNNAME || '") t' ||
   	CASE WHEN ROW_NUMBER() OVER(PARTITION BY c.DATABASENAME
   	ORDER BY c.TABLENAME DESC, c.COLUMNNAME DESC) <> 1
   	THEN ' UNION ALL'
   	ELSE ';' END (TITLE '--SKEWED--')
   FROM	dbc.columnsv c
   INNER JOIN
   (SELECT databasename, TABLENAME
   FROM dbc.tablesv  WHERE tablekind = 'T'
   AND 	databasename = 'your_database_name') t
   ON t.databasename = c.databasename
   AND t.TABLENAME = c.TABLENAME
   INNER JOIN
   (SELECT databasename, TABLENAME, columnname FROM dbc.indices
   WHERE TRANSLATE_CHK (databasename USING LATIN_TO_UNICODE) + TRANSLATE_CHK (TABLENAME USING LATIN_TO_UNICODE) + TRANSLATE_CHK (columnname USING LATIN_TO_UNICODE) = 0
   GROUP BY 1,2,3
   ) i
   ON i.databasename = c.databasename
   AND i.TABLENAME = c.TABLENAME
   AND i.columnname = c.columnname
   WHERE c.ColumnType NOT IN ('CO','JN','N','++','VA','UT','AN','XM','A1','BO')
   ORDER BY c.TABLENAME, c.COLUMNNAME;
   
   .EXPORT RESET
   
   .EXPORT REPORT FILE = column-skew-stats-tera.csv
   .SET TITLEDASHES OFF
   .SET WIDTH 10000
   
   .RUN FILE = column-skew-script-tera.csv
   
   .EXPORT RESET
   
   .EXPORT REPORT FILE = query-stats-tera.csv
   .SET TITLEDASHES OFF
   .SET WIDTH 32000
   
   SELECT
     '"' || RTRIM(CAST(SqlTextInfo AS VARCHAR(31900)), ';') || '";"' ||
     TRIM(QueryCount) || '";"' ||
     TRIM(QueryId) || '";"' ||
     TRIM(SqlRowNo) || '";"' ||
     TRIM(QueryParts) || '";"' ||
     CAST(CURRENT_TIMESTAMP(0) AS VARCHAR(19)) || '"'
   (TITLE '"query_text";"query_count";"query_id";"sql_row_no";"query_parts";"current_ts"')
     FROM
     (
       SELECT  QueryId,  SqlTextInfo, SqlRowNo, QueryParts, QueryCount,
       SUM(QueryFirstRow) OVER (ORDER BY QueryCount DESC, QueryId ASC, SqlRowNo ASC
       ROWS UNBOUNDED PRECEDING) AS topN
       FROM
       (SELECT QueryId,  SqlTextInfo, SqlRowNo, QueryParts, QueryCount,
         CASE WHEN
         ROW_NUMBER() OVER (PARTITION BY QueryCount, SqlTextInfo ORDER BY QueryId, SqlRowNo) = 1 AND SqlRowNo = 1
       THEN 1 ELSE 0 END AS QueryFirstRow
       FROM (
         SELECT q.QueryId,  q.SqlTextInfo, q.SqlRowNo,
         MAX(q.SqlRowNo) OVER (PARTITION BY q.QueryId) QueryParts,
         COUNT(q.SqlTextInfo) OVER (PARTITION BY q.SqlTextInfo) QueryCount
         FROM DBC.dbqlsqltbl q
         INNER JOIN
         (
           SELECT QueryId
           FROM DBC.DBQLogTbl t
           WHERE TRIM(t.StatementType) IN ('SELECT')
           AND TRIM(t.AbortFlag) = '' AND t.ERRORCODE = 0
           AND 	(CASE WHEN 'All users' IN ('All users') THEN 'All users' ELSE TRIM(t.USERNAME) END) IN ('All users') --user_name list
           AND t.StartTime > CURRENT_TIMESTAMP - INTERVAL '30' DAY
           GROUP BY 1
         ) t
         ON q.QueryId = t.QueryId
         INNER JOIN
         (
           SELECT QueryId
           FROM DBC.QryLogObjectsV
           WHERE ObjectDatabaseName = 'your_database_name'
           AND ObjectType = 'Tab'
           AND CollectTimeStamp > CURRENT_TIMESTAMP - INTERVAL '30' DAY
           GROUP BY 1
         ) r
         ON r.QueryId = t.QueryId
         WHERE q.CollectTimeStamp > CURRENT_TIMESTAMP - INTERVAL '30' DAY
       ) t
     ) t
     WHERE SqlTextInfo NOT LIKE '%";"%'
     ) q
      WHERE
      QueryParts >= 1
      AND topN <= 50
      QUALIFY COUNT(QueryId) OVER (PARTITION BY QueryId) = QueryParts
      ORDER BY QueryCount DESC, QueryId, SqlRowNo;
   
   .EXPORT RESET
   
   .LOGOFF
   
   .QUIT
   ```

1. Create the `td_run_bteq.bat` file that runs the BTEQ script that you created in the previous step. Use the following content for this file.

   ```
   @echo off > off-line_stats1.bteq & setLocal enableDELAYedexpansion
   @echo off > off-line_stats2.bteq & setLocal enableDELAYedexpansion
   
   set old1=your_teradata_server
   set new1=%1
   set old2=your_login
   set new2=%2
   set old3=your_database_name
   set new3=%3
   set old4=your_password
   set /p new4=Input %2 pass?
   
   for /f "tokens=* delims= " %%a in (off-line_stats.bteq) do (
   set str1=%%a
   set str1=!str1:%old1%=%new1%!
   >> off-line_stats1.bteq echo !str1!
   )
   
   for /f "tokens=* delims= " %%a in (off-line_stats1.bteq) do (
   set str2=%%a
   set str2=!str2:%old2%=%new2%!
   >> off-line_stats2.bteq echo !str2!
   )
   
   type nul > off-line_stats1.bteq
   
   for /f "tokens=* delims= " %%a in (off-line_stats2.bteq) do (
   set str3=%%a
   set str3=!str3:%old3%=%new3%!
   >> off-line_stats1.bteq echo !str3!
   )
   
   type nul > off-line_stats2.bteq
   
   for /f "tokens=* delims= " %%a in (off-line_stats1.bteq) do (
   set str4=%%a
   set str4=!str4:%old4%=%new4%!
   >> off-line_stats2.bteq echo !str4!
   )
   
   del .\off-line_stats1.bteq
   
   echo export starting...
   
   bteq -c UTF8 < off-line_stats2.bteq > metadata_export.log
   
   pause
   ```

1. Create the `runme.bat` file that runs the batch file that you created in the previous step. Use the following content for this file.

   ```
   .\td_run_bteq.bat ServerName UserName DatabaseName
   ```

   In the `runme.bat` file, replace *ServerName*, *UserName*, and *DatabaseName* with your applicable values.

   Then, run the `runme.bat` file. Repeat this step for each data warehouse that you convert to Amazon Redshift.

After you run this script, you receive three files with statistics for each database. You can upload these files to your AWS SCT project. To do so, choose your data warehouse from the left panel of your project, and open the context (right-click) menu. Choose **Upload Statistics**.
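The placeholder substitution that `td_run_bteq.bat` performs is Windows-specific. As a rough cross-platform alternative (an assumption, not part of the documented procedure), a short script can render the BTEQ template directly:

```python
# Cross-platform sketch (not part of AWS SCT or the documented
# procedure): substitute the connection placeholders in the
# off-line_stats.bteq template, as td_run_bteq.bat does on Windows.
from pathlib import Path

def render_bteq(template_path, out_path, server, login, database, password):
    """Replace the template placeholders and write the runnable script."""
    text = Path(template_path).read_text()
    replacements = {
        "your_teradata_server": server,
        "your_login": login,
        "your_database_name": database,
        "your_password": password,
    }
    for placeholder, value in replacements.items():
        text = text.replace(placeholder, value)
    Path(out_path).write_text(text)

# Usage (values are placeholders for your own connection details):
# render_bteq("off-line_stats.bteq", "off-line_stats2.bteq",
#             "ServerName", "UserName", "DatabaseName", "Password")
# then run: bteq -c UTF8 < off-line_stats2.bteq > metadata_export.log
```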

## Teradata to Amazon Redshift conversion settings
<a name="CHAP_Source.Teradata.ConversionSettings"></a>

To edit Teradata to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Teradata**, and then choose **Teradata – Amazon Redshift**. AWS SCT displays all available settings for Teradata to Amazon Redshift conversion.

Teradata to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the use of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if the total exceeds what your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift cluster quota for tables when you apply the converted code, AWS SCT displays a warning message. Also, AWS SCT applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To migrate partitions of the source table to separate tables in Amazon Redshift. To do so, select **Use the UNION ALL view** and enter the maximum number of target tables that AWS SCT can create for a single source table.

  Amazon Redshift doesn't support table partitioning. To emulate this behavior and make queries run faster, AWS SCT can migrate each partition of your source table to a separate table in Amazon Redshift. Then, AWS SCT creates a view that includes data from all these tables.

  AWS SCT automatically determines the number of partitions in your source table. Depending on the type of source table partitioning, this number can exceed the quota for the tables that you can apply to your Amazon Redshift cluster. To avoid reaching this quota, enter the maximum number of target tables that AWS SCT can create for partitions of a single source table. The default option is 368 tables, which represents a partition for 366 days of a year and two tables for `NO RANGE` and `UNKNOWN` partitions.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.
+ To use an explicit list of columns in converted code for `SELECT *` statements, select **Use explicit column declaration**.
+ To emulate the behavior of primary and unique keys in your Amazon Redshift cluster, select **Emulate the behavior of primary and unique keys**.

  Amazon Redshift doesn't enforce unique and primary keys and uses them for informational purposes only. If you use these constraints in your code, then make sure that AWS SCT emulates their behavior in the converted code.
+ To ensure data uniqueness in the target Amazon Redshift tables. To do so, select **Emulate the behavior of SET tables**.

  Teradata creates tables using the `SET` syntax element as a default option. You can't add duplicate rows in a `SET` table. If your source code doesn't use this uniqueness constraint, then turn off this option. In this case, the converted code works faster.

  If your source code uses the `SET` option in tables as a uniqueness constraint, turn on this option. In this case, AWS SCT rewrites `INSERT..SELECT` statements in the converted code to emulate the behavior of your source database.
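The `SET` table emulation above boils down to discarding rows that already exist before inserting. A simplified illustration of that semantics (not the actual rewritten SQL that AWS SCT produces):

```python
# Simplified illustration of Teradata SET-table semantics: an insert
# silently drops rows that duplicate existing rows, or duplicate
# earlier rows in the same batch. Rows are hashable tuples here.

def set_insert(table_rows, new_rows):
    """Append new_rows to table_rows, discarding duplicate rows."""
    seen = set(table_rows)
    for row in new_rows:
        if row not in seen:
            table_rows.append(row)
            seen.add(row)
    return table_rows

rows = set_insert([(1, "a")], [(1, "a"), (2, "b"), (2, "b")])
# rows is [(1, "a"), (2, "b")] -- both duplicates were dropped
```

This per-row check is also why turning the option off makes the converted code faster when the source doesn't rely on `SET` uniqueness.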

## Teradata to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.Teradata.ConversionOptimizationSettings"></a>

To edit Teradata to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Teradata**, and then choose **Teradata – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for Teradata to Amazon Redshift conversion.

Teradata to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter a value (0–100). These values define the weight of each strategy. Using these weight values, AWS SCT determines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with a skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skew value of a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

  Also, on the **Optimization strategies** tab, you can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to consider it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
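As a concrete illustration of the skew metric described above, the following query computes the percentage ratio of the most common value to the total row count. The table and column names are hypothetical, and this is not the query AWS SCT runs internally:

```sql
-- Hypothetical query illustrating the skew metric:
-- 100 * (occurrences of the most common value) / (total number of records).
-- A column whose skew exceeds the configured threshold is excluded from
-- the distribution-key candidates.
SELECT 100.0 * MAX(value_count) / SUM(value_count) AS skew_percent
FROM (
    SELECT customer_region, COUNT(*) AS value_count
    FROM sales                   -- invented table name
    GROUP BY customer_region     -- invented candidate column
) AS value_counts;
```

A heavily skewed distribution key would concentrate rows on a few nodes, so excluding such columns keeps the workload balanced across the cluster.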

# Connecting the AWS Schema Conversion Tool to Vertica databases
<a name="CHAP_Source.Vertica"></a>

You can use AWS SCT to convert schemas, code objects, and application code from Vertica to Amazon Redshift.

## Privileges for Vertica as a source
<a name="CHAP_Source.Vertica.Permissions"></a>

The following privileges are required for using Vertica as a source:
+ USAGE ON SCHEMA *<schema_name>* 
+ USAGE ON SCHEMA PUBLIC 
+ SELECT ON ALL TABLES IN SCHEMA *<schema_name>* 
+ SELECT ON ALL SEQUENCES IN SCHEMA *<schema_name>* 
+ EXECUTE ON ALL FUNCTIONS IN SCHEMA *<schema_name>* 
+ EXECUTE ON PROCEDURE *<schema_name.procedure_name(procedure_signature)>* 

In the preceding list, replace placeholders as follows:
+ Replace *schema_name* with the name of the source schema.
+ Replace *procedure_name* with the name of a source procedure. Repeat the grant for each procedure that you are converting. 
+ Replace *procedure_signature* with the comma-delimited list of procedure argument types.
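As a sketch, these privileges might be granted with statements like the following. The user name, schema name, procedure name, and argument types are placeholders; verify the syntax against your Vertica version before running:

```sql
-- Example GRANT statements for a hypothetical migration user "sct_user"
-- converting the hypothetical schema "sales_schema".
GRANT USAGE ON SCHEMA sales_schema TO sct_user;
GRANT USAGE ON SCHEMA PUBLIC TO sct_user;
GRANT SELECT ON ALL TABLES IN SCHEMA sales_schema TO sct_user;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA sales_schema TO sct_user;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA sales_schema TO sct_user;
-- Repeat for each procedure that you are converting:
GRANT EXECUTE ON PROCEDURE sales_schema.load_orders(INT, VARCHAR) TO sct_user;
```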

## Connecting to Vertica as a source
<a name="CHAP_Source.Vertica.Connecting"></a>

Use the following procedure to connect to your Vertica source database with the AWS Schema Conversion Tool.

**To connect to a Vertica source database**

1. In the AWS Schema Conversion Tool, choose **Add source**.

1. Choose **Vertica**, then choose **Next**.

   The **Add source** dialog box appears.

1. For **Connection name**, enter a name for your database. AWS SCT displays this name in the tree in the left panel. 

1. Use database credentials from AWS Secrets Manager or enter them manually:
   + To use database credentials from Secrets Manager, use the following instructions:

     1. For **AWS Secret**, choose the name of the secret.

     1. Choose **Populate** to automatically fill in all values in the database connection dialog box from Secrets Manager.

     For information about using database credentials from Secrets Manager, see [Configuring AWS Secrets Manager in the AWS Schema Conversion Tool](CHAP_UserInterface.SecretsManager.md).
   + To enter the Vertica source database connection information manually, use the following instructions:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Vertica.html)

1. Choose **Test Connection** to verify that AWS SCT can connect to your source database.

1. Choose **Connect** to connect to your source database.

## Vertica to Amazon Redshift conversion settings
<a name="CHAP_Source.Vertica.ConversionSettings"></a>

To edit Vertica to Amazon Redshift conversion settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Vertica**, and then choose **Vertica – Amazon Redshift**. AWS SCT displays all available settings for Vertica to Amazon Redshift conversion.

Vertica to Amazon Redshift conversion settings in AWS SCT include options for the following:
+ To limit the number of comments with action items in the converted code.

  For **Add comments in the converted code for the action items of selected severity and higher**, choose the severity of action items. AWS SCT adds comments in the converted code for action items of the selected severity and higher.

  For example, to minimize the number of comments in your converted code, choose **Errors only**. To include comments for all action items in your converted code, choose **All messages**.
+ To set the maximum number of tables that AWS SCT can apply to your target Amazon Redshift cluster.

  For **The maximum number of tables for the target Amazon Redshift cluster**, choose the number of tables that AWS SCT can apply to your Amazon Redshift cluster.

  Amazon Redshift has quotas that limit the number of tables for different cluster node types. If you choose **Auto**, AWS SCT determines the number of tables to apply to your target Amazon Redshift cluster depending on the node type. Optionally, choose the value manually. For more information, see [Quotas and limits in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/amazon-redshift-limits.html) in the *Amazon Redshift Management Guide*.

  AWS SCT converts all your source tables, even if the number of tables exceeds what your Amazon Redshift cluster can store. AWS SCT stores the converted code in your project and doesn't apply it to the target database. If you reach the Amazon Redshift table quota for your cluster when you apply the converted code, then AWS SCT displays a warning message. AWS SCT also applies tables to your target Amazon Redshift cluster until the number of tables reaches the limit.
+ To migrate partitions of the source table to separate tables in Amazon Redshift. To do so, select **Use the UNION ALL view** and enter the maximum number of target tables that AWS SCT can create for a single source table.

  Amazon Redshift doesn't support table partitioning. To emulate this behavior and make queries run faster, AWS SCT can migrate each partition of your source table to a separate table in Amazon Redshift. Then, AWS SCT creates a view that includes data from all these tables.

  AWS SCT automatically determines the number of partitions in your source table. Depending on the type of source table partitioning, this number can exceed the quota for the tables that you can apply to your Amazon Redshift cluster. To avoid reaching this quota, enter the maximum number of target tables that AWS SCT can create for partitions of a single source table. The default option is 368 tables, which represents a partition for 366 days of a year and two tables for `NO RANGE` and `UNKNOWN` partitions.
+ To apply compression to Amazon Redshift table columns. To do so, select **Use compression encoding**.

  AWS SCT assigns compression encoding to columns automatically using the default Amazon Redshift algorithm. For more information, see [Compression encodings](https://docs.aws.amazon.com/redshift/latest/dg/c_Compression_encodings.html) in the *Amazon Redshift Database Developer Guide*.

  By default, Amazon Redshift doesn't apply compression to columns that are defined as sort and distribution keys. You can change this behavior and apply compression to these columns. To do so, select **Use compression encoding for KEY columns**. You can select this option only when you select the **Use compression encoding** option.
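The `UNION ALL` view approach described above can be sketched as follows. The table and view names are invented for illustration, and the code that AWS SCT generates will differ:

```sql
-- Hypothetical emulation of a partitioned source table in Amazon Redshift.
-- Each partition of the source table becomes a separate target table:
CREATE TABLE sales_2023 (sale_id BIGINT, sale_date DATE, amount DECIMAL(12,2));
CREATE TABLE sales_2024 (sale_id BIGINT, sale_date DATE, amount DECIMAL(12,2));

-- A view combines the per-partition tables so that queries can still
-- reference a single object:
CREATE VIEW sales AS
SELECT * FROM sales_2023
UNION ALL
SELECT * FROM sales_2024;
```

Queries that filter on the partitioning column (here, `sale_date`) can then scan only the relevant per-partition tables, which is what makes this emulation faster for partition-pruned workloads.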

## Vertica to Amazon Redshift conversion optimization settings
<a name="CHAP_Source.Vertica.ConversionOptimizationSettings"></a>

To edit Vertica to Amazon Redshift conversion optimization settings, choose **Settings** in AWS SCT, and then choose **Conversion settings**. From the upper list, choose **Vertica**, and then choose **Vertica – Amazon Redshift**. In the left pane, choose **Optimization strategies**. AWS SCT displays conversion optimization settings for Vertica to Amazon Redshift conversion.

Vertica to Amazon Redshift conversion optimization settings in AWS SCT include options for the following:
+ To work with automatic table optimization. To do so, select **Use Amazon Redshift automatic table tuning**.

  Automatic table optimization is a self-tuning process in Amazon Redshift that automatically optimizes the design of tables. For more information, see [Working with automatic table optimization](https://docs.aws.amazon.com/redshift/latest/dg/t_Creating_tables.html) in the *Amazon Redshift Database Developer Guide*.

  To rely only on the automatic table optimization, choose **None** for **Initial key selection strategy**.
+ To choose sort and distribution keys using your strategy.

  You can choose sort and distribution keys using Amazon Redshift metadata, statistical information, or both these options. For **Initial key selection strategy** on the **Optimization strategies** tab, choose one of the following options:
  + Use metadata, ignore statistical information
  + Ignore metadata, use statistical information
  + Use metadata and statistical information

  Depending on the option that you choose, you can select optimization strategies. Then, for each strategy, enter a value (0–100). These values define the weight of each strategy. Using these weight values, AWS SCT determines how each rule influences the choice of distribution and sort keys. The default values are based on AWS migration best practices.

  You can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to define it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
+ To configure strategy details.

  In addition to defining the weight for each optimization strategy, you can configure the optimization settings. To do so, choose **Conversion optimization**. 
  + For **Sort key columns limit**, enter the maximum number of columns in the sort key.
  + For **Skewed threshold value**, enter the percentage (0–100) of a skewed value for a column. AWS SCT excludes columns with a skew value greater than the threshold from the list of candidates for the distribution key. AWS SCT defines the skew value of a column as the percentage ratio of the number of occurrences of the most common value to the total number of records.
  + For **Top N queries from the query history table**, enter the number (1–100) of the most frequently used queries to analyze.
  + For **Select statistics user**, choose the database user for which you want to analyze the query statistics.

  Also, on the **Optimization strategies** tab, you can define the size of small tables for the **Find small tables** strategy. For **Min table row count** and **Max table row count**, enter the minimum and maximum number of rows in a table to consider it as a small table. AWS SCT applies the `ALL` distribution style to small tables. In this case, a copy of the entire table is distributed to every node.
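For reference, the `ALL` distribution style applied to small tables is expressed in Amazon Redshift DDL as follows. The table definition is a hypothetical example:

```sql
-- Hypothetical small dimension table using the ALL distribution style.
-- DISTSTYLE ALL places a full copy of the table on every compute node,
-- which avoids redistributing data when the table is joined.
CREATE TABLE country_dim (
    country_code CHAR(2),
    country_name VARCHAR(64)
)
DISTSTYLE ALL;
```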