

# Re-architect
<a name="migration-rearchitect-pattern-list"></a>

**Topics**
+ [Convert VARCHAR2(1) data type for Oracle to Boolean data type for Amazon Aurora PostgreSQL](convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql.md)
+ [Create application users and roles in Aurora PostgreSQL-Compatible](create-application-users-and-roles-in-aurora-postgresql-compatible.md)
+ [Emulate Oracle DR by using a PostgreSQL-compatible Aurora global database](emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database.md)
+ [Implement SHA1 hashing for PII data when migrating from SQL Server to PostgreSQL](implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.md)
+ [Incrementally migrate from Amazon RDS for Oracle to Amazon RDS for PostgreSQL using Oracle SQL Developer and AWS SCT](incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct.md)
+ [Load BLOB files into TEXT by using file encoding in Aurora PostgreSQL-Compatible](load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible.md)
+ [Migrate Amazon RDS for Oracle to Amazon RDS for PostgreSQL with AWS SCT and AWS DMS using AWS CLI and CloudFormation](migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation.md)
+ [Migrate Amazon RDS for Oracle to Amazon RDS for PostgreSQL in SSL mode by using AWS DMS](migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.md)
+ [Migrate Oracle SERIALLY\_REUSABLE pragma packages into PostgreSQL](migrate-oracle-serially-reusable-pragma-packages-into-postgresql.md)
+ [Migrate Oracle external tables to Amazon Aurora PostgreSQL-Compatible](migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible.md)
+ [Migrate function-based indexes from Oracle to PostgreSQL](migrate-function-based-indexes-from-oracle-to-postgresql.md)
+ [Migrate Oracle native functions to PostgreSQL using extensions](migrate-oracle-native-functions-to-postgresql-using-extensions.md)
+ [Migrate a Db2 database from Amazon EC2 to Aurora MySQL-Compatible by using AWS DMS](migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.md)
+ [Migrate a Microsoft SQL Server database from Amazon EC2 to Amazon DocumentDB by using AWS DMS](migrate-a-microsoft-sql-server-database-from-amazon-ec2-to-amazon-documentdb-by-using-aws-dms.md)
+ [Migrate an on-premises ThoughtSpot Falcon database to Amazon Redshift](migrate-an-on-premises-thoughtspot-falcon-database-to-amazon-redshift.md)
+ [Migrate from Oracle Database to Amazon RDS for PostgreSQL by using Oracle GoldenGate](migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate.md)
+ [Migrate an Oracle partitioned table to PostgreSQL by using AWS DMS](migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms.md)
+ [Migrate from Amazon RDS for Oracle to Amazon RDS for MySQL](migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-mysql.md)
+ [Migrate from IBM Db2 on Amazon EC2 to Aurora PostgreSQL-Compatible using AWS DMS and AWS SCT](migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct.md)
+ [Migrate from Oracle 8i or 9i to Amazon RDS for PostgreSQL using SharePlex and AWS DMS](migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-shareplex-and-aws-dms.md)
+ [Migrate from Oracle 8i or 9i to Amazon RDS for PostgreSQL using materialized views and AWS DMS](migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms.md)
+ [Migrate from Oracle on Amazon EC2 to Amazon RDS for MySQL using AWS DMS and AWS SCT](migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct.md)
+ [Migrate an Oracle database from Amazon EC2 to Amazon RDS for MariaDB using AWS DMS and AWS SCT](migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct.md)
+ [Migrate an on-premises Oracle database to Amazon RDS for MySQL using AWS DMS and AWS SCT](migrate-an-on-premises-oracle-database-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct.md)
+ [Migrate an on-premises Oracle database to Amazon RDS for PostgreSQL by using an Oracle bystander and AWS DMS](migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms.md)
+ [Migrate an Oracle Database to Amazon Redshift using AWS DMS and AWS SCT](migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-sct.md)
+ [Migrate an Oracle database to Aurora PostgreSQL using AWS DMS and AWS SCT](migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct.md)
+ [Migrate data from an on-premises Oracle database to Aurora PostgreSQL](migrate-data-from-an-on-premises-oracle-database-to-aurora-postgresql.md)
+ [Migrate from SAP ASE to Amazon RDS for SQL Server using AWS DMS](migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms.md)
+ [Migrate an on-premises Microsoft SQL Server database to Amazon Redshift using AWS DMS](migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-dms.md)
+ [Migrate an on-premises Microsoft SQL Server database to Amazon Redshift using AWS SCT data extraction agents](migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents.md)
+ [Migrate legacy applications from Oracle Pro\*C to ECPG](migrate-legacy-applications-from-oracle-pro-c-to-ecpg.md)
+ [Migrate virtual generated columns from Oracle to PostgreSQL](migrate-virtual-generated-columns-from-oracle-to-postgresql.md)
+ [Set up Oracle UTL\_FILE functionality on Aurora PostgreSQL-Compatible](set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible.md)
+ [Validate database objects after migrating from Oracle to Amazon Aurora PostgreSQL](validate-database-objects-after-migrating-from-oracle-to-amazon-aurora-postgresql.md)

# Convert VARCHAR2(1) data type for Oracle to Boolean data type for Amazon Aurora PostgreSQL
<a name="convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql"></a>

*Naresh Damera, Amazon Web Services*

## Summary
<a name="convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql-summary"></a>

During a migration from Amazon Relational Database Service (Amazon RDS) for Oracle to Amazon Aurora PostgreSQL-Compatible Edition, you might encounter a data mismatch when validating the migration in AWS Database Migration Service (AWS DMS). To prevent this mismatch, you can convert VARCHAR2(1) data type to Boolean data type.

The VARCHAR2 data type stores variable-length text strings. VARCHAR2(1) indicates a maximum length of 1 character, or 1 byte. For more information about VARCHAR2, see [Oracle built-in data types](https://docs.oracle.com/database/121/SQLRF/sql_elements001.htm#SQLRF30020) (Oracle documentation).

In this pattern, the sample source table column stores VARCHAR2(1) data as either **Y**, for *Yes*, or **N**, for *No*. This pattern includes instructions for using AWS DMS and AWS Schema Conversion Tool (AWS SCT) to convert the **Y** and **N** values in VARCHAR2(1) to **true** and **false** values in Boolean.
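
For reference, the following is a minimal sketch of the change, based on the sample table that appears later in this pattern (the exact table definition is illustrative):

```
-- Oracle source: the flag column is VARCHAR2(1) holding 'Y' or 'N'
CREATE TABLE admin.temp_chra_bool (
    no    NUMBER PRIMARY KEY,
    grade VARCHAR2(1)
);

-- Aurora PostgreSQL target: the same column becomes Boolean
CREATE TABLE admin.temp_chra_bool (
    no    integer PRIMARY KEY,
    grade boolean
);
```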

**Intended audience**

This pattern is recommended for those who have experience migrating Oracle databases to Aurora PostgreSQL-Compatible by using AWS DMS. As you complete the migration, adhere to the recommendations in [Converting Oracle to Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.ToPostgreSQL.html) (AWS SCT documentation).

## Prerequisites and limitations
<a name="convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Confirm that your environment is prepared for Aurora, including setting up credentials, permissions, and a security group. For more information, see [Setting up your environment for Amazon Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_SettingUp_Aurora.html) (Aurora documentation).
+ A source Amazon RDS for Oracle database that contains a table column with VARCHAR2(1) data.
+ A target Amazon Aurora PostgreSQL-Compatible database instance. For more information, see [Creating a database cluster and connecting to a database on an Aurora PostgreSQL database cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_GettingStartedAurora.CreatingConnecting.AuroraPostgreSQL.html#CHAP_GettingStarted.AuroraPostgreSQL.CreateDBCluster) (Aurora documentation).

**Product versions**
+ Amazon RDS for Oracle version 12.1.0.2 or later.
+ AWS DMS version 3.1.4 or later. For more information, see [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) and [Using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html) (AWS DMS documentation). We recommend that you use the latest version of AWS DMS for the most comprehensive version and feature support.
+ AWS Schema Conversion Tool (AWS SCT) version 1.0.632 or later. We recommend that you use the latest version of AWS SCT for the most comprehensive version and feature support.
+ Aurora supports the PostgreSQL versions listed in [Database Engine Versions for Aurora PostgreSQL-Compatible](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html) (Aurora documentation).

## Architecture
<a name="convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql-architecture"></a>

**Source technology stack**

Amazon RDS for Oracle database instance

**Target technology stack**

Amazon Aurora PostgreSQL-Compatible database instance

**Source and target architecture**

![\[Changing data types from VARCHAR2(1) to Boolean\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5d4dc568-20d8-4883-a942-21c81039d8e6/images/9fd82ae2-56e6-439c-b4cd-9e74fe77b480.png)


## Tools
<a name="convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql-tools"></a>

**AWS services**
+ [Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) is a fully managed, ACID-compliant relational database engine that helps you set up, operate, and scale PostgreSQL deployments.
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.
+ [Amazon Relational Database Service (Amazon RDS) for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale an Oracle relational database in the AWS Cloud.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format compatible with the target database.

**Other services**
+ [Oracle SQL Developer](https://docs.oracle.com/en/database/oracle/sql-developer/) is an integrated development environment that simplifies the development and management of Oracle databases in both traditional and cloud-based deployments. In this pattern, you use this tool to connect to the Amazon RDS for Oracle database instance and query the data.
+ [pgAdmin](https://www.pgadmin.org/docs/) is an open-source management tool for PostgreSQL. It provides a graphical interface that helps you create, maintain, and use database objects. In this pattern, you use this tool to connect to the Aurora database instance and query the data.

## Epics
<a name="convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql-epics"></a>

### Prepare for the migration
<a name="prepare-for-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create database migration report. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql.html) | DBA, Developer | 
| Disable foreign key constraints on the target database. | In PostgreSQL, foreign keys are implemented by using triggers. During the full load phase, AWS DMS loads each table one at a time. We strongly recommend that you disable foreign key constraints during a full load by using one of the following methods:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql.html)If disabling foreign key constraints is not feasible, create an AWS DMS migration task for the primary data that is specific to the parent table and child table. (A sketch of common methods follows this table.) | DBA, Developer | 
| Disable the primary keys and unique keys on the target database. | Using the following commands, disable the primary keys and constraints on the target database. This helps improve the performance of the initial load task.<pre>ALTER TABLE <table> DISABLE PRIMARY KEY;</pre><pre>ALTER TABLE <table> DISABLE CONSTRAINT <constraint_name>;</pre> | DBA, Developer | 
| Create the initial load task. | In AWS DMS, create the migration task for the initial load. For instructions, see [Creating a task](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html). For the migration method, choose **Migrate existing data**. This migration method is called `Full Load` in the API. Do not start this task yet. | DBA, Developer | 
| Edit task settings for the initial load task. | Edit the task settings to add data validation. These validation settings must be created in a JSON file. For instructions and examples, see [Specifying task settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html). Add the following validations:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql.html)To validate the rest of the data migration, enable data validation in the task. For more information, see [Data validation task settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.DataValidation.html). | AWS administrator, DBA | 
| Create the ongoing replication task. | In AWS DMS, create the migration task that keeps the target database in sync with the source database. For instructions, see [Creating a task](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html). For the migration method, choose **Replicate data changes only**. Do not start this task yet. | DBA | 
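
The detailed methods are on the AWS documentation website. As a sketch, two common ways to disable foreign key enforcement on a PostgreSQL target during the full load are the following (confirm that your user has the required privileges before you use them):

```
-- Option 1: turn off trigger-based constraint enforcement for the session
SET session_replication_role = 'replica';

-- Option 2: disable all triggers, including foreign key triggers, on a table
ALTER TABLE <table> DISABLE TRIGGER ALL;

-- Re-enable enforcement after the full load completes
SET session_replication_role = 'origin';
ALTER TABLE <table> ENABLE TRIGGER ALL;
```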

### Test the migration tasks
<a name="test-the-migration-tasks"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create sample data for testing. | In the source database, create a sample table with data for testing purposes. | Developer | 
| Confirm there are no conflicting activities. | Use the `pg_stat_activity` view to check for any activity on the server that might affect the migration. For more information, see [The Statistics Collector](https://www.postgresql.org/docs/current/monitoring-stats.html) (PostgreSQL documentation). | AWS administrator | 
| Start the AWS DMS migration tasks. | In the AWS DMS console, on the **Dashboard** page, start the initial load and ongoing replication tasks that you created in the previous epic. | AWS administrator | 
| Monitor the tasks and table load states. | During the migration, monitor the [task status](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html#CHAP_Tasks.Status) and the [table states](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html#CHAP_Tasks.CustomizingTasks.TableState). When the initial load task is complete, on the **Table statistics** tab:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql.html) | AWS administrator | 
| Verify the migration results. | Using pgAdmin, query the table on target database. A successful query indicates that the data was migrated successfully. | Developer | 
| Add primary keys and foreign keys on the target database. | Create the primary key and foreign key on the target database. For more information, see [ALTER TABLE](https://www.postgresql.org/docs/current/sql-altertable.html) (PostgreSQL website). | DBA | 
| Clean up the test data. | On the source and target databases, clean up data that was created for unit testing. | Developer | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Complete the migration. | Repeat the previous epic, *Test the migration tasks*, using the real source data. This migrates the data from the source to the target database. | Developer | 
| Validate that the source and target databases are in sync. | Validate that the source and target databases are in sync. For more information and instructions, see [AWS DMS data validation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html). | Developer | 
| Stop the source database. | Stop the Amazon RDS for Oracle database. For instructions, see [Stopping an Amazon RDS DB instance temporarily](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html). When you stop the source database, the initial load and ongoing replication tasks in AWS DMS are automatically stopped. No additional action is required to stop these tasks. | Developer | 

## Related resources
<a name="convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql-resources"></a>

**AWS references**
+ [Migrate an Oracle database to Aurora PostgreSQL using AWS DMS and AWS SCT](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct.html) (AWS Prescriptive Guidance)
+ [Converting Oracle to Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.ToPostgreSQL.html) (AWS SCT documentation)
+ [How AWS DMS Works](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.html) (AWS DMS documentation)

**Other references**
+ [Boolean data type](https://www.postgresqltutorial.com/postgresql-tutorial/postgresql-boolean/) (PostgreSQL Tutorial website)
+ [Oracle built-in data types](https://docs.oracle.com/database/121/SQLRF/sql_elements001.htm#SQLRF30020) (Oracle documentation)
+ [pgAdmin](https://www.pgadmin.org/) (pgAdmin website)
+ [SQL Developer](https://www.oracle.com/database/technologies/appdev/sql-developer.html) (Oracle website)

**Tutorials and videos**
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/)
+ [Getting Started With Amazon RDS](https://aws.amazon.com/rds/getting-started/)
+ [Introduction to AWS DMS](https://www.youtube.com/watch?v=ouia1Sc5QGo) (Video)
+ [Understanding Amazon RDS](https://www.youtube.com/watch?v=eMzCI7S1P9M) (Video)

## Additional information
<a name="convert-varchar2-1-data-type-for-oracle-to-boolean-data-type-for-amazon-aurora-postgresql-additional"></a>

**Data validation script**

The following data validation script converts **1** to **Y** and **0** to **N**. This helps the AWS DMS task successfully complete and pass the table validation.

```
{
    "rule-type": "validation",
    "rule-id": "5",
    "rule-name": "5",
    "rule-target": "column",
    "object-locator": {
        "schema-name": "ADMIN",
        "table-name": "TEMP_CHRA_BOOL",
        "column-name": "GRADE"
    },
    "rule-action": "override-validation-function",
    "target-function": "case grade when '1' then 'Y' else 'N' end"
}
```

The `case` statement in the script performs the validation. If validation fails, AWS DMS inserts a record in the **public.awsdms\_validation\_failures\_v1** table on the target database instance. This record includes the table name, error time, and details about the mismatched values in the source and target tables.
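
If validation fails, you can query the failure records directly on the target database; for example:

```
-- Inspect the AWS DMS validation failures recorded on the target
SELECT * FROM public.awsdms_validation_failures_v1;
```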

If you do not add this data validation script to the AWS DMS task and the data is inserted into the target table, the AWS DMS task shows the validation state as **Mismatched Records**.

During AWS SCT conversion, the VARCHAR2(1) data type is changed to Boolean, and a primary key constraint is added on the `"NO"` column.
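
If a migrated column still contains `Y` and `N` text values on the target, you can convert it manually. The following is a minimal sketch that uses the sample table from this pattern:

```
-- Convert an existing Y/N text column to Boolean in place
ALTER TABLE admin.temp_chra_bool
    ALTER COLUMN grade TYPE boolean
    USING (grade = 'Y');
```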

# Create application users and roles in Aurora PostgreSQL-Compatible
<a name="create-application-users-and-roles-in-aurora-postgresql-compatible"></a>

*Abhishek Verma, Amazon Web Services*

## Summary
<a name="create-application-users-and-roles-in-aurora-postgresql-compatible-summary"></a>

When you migrate to Amazon Aurora PostgreSQL-Compatible Edition, the database users and roles that exist on the source database must be created in the Aurora PostgreSQL-Compatible database. You can create the users and roles in Aurora PostgreSQL-Compatible by using two different approaches:
+ Use similar users and roles in the target as in the source database. In this approach, the data definition languages (DDLs) are extracted for users and roles from the source database. Then they are transformed and applied to the target Aurora PostgreSQL-Compatible database. For example, the blog post [Use SQL to map users, roles, and grants from Oracle to PostgreSQL](https://aws.amazon.com/blogs/database/use-sql-to-map-users-roles-and-grants-from-oracle-to-postgresql) covers using extraction from an Oracle source database engine.
+ Use standardized users and roles that are commonly used for development, administration, and other operations in the database. This includes read-only, read/write, development, administration, and deployment operations performed by the respective users.

This pattern contains the grants required to create the standardized users and roles in Aurora PostgreSQL-Compatible. The user and role creation steps are aligned to the security policy of granting least privilege to database users. The following table lists the users, their corresponding roles, and their details on the database.


| Users | Roles | Purpose | 
| --- |--- |--- |
| `APP_read` | `APP_RO` | Used for read-only access on the schema `APP` | 
| `APP_WRITE` | `APP_RW` | Used for the write and read operations on the schema `APP` | 
| `APP_dev_user` | `APP_DEV` | Used for the development purpose on the schema `APP_DEV`, with read-only access on the schema `APP` | 
| `Admin_User` | `rds_superuser` | Used for performing administrator operations on the database | 
| `APP` | `APP_DEP` | Used for creating the objects under the `APP` schema and for deployment of objects in the `APP` schema | 

## Prerequisites and limitations
<a name="create-application-users-and-roles-in-aurora-postgresql-compatible-prereqs"></a>

**Prerequisites**
+ An active Amazon Web Services (AWS) account
+ A PostgreSQL database, Amazon Aurora PostgreSQL-Compatible Edition database, or Amazon Relational Database Service (Amazon RDS) for PostgreSQL database

**Product versions**
+ All versions of PostgreSQL

## Architecture
<a name="create-application-users-and-roles-in-aurora-postgresql-compatible-architecture"></a>

**Source technology stack**
+ Any database

**Target technology stack**
+ Amazon Aurora PostgreSQL-Compatible

**Target architecture**

The following diagram shows user roles and the schema architecture in the Aurora PostgreSQL-Compatible database.

![\[User roles and schema architecture for the Aurora PostgreSQL-Comaptible database.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/80105a81-e3d1-4258-b3c1-77f3a5e78592/images/b95cb9bc-8bf7-47d1-92e7-66cfb37d7ce7.png)



**Automation and scale**

This pattern contains the users, roles, and schema creation script, which you can run multiple times without affecting existing users of the source or target database.

## Tools
<a name="create-application-users-and-roles-in-aurora-postgresql-compatible-tools"></a>

**AWS services**
+ [Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) is a fully managed, ACID-compliant relational database engine that helps you set up, operate, and scale PostgreSQL deployments.

**Other services**
+ [psql](https://www.postgresql.org/docs/current/app-psql.html) is a terminal-based front end that is installed with every PostgreSQL database installation. It has a command line interface for running SQL, PL/pgSQL, and operating system commands.
+ [pgAdmin](https://www.pgadmin.org/) is an open-source management tool for PostgreSQL. It provides a graphical interface that helps you create, maintain, and use database objects.

## Epics
<a name="create-application-users-and-roles-in-aurora-postgresql-compatible-epics"></a>

### Create the users and roles
<a name="create-the-users-and-roles"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the deployment user. | The deployment user `APP` will be used to create and modify the database objects during deployments. Use the following scripts to create the deployment user role `APP_DEP` in the schema `APP`. Validate access rights to make sure this user has only the privilege to create objects in the required schema `APP`.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-application-users-and-roles-in-aurora-postgresql-compatible.html) | DBA | 
| Create the read-only user. | The read-only user `APP_read` will be used for performing read-only operations in the schema `APP`. Use the following scripts to create the read-only user. Validate access rights to make sure that this user has privileges only to read the objects in the schema `APP` and that read access is granted automatically for any new objects created in the schema `APP`. (A sketch of this step follows this table.)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-application-users-and-roles-in-aurora-postgresql-compatible.html) | DBA | 
| Create the read/write user. | The read/write user `APP_WRITE` will be used to perform read and write operations on the schema `APP`. Use the following scripts to create the read/write user and grant it the `APP_RW` role. Validate access rights to make sure that this user has read and write privileges only on the objects in the schema `APP` and for automatically granting read and write access for any new object created in schema `APP`.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-application-users-and-roles-in-aurora-postgresql-compatible.html) |  | 
| Create the admin user. | The admin user `Admin_User` will be used to perform admin operations on the database. Examples of these operations are `CREATE ROLE` and `CREATE DATABASE`. `Admin_User` uses the built-in role `rds_superuser` to perform admin operations on the database. Use the following scripts to create and test the privilege for the admin user `Admin_User` in the database.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-application-users-and-roles-in-aurora-postgresql-compatible.html) | DBA | 
| Create the development user. | The development user `APP_dev_user` will have rights to create the objects in its local schema `APP_DEV` and read access in the schema `APP`. Use the following scripts to create and test the privileges of the user `APP_dev_user` in the database.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-application-users-and-roles-in-aurora-postgresql-compatible.html) | DBA | 

## Related resources
<a name="create-application-users-and-roles-in-aurora-postgresql-compatible-resources"></a>

**PostgreSQL documentation**
+ [CREATE ROLE](https://www.postgresql.org/docs/9.1/sql-createrole.html)
+ [CREATE USER](https://www.postgresql.org/docs/8.0/sql-createuser.html)
+ [Predefined Roles](https://www.postgresql.org/docs/14/predefined-roles.html)


## Additional information
<a name="create-application-users-and-roles-in-aurora-postgresql-compatible-additional"></a>

**PostgreSQL 14 enhancement**

PostgreSQL 14 provides a set of predefined roles that give access to certain commonly needed, privileged capabilities and information. Administrators (including roles that have the `CREATE ROLE` privilege) can grant these roles or other roles in their environment to users, providing them with access to the specified capabilities and information.

Administrators can grant users access to these roles using the `GRANT` command. For example, to grant the `pg_signal_backend` role to `Admin_User`, you can run the following command.

```
GRANT pg_signal_backend TO Admin_User;
```

The `pg_signal_backend` role is intended to allow administrators to enable trusted, non-superuser roles to send signals to other backends. For more information, see [Predefined Roles](https://www.postgresql.org/docs/14/predefined-roles.html) (PostgreSQL documentation).

**Fine-tuning access**

In some cases, it might be necessary to provide more granular access to users (for example, table-based or column-based access). In such cases, you can create additional roles to grant those privileges to users. For more information, see [GRANT](https://www.postgresql.org/docs/8.4/sql-grant.html) (PostgreSQL documentation).
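
For example, the following sketch grants access at the table level and at the column level (the table and column names are hypothetical):

```
-- Table-level grant on a single table
GRANT SELECT ON app.orders TO app_ro;

-- Column-level grant that exposes only specific columns
GRANT SELECT (order_id, order_date) ON app.orders TO app_ro;
```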

# Emulate Oracle DR by using a PostgreSQL-compatible Aurora global database
<a name="emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database"></a>

*HariKrishna Boorgadda, Amazon Web Services*

## Summary
<a name="emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database-summary"></a>

Best practices for enterprise disaster recovery (DR) consist of designing and implementing fault-tolerant hardware and software systems that can survive a disaster (*business continuance*) and resume normal operations (*business resumption*) with minimal intervention and, ideally, no data loss. Building fault-tolerant environments that satisfy enterprise DR objectives can be expensive and time consuming and requires a strong commitment by the business.

Oracle Database provides three approaches to DR that are designed to deliver the highest levels of data protection and availability for Oracle data:
+ Oracle Zero Data Loss Recovery Appliance
+ Oracle Active Data Guard
+ Oracle GoldenGate

This pattern provides a way to emulate Oracle GoldenGate DR by using an Amazon Aurora global database. The reference architecture uses Oracle GoldenGate for DR across three AWS Regions. The pattern walks through replatforming the source architecture to a cloud-native Aurora global database based on Amazon Aurora PostgreSQL–Compatible Edition.

Aurora global databases are designed for applications with a global footprint. A single Aurora database spans multiple AWS Regions with as many as five secondary Regions. Aurora global databases provide the following features:
+ Physical storage-level replication
+ Low-latency global reads
+ Fast disaster recovery from Region-wide outages
+ Fast cross-Region migrations
+ Low replication lag across Regions
+ Little-to-no performance impact on your database

For more information about Aurora global database features and advantages, see [Using Amazon Aurora global databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database-overview). For more information about unplanned and managed failovers, see [Using failover in an Amazon Aurora global database](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html#aurora-global-database-failover).

## Prerequisites and limitations
<a name="emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database-prereqs"></a>

**Prerequisites**
+ An active AWS account 
+ A Java Database Connectivity (JDBC) PostgreSQL driver for application connectivity
+ An Aurora global database based on Amazon Aurora PostgreSQL-Compatible Edition
+ An Oracle Real Application Clusters (RAC) database migrated to the Aurora global database based on Aurora PostgreSQL–Compatible

**Limitations of Aurora global databases**
+ Aurora global databases aren’t available in all AWS Regions. For a list of supported Regions, see [Aurora global databases with Aurora PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.html#Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.apg).
+ For information about features that aren’t supported and other limitations of Aurora global databases, see the [Limitations of Amazon Aurora global databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database.limitations).

**Product versions**
+ Amazon Aurora PostgreSQL–Compatible Edition version 10.14 or later

## Architecture
<a name="emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database-architecture"></a>

**Source technology stack**
+ Oracle RAC four-node database
+ Oracle GoldenGate

**Source architecture**

The following diagram shows three four-node Oracle RAC clusters in different AWS Regions, replicated by using Oracle GoldenGate.

![\[Oracle RAC in a primary Region and two secondary Regions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/11d4265b-31af-4ebf-a766-24196193ee01/images/9fc740fc-d339-422e-beaf-1f65690c9d14.png)


**Target technology stack**
+ A three-cluster Amazon Aurora global database based on Aurora PostgreSQL–Compatible, with one cluster in the primary Region and two clusters in different secondary Regions

**Target architecture**

![\[Amazon Aurora in a primary Region and two secondary Regions.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/11d4265b-31af-4ebf-a766-24196193ee01/images/8e3deca9-03f2-437c-9341-795ac17e2b42.png)


## Tools
<a name="emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database-tools"></a>

**AWS services**
+ [Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) is a fully managed, ACID-compliant relational database engine that helps you set up, operate, and scale PostgreSQL deployments.
+ [Amazon Aurora global databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html) span multiple AWS Regions, providing low latency global reads and fast recovery from the rare outage that might affect an entire AWS Region.

## Epics
<a name="emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database-epics"></a>

### Add Regions with reader DB instances
<a name="add-regions-with-reader-db-instances"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Attach one or more secondary Aurora clusters. | On the AWS Management Console, choose Amazon Aurora. Select the primary cluster, choose **Actions**, and choose **Add region** from the dropdown list. | DBA | 
| Select the instance class. | You can change the instance class of the secondary cluster. However, we recommend keeping it the same as the primary cluster instance class. | DBA | 
| Add the third Region. | Repeat the steps in this epic to add a cluster in the third Region. | DBA | 

### Fail over the Aurora global database
<a name="fail-over-the-aurora-global-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the primary cluster from the Aurora global database. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database.html) An AWS CLI sketch of this step follows this table. | DBA | 
| Reconfigure your application to divert write traffic to the newly promoted cluster. | Modify the endpoint in the application with that of the newly promoted cluster. | DBA | 
| Stop issuing any write operations to the unavailable cluster. | Stop the application and any data manipulation language (DML) activity to the cluster that you removed. | DBA | 
| Create a new Aurora global database. | Now you can create an Aurora global database with the newly promoted cluster as the primary cluster. | DBA | 
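
If you script the failover instead of using the console, the detach-and-promote step maps to the `remove-from-global-cluster` AWS CLI command. The following is a sketch with hypothetical identifiers:

```
aws rds remove-from-global-cluster \
    --global-cluster-identifier my-global-cluster \
    --db-cluster-identifier arn:aws:rds:us-east-1:111122223333:cluster:my-secondary-cluster
```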

### Start the primary cluster
<a name="start-the-primary-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Select the primary cluster to be started from the global database. | On the Amazon Aurora console, in the Global Database setup, choose the primary cluster. | DBA | 
| Start the cluster. | On the **Actions** dropdown list, choose **Start**. This process might take some time. Refresh the screen to see the status, or check the **Status** column for the current state of the cluster after the operation is completed. | DBA | 

### Clean up the resources
<a name="clean-up-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the remaining secondary clusters. | After the failover pilot is completed, remove the secondary clusters from the global database. | DBA | 
| Delete the primary cluster. | Remove the cluster. | DBA | 

## Related resources
<a name="emulate-oracle-dr-by-using-a-postgresql-compatible-aurora-global-database-resources"></a>
+ [Using Amazon Aurora global databases](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database-detaching)
+ [Aurora PostgreSQL Disaster Recovery solutions using Amazon Aurora Global Database](https://aws.amazon.com/blogs/database/aurora-postgresql-disaster-recovery-solutions-using-amazon-aurora-global-database/) (blog post)

# Implement SHA1 hashing for PII data when migrating from SQL Server to PostgreSQL
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql"></a>

*Rajkumar Raghuwanshi and Jagadish Kantubugata, Amazon Web Services*

## Summary
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql-summary"></a>

This pattern describes how to implement Secure Hash Algorithm 1 (SHA1) hashing for email addresses when migrating from SQL Server to either Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL-Compatible. An email address is an example of *personally identifiable information* (PII). PII is information that, when viewed directly or paired with other related data, can be used to reasonably infer the identity of an individual. 

This pattern covers the challenges of maintaining consistent hash values across different database collations and character encodings, and provides a solution using PostgreSQL functions and triggers. Although this pattern focuses on SHA1 hashing, it can be adapted for other hashing algorithms supported by PostgreSQL's `pgcrypto` module. Always consider the security implications of your hashing strategy and consult with security experts if handling sensitive data.
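
At its core, the approach relies on the `digest()` function from the `pgcrypto` module; for example:

```
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- SHA1 digest of an uppercased email address, rendered as hex
SELECT encode(digest(upper('alejandro_rosalez@example.com'), 'sha1'), 'hex');
```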

## Prerequisites and limitations
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Source SQL Server database
+ Target PostgreSQL database (Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible)
+ PL/pgSQL coding expertise

**Limitations**
+ This pattern requires database-level collation changes based on use cases.
+ The performance impact on large datasets has not been evaluated.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ Microsoft SQL Server 2012 or later

## Architecture
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql-architecture"></a>

**Source technology stack**
+ SQL Server
+ .NET Framework

**Target technology stack**
+ PostgreSQL
+ `pgcrypto` extension

**Automation and scale**
+ Consider implementing the hashing function as a stored procedure for easier maintenance.
+ For large datasets, evaluate performance and consider batch processing or indexing strategies.

## Tools
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql-tools"></a>

**AWS services**
+ [Amazon Aurora PostgreSQL-Compatible](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) is a fully managed, ACID-compliant relational database engine that helps you set up, operate, and scale PostgreSQL deployments.
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.
+ [Amazon Relational Database Service (Amazon RDS) for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html) helps you set up, operate, and scale a PostgreSQL relational database in the AWS Cloud.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that’s compatible with the target database.

**Other tools**
+ [pgAdmin](https://www.pgadmin.org/) is an open source management tool for PostgreSQL. It provides a graphical interface that helps you create, maintain, and use database objects.
+ [SQL Server Management Studio (SSMS)](https://learn.microsoft.com/en-us/ssms/sql-server-management-studio-ssms) is an integrated environment for managing any SQL infrastructure.

## Best practices
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql-best-practices"></a>
+ Use appropriate collation settings for handling special characters on the target database side.
+ Test thoroughly with a variety of email addresses, including addresses with non-ASCII characters.
+ Maintain consistency in uppercase and lowercase handling between the application and database layers.
+ Benchmark performance of queries using the hashed values.

## Epics
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql-epics"></a>

### Analyze source hashing implementation
<a name="analyze-source-hashing-implementation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review SQL Server code. | To review SQL Server code that generates SHA1 hashes, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) | Data engineer, DBA, App developer | 
| Document the hashing algorithm and data transformations. | To document the exact hashing algorithm and data transformations, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) | App developer, Data engineer, DBA | 

### Create PostgreSQL hashing function
<a name="create-postgresql-hashing-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create `pgcrypto` extension. | To create the `pgcrypto` extension, use `pgAdmin/psql` to run the following command:<pre>CREATE EXTENSION pgcrypto;</pre> | DBA, Data engineer | 
| Implement a PostgreSQL function. | Implement the following PostgreSQL function to replicate the SQL Server hashing logic. At a high level, this function uses the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html)<pre>CREATE OR REPLACE FUNCTION utility.hex_to_bigint ( <br />     par_val character varying, <br />     par_upper character varying DEFAULT 'lower'::character varying) <br />RETURNS bigint <br />LANGUAGE 'plpgsql' <br />AS $BODY$ <br />DECLARE <br />    retnumber bigint; <br />    digest_bytes bytea;<br />BEGIN <br />    if lower(par_upper) = 'upper' <br />    then <br />        digest_bytes := digest(upper(par_val), 'sha1');<br />    else <br />        digest_bytes := digest((par_val), 'sha1');<br />    end if; <br />    retnumber := ('x' || encode(substring(digest_bytes, length(digest_bytes)-10+1), 'hex'))::bit(64)::bigint; <br />    RETURN retnumber; <br />END; <br />$BODY$;</pre> | Data engineer, DBA, App developer | 
| Test the function. | To test the function, use sample data from SQL Server to verify matching hash values. Run the following command:<pre>select 'alejandro_rosalez@example.com' as Email, utility.hex_to_bigint('alejandro_rosalez@example.com','upper') as HashValue;<br /><br />--OUTPUT<br />/*<br />email 	        hashvalue<br />"alejandro_rosalez@example.com"	451397011176045063<br />*/<br /></pre> | App developer, DBA, Data engineer | 

### Implement triggers for automatic hashing
<a name="implement-triggers-for-automatic-hashing"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create triggers on relevant tables. | To create triggers on relevant tables to automatically generate hash values on insert or update, run the following command:<pre>CREATE OR REPLACE FUNCTION update_email_hash() <br />RETURNS TRIGGER <br />AS $$ <br />BEGIN <br />    NEW.email_hash = utility.hex_to_bigint(NEW.email, 'upper'); <br />    RETURN NEW; <br />END; <br />$$ LANGUAGE plpgsql;</pre><pre>CREATE TRIGGER email_hash_trigger BEFORE INSERT OR UPDATE ON users FOR EACH ROW EXECUTE FUNCTION update_email_hash();</pre> | App developer, Data engineer, DBA | 
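
With the trigger in place, inserts populate the hash automatically. The following is a minimal usage sketch, assuming that the `users` table has `email` and `email_hash BIGINT` columns, as in the trigger function above:

```
INSERT INTO users (email) VALUES ('alejandro_rosalez@example.com');

-- The BEFORE INSERT trigger has already populated email_hash
SELECT email, email_hash FROM users;
```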

### Migrate existing data
<a name="migrate-existing-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Develop a migration script or use AWS DMS.  | Develop a migration script or use AWS DMS to populate hash values for existing data (including hash values stored as `BIGINT` in the source system). Complete the following tasks:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) (A backfill sketch follows this table.) | Data engineer, App developer, DBA | 
| Use the new PostgreSQL hashing function. | To use the new PostgreSQL hashing function to ensure consistency, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) | App developer, DBA, DevOps engineer | 
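
If you backfill with a script instead of AWS DMS, the following is a minimal sketch that uses the function defined earlier:

```
-- Recompute the hash for rows that were migrated without one
UPDATE users
SET email_hash = utility.hex_to_bigint(email, 'upper')
WHERE email_hash IS NULL;
```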

### Update application queries
<a name="update-application-queries"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify application queries. | To identify application queries that use hashed values, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) | App developer, DBA, Data engineer | 
| Modify queries. | If necessary, modify queries to use the new PostgreSQL hashing function. Do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) | App developer, DBA, Data engineer | 
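
A typical rewritten lookup computes the hash inline with the same function; for example:

```
-- Look up a user by hashed email instead of the plain value
SELECT *
FROM users
WHERE email_hash = utility.hex_to_bigint('alejandro_rosalez@example.com', 'upper');
```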

### Test and validate
<a name="test-and-validate"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Perform testing. | To perform thorough testing with a subset of production data, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) | App developer, Data engineer, DBA | 
| Validate that hash values match. | To validate that hash values match between SQL Server and PostgreSQL, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) | App developer, Data engineer, DBA | 
| Verify application functionality. | To verify application functionality by using the migrated data and the new hashing implementation, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql.html) | App developer, DBA, Data engineer | 

## Troubleshooting
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Hash values don’t match. | Verify character encodings and collations between source and target. For more information, see [Manage collation changes in PostgreSQL on Amazon Aurora and Amazon RDS](https://aws.amazon.com/blogs/database/manage-collation-changes-in-postgresql-on-amazon-aurora-and-amazon-rds/) (AWS Blog). | 

## Related resources
<a name="implement-sha1-hashing-for-pii-data-when-migrating-from-sql-server-to-postgresql-resources"></a>

**AWS Blogs**
+ [Manage collation changes in PostgreSQL on Amazon Aurora and Amazon RDS](https://aws.amazon.com/blogs/database/manage-collation-changes-in-postgresql-on-amazon-aurora-and-amazon-rds/)
+ [Migrate SQL Server to Amazon Aurora PostgreSQL using best practices and lessons learned from the field](https://aws.amazon.com/blogs/database/migrate-sql-server-to-amazon-aurora-postgresql-using-best-practices-and-lessons-learned-from-the-field/)

**Other resources**
+ [PostgreSQL pgcrypto module](https://www.postgresql.org/docs/current/pgcrypto.html) (PostgreSQL documentation)
+ [PostgreSQL trigger functions](https://www.postgresql.org/docs/current/plpgsql-trigger.html) (PostgreSQL documentation)
+ [SQL Server HASHBYTES function](https://docs.microsoft.com/en-us/sql/t-sql/functions/hashbytes-transact-sql) (Microsoft documentation)

# Incrementally migrate from Amazon RDS for Oracle to Amazon RDS for PostgreSQL using Oracle SQL Developer and AWS SCT
<a name="incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct"></a>

*Pinesh Singal, Amazon Web Services*

## Summary
<a name="incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct-summary"></a>

Many migration strategies and approaches run in multiple phases that can last from a few weeks to several months. During this time, you can experience delays because of patching or upgrades in the source Oracle DB instances that you want to migrate to PostgreSQL DB instances. To avoid this situation, we recommend that you incrementally migrate the remaining Oracle database code to PostgreSQL database code.

This pattern provides an incremental migration strategy with no downtime for a multi-terabyte Oracle DB instance that has a high number of transactions performed after your initial migration and that must be migrated to a PostgreSQL database. You can use this pattern’s step-by-step approach to incrementally migrate an Amazon Relational Database Service (Amazon RDS) for Oracle DB instance to an Amazon RDS for PostgreSQL DB instance without signing in to the Amazon Web Services (AWS) Management Console.

The pattern uses [Oracle SQL Developer](https://www.oracle.com/database/technologies/appdev/sqldeveloper-landing.html) to find the differences between two schemas in the source Oracle database. You then use AWS Schema Conversion Tool (AWS SCT) to convert the Amazon RDS for Oracle database schema objects to Amazon RDS for PostgreSQL database schema objects. You can then run a Python script in the Windows Command Prompt to create AWS SCT objects for the incremental changes to the source database objects.

**Note**  
Before you migrate your production workloads, we recommend that you run a proof of concept (PoC) for this pattern's approach in a testing or non-production environment.

## Prerequisites and limitations
<a name="incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing Amazon RDS for Oracle DB instance. 
+ An existing Amazon RDS for PostgreSQL DB instance.
+ AWS SCT, installed and configured with JDBC drivers for the Oracle and PostgreSQL database engines. For more information, see [Installing AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html#CHAP_Installing.Procedure) and [Installing the required database drivers](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html#CHAP_Installing.JDBCDrivers) in the AWS SCT documentation.
+ Oracle SQL Developer, installed and configured. For more information, see the [Oracle SQL Developer](https://www.oracle.com/database/technologies/appdev/sqldeveloper-landing.html) documentation.
+ The `incremental-migration-sct-sql.zip` file (attached), downloaded to your local computer.

**Limitations**
+ The minimum requirements for your source Amazon RDS for Oracle DB instance are:
  + Oracle versions 10.2 and later (for versions 10.x), 11g (versions 11.2.0.3.v1 and later) and up to 12.2, and 18c for the Enterprise, Standard, Standard One, and Standard Two editions
+ The minimum requirements for your target Amazon RDS for PostgreSQL DB instance are:  
  + PostgreSQL versions 9.4 and later (for versions 9.x), 10.x, and 11.x
+ This pattern uses Oracle SQL Developer. Your results might vary if you use other tools to find and export schema differences.
+ The [SQL scripts](https://docs.oracle.com/database/121/AEUTL/sql_rep.htm#AEUTL191) generated by Oracle SQL Developer can raise transformation errors, which means that you need to perform a manual migration.
+ If the AWS SCT source and target test connections fail, make sure that you configure the JDBC driver versions and inbound rules for the virtual private cloud (VPC) security group to accept incoming traffic.

**Product versions**
+ Amazon RDS for Oracle DB instance version 12.1.0.2 (version 10.2 and later)
+ Amazon RDS for PostgreSQL DB instance version 11.5 (version 9.4 and later)
+ Oracle SQL Developer version 19.1 and later
+ AWS SCT version 1.0.632 and later

## Architecture
<a name="incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct-architecture"></a>

**Source technology stack  **
+ Amazon RDS for Oracle DB instance

**Target technology stack  **
+ Amazon RDS for PostgreSQL DB instance

**Source and target architecture**

The following diagram shows the migration of an Amazon RDS for Oracle DB instance to an Amazon RDS for PostgreSQL DB instance.

![\[Migration workflow from Amazon RDS for Oracle to Amazon RDS for PostgreSQL.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/c7eed517-e496-4e8e-a520-c1e43397419e/images/bfbbed5e-db13-4a22-99aa-1a17f00f5faf.png)


The diagram shows the following migration workflow:

1. Open Oracle SQL Developer and connect to the source and target databases.

1. Generate a [diff report](https://docs.oracle.com/cd/E93130_01/rules_palette/Content/Diff%20Reports/Detailed_Diff_Reports.htm) and then generate the SQL scripts file for the schema difference objects. For more information about diff reports, see [Detailed diff reports](https://docs.oracle.com/cd/E93130_01/rules_palette/Content/Diff%20Reports/Detailed_Diff_Reports.htm) in the Oracle documentation.

1. Configure AWS SCT and run the Python code.

1. The Python script uses AWS SCT to convert the SQL scripts file from Oracle format to PostgreSQL format.

1. Run the SQL scripts file on the target PostgreSQL DB instance. 

**Automation and scale**

You can automate this migration by extending the Python script with additional parameters and security-related changes so that it handles multiple functions in a single program.

## Tools
<a name="incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct-tools"></a>
+ [AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) – AWS Schema Conversion Tool (AWS SCT) converts your existing database schema from one database engine to another.
+ [Oracle SQL Developer](https://www.oracle.com/database/technologies/appdev/sqldeveloper-landing.html) – Oracle SQL Developer is an integrated development environment (IDE) that simplifies the development and management of Oracle databases in both traditional and cloud-based deployments.

**Code **

The `incremental-migration-sct-sql.zip` file (attached) contains the complete source code for this pattern.

## Epics
<a name="incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct-epics"></a>

### Create the SQL scripts file for the source database schema differences
<a name="create-the-sql-scripts-file-for-the-source-database-schema-differences"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run Database Diff in Oracle SQL Developer.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct.html) | DBA | 
| Generate the SQL scripts file. | Choose **Generate Script** to generate the differences in the SQL files. This generates the SQL scripts file that AWS SCT uses to convert your database from Oracle to PostgreSQL. | DBA | 

### Use the Python script to create the target DB objects in AWS SCT
<a name="use-the-python-script-to-create-the-target-db-objects-in-aws-sct"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS SCT with the Windows Command Prompt.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct.html)<pre>#source_vendor,source_hostname,source_dbname,source_user,source_pwd,source_schema,source_port,source_sid,target_vendor,target_hostname,target_user,target_pwd,target_dbname,target_port<br /><br />ORACLE,myoracledb.cokmvis0v46q.us-east-1.rds.amazonaws.com,ORCL,orcl,orcl1234,orcl,1521,ORCL,POSTGRESQL,mypgdbinstance.cokmvis0v46q.us-east-1.rds.amazonaws.com,pguser,pgpassword,pgdb,5432</pre>Modify the AWS SCT configuration parameters according to your requirements, and then copy the SQL scripts file into the `input` subdirectory of your working directory. | DBA | 
| Run the Python script.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct.html) | DBA | 
| Create the objects in Amazon RDS for PostgreSQL. | Run the SQL files to create the objects in your Amazon RDS for PostgreSQL DB instance. | DBA | 

## Related resources
<a name="incrementally-migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-using-oracle-sql-developer-and-aws-sct-resources"></a>
+ [Oracle on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html) 
+ [PostgreSQL on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html)
+ [Using the AWS SCT user interface](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html)
+ [Using Oracle as a source for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html)

## Attachments
<a name="attachments-c7eed517-e496-4e8e-a520-c1e43397419e"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/c7eed517-e496-4e8e-a520-c1e43397419e/attachments/attachment.zip)

# Load BLOB files into TEXT by using file encoding in Aurora PostgreSQL-Compatible
<a name="load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible"></a>

*Bhanu Ganesh Gudivada and Jeevan Shetty, Amazon Web Services*

## Summary
<a name="load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible-summary"></a>

Often during migration, there are cases where you have to process unstructured and structured data that is loaded from files on a local file system. The data might also be in a character set that differs from the database character set.

These files hold the following types of data:
+ **Metadata** – This data describes the file structure.
+ **Semi-structured data** – These are textual strings in a specific format, such as JSON or XML. You might be able to make assertions about such data, such as "will always start with '<' " or "does not contain any newline characters."
+ **Full text** – This data usually contains all types of characters, including newline and quote characters. It might also consist of multibyte characters in UTF-8.
+ **Binary data** – This data might contain bytes or combinations of bytes, including nulls and end-of-file markers.

Loading a mixture of these types of data can be a challenge.

The pattern can be used with on-premises Oracle databases, Oracle databases that are on Amazon Elastic Compute Cloud (Amazon EC2) instances on the Amazon Web Services (AWS) Cloud, and Amazon Relational Database Service (Amazon RDS) for Oracle databases. As an example, this pattern uses Amazon Aurora PostgreSQL-Compatible Edition.

In Oracle Database, with the help of a `BFILE` (binary file) pointer, the `DBMS_LOB` package, and Oracle system functions, you can load data from a file and convert it to a CLOB with character encoding. Because PostgreSQL does not support the BLOB data type, these functions must be converted to PostgreSQL-compatible scripts when you migrate to an Amazon Aurora PostgreSQL-Compatible Edition database.
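
As context for that conversion, the following is a minimal PL/SQL sketch of the Oracle-side technique that must be replaced. It assumes a hypothetical directory object named `TEMPLATE_DIR` and uses the example file name from this pattern; `DBMS_LOB.LOADCLOBFROMFILE` performs the load and the character-set conversion:

```
DECLARE
  l_bfile  BFILE := BFILENAME('TEMPLATE_DIR', 'employee.salary.event.notification.email.vm');
  l_clob   CLOB;
  l_dest   INTEGER := 1;
  l_src    INTEGER := 1;
  l_lang   INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn   INTEGER;
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  -- Load the whole file into the CLOB, converting from UTF-8
  DBMS_LOB.LOADCLOBFROMFILE(l_clob, l_bfile, DBMS_LOB.LOBMAXSIZE,
                            l_dest, l_src, NLS_CHARSET_ID('AL32UTF8'),
                            l_lang, l_warn);
  DBMS_LOB.CLOSE(l_bfile);
END;
/
```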

This pattern provides two approaches for loading a file into a single database column in an Amazon Aurora PostgreSQL-Compatible database:
+ Approach 1 – You import data from your Amazon Simple Storage Service (Amazon S3) bucket by using the `table_import_from_s3` function of the `aws_s3` extension with the encode option.
+ Approach 2 – You encode to hexadecimal outside of the database, and then you decode to view `TEXT` inside the database.

We recommend using Approach 1 because Aurora PostgreSQL-Compatible has direct integration with the `aws_s3` extension.

This pattern uses the example of loading a flat file that contains an email template, which has multibyte characters and distinct formatting, into an Amazon Aurora PostgreSQL-Compatible database.

## Prerequisites and limitations
<a name="load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible-prereqs"></a>

**Prerequisites **
+ An active AWS account
+ An Amazon RDS instance or an Aurora PostgreSQL-Compatible instance
+ A basic understanding of SQL and Relational Database Management System (RDBMS)
+ An Amazon Simple Storage Service (Amazon S3) bucket
+ Knowledge of system functions in Oracle and PostgreSQL
+ RPM Package HexDump-XXD-0.1.1 (included with Amazon Linux 2)
**Note**  
Amazon Linux 2 is nearing end of support. For more information, see the [Amazon Linux 2 FAQs](http://aws.amazon.com/amazon-linux-2/faqs/).

**Limitations **
+ For the `TEXT` data type, the longest possible character string that can be stored is about 1 GB.

**Product versions**
+ Aurora supports the PostgreSQL versions listed in [Amazon Aurora PostgreSQL updates](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQLReleaseNotes/AuroraPostgreSQL.Updates.html).

## Architecture
<a name="load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible-architecture"></a>

**Target technology stack  **
+ Aurora PostgreSQL-Compatible

**Target architecture**

*Approach 1 – Using aws\_s3.table\_import\_from\_s3*

From an on-premises server, a file containing an email template with multibyte characters and custom formatting is transferred to Amazon S3. The custom database function provided by this pattern uses the `aws_s3.table_import_from_s3` function with `file_encoding` to load files into the database and return query results as the `TEXT` data type.

![\[Four-step process from the on-premises server to the TEXT output from the Aurora database.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/cbf63cac-dcea-4e18-ab4f-c4f6296f60e7/images/9c46b385-e8a0-4e50-b856-d522c44d79e3.png)


1. Files are transferred to the staging S3 bucket.

1. Files are uploaded to the Amazon Aurora PostgreSQL-Compatible database.

1. Using the pgAdmin client, the custom function `load_file_into_clob` is deployed to the Aurora database.

1. The custom function internally uses `table_import_from_s3` with `file_encoding`. The output from the function is obtained by using `array_to_string` and `array_agg` and is returned as `TEXT`.
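
As a minimal sketch of the call that the custom function wraps (the full function is in the Additional information section), the import can also be run directly. The staging table, bucket, Region, and file names below are the examples used in this pattern, so substitute your own:

```
SELECT aws_s3.table_import_from_s3(
   'file_upload_hex',   -- staging table
   'template',          -- column that receives each line of the file
   '(format text, delimiter ''&'', encoding ''UTF8'')',
   aws_commons.create_s3_uri(
      'aws-s3-import-test',                           -- example S3 bucket
      'employee.salary.event.notification.email.vm',  -- example file
      'us-west-1'));                                  -- example Region
```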

*Approach 2 – Encoding to hexadecimal outside of the database and decoding to view TEXT inside the database*

A file from an on-premises server or a local file system is converted into a hex dump. Then the file is imported into PostgreSQL as a `TEXT` field.

![\[Three-step process using Hex dump.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/cbf63cac-dcea-4e18-ab4f-c4f6296f60e7/images/563038ca-f890-4874-85df-d0f82d99800a.png)


1. Convert the file to a hex dump in the command line by using the `xxd -p` option.

1. Upload the hex dump files into Aurora PostgreSQL-Compatible by using the `\copy` option, and then decode the hex dump files to binary.

1. Encode the binary data to return as `TEXT`.
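
As a minimal sketch of these three steps, assuming a hypothetical staging table named `email_template_hex` and the hex dump file created with `xxd -p`, the database side looks like the following:

```
-- Staging table that holds the single-line hex dump produced by xxd -p
CREATE TABLE email_template_hex (template text);

-- In psql, load the hex dump file that you created earlier:
-- \copy email_template_hex FROM '/path/employee.salary.event.notification.email.vm.hex'

-- Decode the hex string to binary, then render the bytes as UTF-8 TEXT
SELECT convert_from(decode(template, 'hex'), 'UTF8') AS email_template
FROM email_template_hex;
```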

## Tools
<a name="load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible-tools"></a>

**AWS services**
+ [Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) is a fully managed, ACID-compliant relational database engine that helps you set up, operate, and scale PostgreSQL deployments.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.

**Other tools**
+ [pgAdmin4](https://www.pgadmin.org/) is an open source administration and development platform for PostgreSQL. pgAdmin4 can be used on Linux, Unix, macOS, and Windows to manage PostgreSQL.

## Epics
<a name="load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible-epics"></a>

### Approach 1: Import data from Amazon S3 to Aurora PostgreSQL-Compatible
<a name="approach-1-import-data-from-amazon-s3-to-aurora-postgresql-compatible"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch an EC2 instance. | For instructions on launching an instance, see [Launch your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html). | DBA | 
| Install the PostgreSQL client pgAdmin tool. | Download and install [pgAdmin](https://www.pgadmin.org/download/). | DBA | 
| Create an IAM policy. | Create an AWS Identity and Access Management (IAM) policy named `aurora-s3-access-pol` that grants access to the S3 bucket where the files will be stored. Use the following code, replacing `<bucket-name>` with the name of your S3 bucket.<pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Effect": "Allow",<br />            "Action": [<br />                "s3:GetObject",<br />                "s3:AbortMultipartUpload",<br />                "s3:DeleteObject",<br />                "s3:ListMultipartUploadParts",<br />                "s3:PutObject",<br />                "s3:ListBucket"<br />            ],<br />            "Resource": [<br />                "arn:aws:s3:::<bucket-name>/*",<br />                "arn:aws:s3:::<bucket-name>"<br />            ]<br />        }<br />    ]<br />}</pre> | DBA | 
| Create an IAM role for object import from Amazon S3 to Aurora PostgreSQL-Compatible. | Use the following code to create an IAM role named `aurora-s3-import-role` with the [AssumeRole](https://docs.amazonaws.cn/en_us/STS/latest/APIReference/API_AssumeRole.html) trust relationship. `AssumeRole` allows Aurora to access other AWS services on your behalf.<pre>{<br />  "Version": "2012-10-17",<br />  "Statement": [<br />    {<br />      "Effect": "Allow",<br />      "Principal": {<br />        "Service": "rds.amazonaws.com"<br />      },<br />      "Action": "sts:AssumeRole"<br />    }<br />  ]<br />}</pre> | DBA | 
| Associate the IAM role to the cluster. | To associate the IAM role with the Aurora PostgreSQL-Compatible database cluster, run the following AWS CLI command. Change `<Account-ID>` to the ID of the AWS account that hosts the Aurora PostgreSQL-Compatible database. This enables the Aurora PostgreSQL-Compatible database to access the S3 bucket.<pre>aws rds add-role-to-db-cluster --db-cluster-identifier aurora-postgres-cl<br />--feature-name s3Import --role-arn arn:aws:iam::<Account-ID>:role/aurora-s3-import-role</pre> | DBA | 
| Upload the example to Amazon S3. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible.html) | DBA, App owner | 
| Deploy the custom function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible.html) | App owner, DBA | 
| Run the custom function for importing the data into the database. | Run the following SQL command, replacing the bucket name, Region, and file name with your own values.<pre>select load_file_into_clob('aws-s3-import-test'::text,'us-west-1'::text,'employee.salary.event.notification.email.vm'::text);</pre>The command loads the file from Amazon S3 and returns the output as `TEXT`. | App owner, DBA | 

### Approach 2: Convert the template file into a hex dump in a local Linux system
<a name="approach-2-convert-the-template-file-into-a-hex-dump-in-a-local-linux-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Convert the template file into a hex dump. | The Hexdump utility displays the contents of binary files in hexadecimal, decimal, octal, or ASCII. The `hexdump` command is part of the `util-linux` package and comes preinstalled in Linux distributions. The Hexdump RPM package is part of Amazon Linux 2 as well. (Note: Amazon Linux 2 is nearing end of support. For more information, see the [Amazon Linux 2 FAQs](http://aws.amazon.com/amazon-linux-2/faqs/).) To convert the file contents into a hex dump, run the following shell command.<pre>xxd -p </path/file.vm> | tr -d '\n' > </path/file.hex></pre>Replace the path and file with the appropriate values, as shown in the following example.<pre>xxd -p employee.salary.event.notification.email.vm | tr -d '\n' > employee.salary.event.notification.email.vm.hex</pre> | DBA | 
| Load the hexdump file into the database schema. | Use the following commands to load the hexdump file into the Aurora PostgreSQL-Compatible database. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible.html) | DBA | 

## Related resources
<a name="load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible-resources"></a>

**References**
+ [Using a PostgreSQL database as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html)
+ [Oracle Database 19c to Amazon Aurora with PostgreSQL Compatibility (12.4) Migration Playbook](https://d1.awsstatic.com/whitepapers/Migration/oracle-database-amazon-aurora-postgresql-migration-playbook-12.4.pdf)
+ [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html)
+ [Associating an IAM role with an Amazon Aurora MySQL DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.html)
+ [pgAdmin](https://www.pgadmin.org/)

**Tutorials**
+ [Getting Started with Amazon RDS](https://aws.amazon.com/rds/getting-started/)
+ [Migrate from Oracle to Amazon Aurora](https://aws.amazon.com/getting-started/projects/migrate-oracle-to-amazon-aurora/)

## Additional information
<a name="load-blob-files-into-text-by-using-file-encoding-in-aurora-postgresql-compatible-additional"></a>

**load\_file\_into\_clob custom function**

```
CREATE OR REPLACE FUNCTION load_file_into_clob(
    s3_bucket_name text,
    s3_bucket_region text,
    file_name text,
    file_delimiter character DEFAULT '&'::bpchar,
    file_encoding text DEFAULT 'UTF8'::text)
    RETURNS text
    LANGUAGE 'plpgsql'
    COST 100
    VOLATILE PARALLEL UNSAFE
AS $BODY$
DECLARE
    blob_data BYTEA;
    clob_data TEXT;
    l_table_name CHARACTER VARYING(50) := 'file_upload_hex';
    l_column_name CHARACTER VARYING(50) := 'template';
    l_return_text TEXT;
    l_option_text CHARACTER VARYING(150);
    l_sql_stmt CHARACTER VARYING(500);
        
BEGIN

    -- Create a temporary staging table; id_serial preserves the original line order
    EXECUTE format ('CREATE TEMPORARY TABLE %I (%I text, id_serial serial)', l_table_name, l_column_name);

    -- Build the import options string, for example: (format text, delimiter '&', encoding 'UTF8')
    l_sql_stmt := 'select ''(format text, delimiter ''''' || file_delimiter || ''''', encoding ''''' || file_encoding ||  ''''')'' ';

    EXECUTE FORMAT(l_sql_stmt)
    INTO l_option_text;

    -- Import the file from Amazon S3 into the staging table, one row per line
    EXECUTE FORMAT('SELECT aws_s3.table_import_from_s3($1,$2,$6, aws_commons.create_s3_uri($3,$4,$5))')
    INTO l_return_text
    USING l_table_name, l_column_name, s3_bucket_name, file_name, s3_bucket_region, l_option_text;

    -- Reassemble the imported lines, in order, into a single TEXT value
    EXECUTE format('select array_to_string(array_agg(%I order by id_serial),E''\n'') from %I', l_column_name, l_table_name)
    INTO clob_data;

    -- Clean up the staging table (name matches l_table_name)
    drop table file_upload_hex;
    
    RETURN clob_data;
END;
$BODY$;
```

**Email template**

```
######################################################################################
##                                                                                    ##
##    johndoe Template Type: email                                                    ##
##    File: johndoe.salary.event.notification.email.vm                                ##
##    Author: Aimée Étienne    Date 1/10/2021                                                ##
##  Purpose: Email template used by EmplmanagerEJB to inform a johndoe they         ##
##        have been given access to a salary event                                    ##
##    Template Attributes:                                                             ##
##        invitedUser - PersonDetails object for the invited user                        ##
##        salaryEvent - OfferDetails object for the event the user was given access    ##
##        buyercollege - CompDetails object for the college owning the salary event    ##
##        salaryCoordinator - PersonDetails of the salary coordinator for the event    ##
##        idp - Identity Provider of the email recipient                                ##
##        httpWebRoot - HTTP address of the server                                    ##
##                                                                                    ##
######################################################################################

$!invitedUser.firstname $!invitedUser.lastname,

Ce courriel confirme que vous avez ete invite par $!salaryCoordinator.firstname $!salaryCoordinator.lastname de $buyercollege.collegeName a participer a l'evenement "$salaryEvent.offeringtitle" sur johndoeMaster Sourcing Intelligence.

Votre nom d'utilisateur est $!invitedUser.username

Veuillez suivre le lien ci-dessous pour acceder a l'evenement.

${httpWebRoot}/myDashboard.do?idp=$!{idp}

Si vous avez oublie votre mot de passe, utilisez le lien "Mot de passe oublie" situe sur l'ecran de connexion et entrez votre nom d'utilisateur ci-dessus.

Si vous avez des questions ou des preoccupations, nous vous invitons a communiquer avec le coordonnateur de l'evenement $!salaryCoordinator.firstname $!salaryCoordinator.lastname au ${salaryCoordinator.workphone}.

*******

johndoeMaster Sourcing Intelligence est une plateforme de soumission en ligne pour les equipements, les materiaux et les services.

Si vous avez des difficultes ou des questions, envoyez un courriel a support@johndoeMaster.com pour obtenir de l'aide.
```

# Migrate Amazon RDS for Oracle to Amazon RDS for PostgreSQL with AWS SCT and AWS DMS using AWS CLI and CloudFormation
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation"></a>

*Pinesh Singal, Amazon Web Services*

## Summary
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation-summary"></a>

This pattern shows how to migrate a multi-terabyte [Amazon Relational Database Service (Amazon RDS) for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html) DB instance to an [Amazon RDS for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html) DB instance by using the AWS Command Line Interface (AWS CLI). The approach provides minimal downtime and doesn’t require signing in to the AWS Management Console.

This pattern helps you avoid the manual configurations and individual migrations that you would otherwise perform in the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) consoles. The solution sets up a one-time configuration for multiple databases and performs the migrations by using AWS SCT and AWS DMS in the AWS CLI.

The pattern uses AWS SCT to convert database schema objects from Amazon RDS for Oracle to Amazon RDS for PostgreSQL, and then uses AWS DMS to migrate the data. Using Python scripts with the AWS CLI, you create AWS SCT objects and AWS DMS tasks with a CloudFormation template.

## Prerequisites and limitations
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation-prereqs"></a>

**Prerequisites **
+ An active AWS account.
+ An existing Amazon RDS for Oracle DB instance.
+ An existing Amazon RDS for PostgreSQL DB instance. 
+ An Amazon Elastic Compute Cloud (Amazon EC2) instance or local machine with Windows or Linux OS for running scripts.
+ An understanding of the following AWS DMS migration task types: `full-load`, `cdc`, `full-load-and-cdc`.  For more information, see [Creating a task](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html) in the AWS DMS documentation. 
+ AWS SCT, installed and configured with Java Database Connectivity (JDBC) drivers for Oracle and PostgreSQL database engines. For more information, see [Installing and configuring AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html#CHAP_Installing.Procedure) in the AWS SCT documentation. 
+ The `AWSSchemaConversionToolBatch.jar` file from the installed AWS SCT folder, copied to your working directory.
+ The `cli-sct-dms-cft.zip` file (attached), downloaded and extracted in your working directory.
+ The most recent AWS DMS replication instance engine version. For more information, see [How do I create an AWS DMS replication instance](https://aws.amazon.com/premiumsupport/knowledge-center/create-aws-dms-replication-instance/) in the AWS Support documentation and [AWS DMS release notes](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html). 
+ AWS CLI version 2, installed and configured with your access key ID, secret access key, and default AWS Region name for the EC2 instance or OS where the scripts are run. For more information, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Configuring settings for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) in the AWS CLI documentation. 
+ Familiarity with CloudFormation templates. For more information, see [How CloudFormation works](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-overview.html) in the CloudFormation documentation. 
+ Python version 3, installed and configured on the EC2 instance or OS where the scripts are run. For more information, see the [Python documentation](https://docs.python.org/3/). 

**Limitations **
+ The minimum requirements for your source Amazon RDS for Oracle DB instance are: 
  + Oracle versions 12c (12.1.0.2, 12.2.0.1), 18c (18.0.0.0), and 19c (19.0.0.0) for the Enterprise, Standard, Standard One, and Standard Two editions.
  + Although Amazon RDS supports Oracle 18c (18.0.0.0), this version is on a deprecation path because Oracle no longer provides patches for 18c after the end-of-support date. For more information, see [Amazon RDS for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html#Oracle.Concepts.Deprecate.11204) in the Amazon RDS documentation.
  + Amazon RDS for Oracle 11g is no longer supported.
+ The minimum requirements for your target Amazon RDS for PostgreSQL DB instance are: 
  + PostgreSQL versions 9 (9.5 and 9.6), 10.x, 11.x, 12.x, and 13.x

**Product versions**
+ Amazon RDS for Oracle DB instance version 12.1.0.2 and later
+ Amazon RDS for PostgreSQL DB instance version 11.5 and later
+ AWS CLI version 2 
+ The latest version of AWS SCT
+ The latest version of Python 3

## Architecture
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation-architecture"></a>

**Source technology stack  **
+ Amazon RDS for Oracle

**Target technology stack  **
+ Amazon RDS for PostgreSQL

**Source and target architecture **

The following diagram shows the migration of an Amazon RDS for Oracle DB instance to an Amazon RDS for PostgreSQL DB instance using AWS DMS and Python scripts.

![\[Migrating RDS for Oracle DB instance to RDS for PostgreSQL DB instance using AWS DMS and Python.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5e041494-2e64-4f09-b6ec-0e0cba3a4972/images/77022e13-46fb-4aa8-ab49-85b0ca4c317a.png)


 

The diagram shows the following migration workflow:

1. The Python script uses AWS SCT to connect to the source and target DB instances.

1. The user starts AWS SCT with the Python script, converts the Oracle code to PostgreSQL code, and runs it on the target DB instance.

1. The Python script creates AWS DMS replication tasks for the source and target DB instances.

1. The user deploys Python scripts to start the AWS DMS tasks and then stops the tasks after the data migration is complete.

**Automation and scale**

You can automate this migration by adding parameters and security-related changes to your Python script to provide additional functionality. 

## Tools
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation-tools"></a>
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command line shell.
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions. This pattern converts the `.csv` input file to a `.json` input file by using a Python script. The `.json` file is used in AWS CLI commands to create a CloudFormation stack that creates multiple AWS DMS replication tasks with Amazon Resource Names (ARNs), migration types, task settings, and table mappings.
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores to the AWS Cloud or between combinations of cloud and on-premises setups. This pattern uses AWS DMS to create, start, and stop tasks with a Python script that runs on the command line, and to create the CloudFormation template.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that’s compatible with the target database. This pattern requires the `AWSSchemaConversionToolBatch.jar` file from the installed AWS SCT directory.

**Code**

The `cli-sct-dms-cft.zip` file (attached) contains the complete source code for this pattern.

## Epics
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation-epics"></a>

### Configure AWS SCT and create database objects in the AWS CLI
<a name="configure-awssct-and-create-database-objects-in-the-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS SCT to run from the AWS CLI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation.html) | DBA | 
| Run the `run_aws_sct.py` Python script. | Run the `run_aws_sct.py` Python script by using the following command:<pre>$ python run_aws_sct.py database_migration.txt</pre>The Python script converts the database objects from Oracle to PostgreSQL and creates SQL files in PostgreSQL format. The script also creates a database migration assessment report as a PDF file, which provides you with detailed recommendations and conversion statistics for database objects. | DBA | 
| Create objects in Amazon RDS for PostgreSQL. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation.html) | DBA | 

### Configure and create AWS DMS tasks by using the AWS CLI and CloudFormation
<a name="configure-and-create-dms-tasks-by-using-the-cli-and-cfn"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS DMS replication instance. | Sign in to the AWS Management Console, open the [AWS DMS console](https://console.aws.amazon.com/dms/v2/), and create a replication instance that is configured according to your requirements. For more information, see [Creating a replication instance](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Creating.html) in the AWS DMS documentation and [How do I create an AWS DMS replication instance](https://aws.amazon.com/premiumsupport/knowledge-center/create-aws-dms-replication-instance/) in the AWS Support documentation. | DBA | 
| Create the source endpoint. | On the AWS DMS console, choose **Endpoints** and then create a source endpoint for the Oracle database according to your requirements. The extra connection attribute must be `numberDataTypeScale` with a `-2` value. For more information, see [Creating source and target endpoints](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.Creating.html) in the AWS DMS documentation. | DBA | 
| Create the target endpoint. | On the AWS DMS console, choose **Endpoints** and then create a target endpoint for the PostgreSQL database according to your requirements.  For more information, see [Creating source and target endpoints](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.Creating.html) in the AWS DMS documentation. | DevOps engineer | 
| Configure the AWS DMS replication details to run from the AWS CLI. | Configure the AWS DMS source and target endpoints and replication details in the `dms-arn-list.txt` file with the source endpoint ARN, target endpoint ARN, and the replication instance ARN by using the following format:<pre>#sourceARN,targetARN,repARN<br />arn:aws:dms:us-east-1:123456789012:endpoint:EH7AINRUDZ5GOYIY6HVMXECMCQ<br />arn:aws:dms:us-east-1:123456789012:endpoint:HHJVUV57N7O3CQF4PJZKGIOYY5<br />arn:aws:dms:us-east-1:123456789012:rep:LL57N77AQQAHHJF4PJFHNEDZ5G</pre> | DBA | 
| Run the `dms-create-task.py` Python script to create the AWS DMS tasks. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation.html) | DBA | 
| Verify that AWS DMS tasks are ready. | On the AWS DMS console, check that your AWS DMS tasks are in `Ready` status in the **Status **section. | DBA | 

### Start and stop the AWS DMS tasks by using the AWS CLI
<a name="start-and-stop-the-dms-tasks-by-using-the-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Start the AWS DMS tasks. | Run the `dms-start-task.py` Python script by using the following command:<pre>$ python dms-start-task.py start '<cdc-start-datetime>'</pre>The start date and time must be in the `'DD-MON-YYYY'` or `'YYYY-MM-DDTHH:MI:SS'` format (for example, `'01-Dec-2019'` or `'2018-03-08T12:12:12'`).You can review the AWS DMS task status in the **Table statistics** tab on the **Tasks **page of the AWS DMS console. | DBA | 
| Validate the data. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation.html)For more information, see [AWS DMS data validation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html) in the AWS DMS documentation. | DBA | 
| Stop the AWS DMS tasks. | Run the Python script by using the following command:<pre>$ python dms-start-task.py stop</pre>AWS DMS tasks might stop with a `failed` status, depending on the validation status. For more information, see the next section. | DBA | 
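
In addition to AWS DMS data validation, a quick row-count spot check on both databases can confirm that the full load completed. Note that both of the following queries return statistics-based estimates, so use `COUNT(*)` on individual tables where you need exact numbers:

```
-- On the target PostgreSQL database: approximate row counts per table
SELECT relname AS table_name, n_live_tup AS approx_rows
FROM pg_stat_user_tables
ORDER BY relname;

-- On the source Oracle database: optimizer-statistics row counts
SELECT table_name, num_rows
FROM user_tables
ORDER BY table_name;
```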

## Troubleshooting
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| AWS SCT source and target test connections fail. | Configure the JDBC driver versions and VPC security group inbound rules to accept the incoming traffic. | 
| Source or target endpoint test run fails. | Check if the endpoint settings and replication instance are in `Available` status. Check if the endpoint connection status is `Successful`. For more information, see [How do I troubleshoot AWS DMS endpoint connectivity failures](https://aws.amazon.com/premiumsupport/knowledge-center/dms-endpoint-connectivity-failures/) in the AWS Support documentation. | 
| Full-load run fails. | Check if the source and target databases have matching data types and sizes. For more information, see [Troubleshooting migration tasks in AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Troubleshooting.html) in the AWS DMS documentation. | 
| You encounter validation run errors. | Check whether the table has a primary key, because tables without a primary key are not validated. If the table has a primary key and you still see errors, check that the extra connection attribute in the source endpoint has `numberDataTypeScale=-2`. For more information, see [Endpoint settings when using Oracle as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.ConnectionAttrib), [OracleSettings](https://docs.aws.amazon.com/dms/latest/APIReference/API_OracleSettings.html), and [Troubleshooting](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html#CHAP_Validating.Troubleshooting) in the AWS DMS documentation. | 
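
Because tables without a primary key are not validated, it can help to list them up front. The following standard `information_schema` query, run on the PostgreSQL target, returns user tables that lack a primary key:

```
-- Tables in user schemas that lack a PRIMARY KEY constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
  AND c.constraint_name IS NULL;
```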

## Related resources
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-with-aws-sct-and-aws-dms-using-aws-cli-and-aws-cloudformation-resources"></a>
+ [Installing and configuring AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html#CHAP_Installing.Procedure)
+ [Introduction to AWS DMS](https://www.youtube.com/watch?v=ouia1Sc5QGo) (video)
+ [Examples of CloudFormation stack operation commands for the AWS CLI and PowerShell](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cli.html)
+ [Navigating the user interface of the AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html)
+ [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ [Connecting to Oracle databases with AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html)
+ [Using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html) 
+ [Sources for data migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html)
+ [Targets for data migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.html)
+ [cloudformation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudformation/index.html) (AWS CLI documentation)
+ [create-stack](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudformation/create-stack.html) (AWS CLI documentation) 
+ [dms](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/index.html) (AWS CLI documentation) 

## Attachments
<a name="attachments-5e041494-2e64-4f09-b6ec-0e0cba3a4972"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/5e041494-2e64-4f09-b6ec-0e0cba3a4972/attachments/attachment.zip)

# Migrate Amazon RDS for Oracle to Amazon RDS for PostgreSQL in SSL mode by using AWS DMS
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms"></a>

*Pinesh Singal, Amazon Web Services*

## Summary
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-summary"></a>

This pattern provides guidance for migrating an Amazon Relational Database Service (Amazon RDS) for Oracle database instance to an Amazon RDS for PostgreSQL database on the Amazon Web Services (AWS) Cloud. To encrypt connections between the databases, the pattern uses certificate authority (CA) and SSL mode in Amazon RDS and AWS Database Migration Service (AWS DMS).

The pattern describes an online migration strategy with little or no downtime for a multi-terabyte Oracle source database with a high number of transactions. For data security, the pattern uses SSL when transferring the data.

This pattern uses AWS Schema Conversion Tool (AWS SCT) to convert the Amazon RDS for Oracle database schema to an Amazon RDS for PostgreSQL schema. Then the pattern uses AWS DMS to migrate data from the Amazon RDS for Oracle database to the Amazon RDS for PostgreSQL database.

## Prerequisites and limitations
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-prereqs"></a>

**Prerequisites **
+ An active AWS account 
+ Amazon RDS database certificate authority (CA) configured with ***rds-ca-rsa2048-g1*** only 
  + The ***rds-ca-2019*** certificate expired in August 2024.
  + The ***rds-ca-2015*** certificate expired on March 5, 2020.
+ AWS SCT
+ AWS DMS
+ pgAdmin
+ SQL tools (for example, SQL Developer or SQL\*Plus)

**Limitations **
+ Amazon RDS for Oracle database – The minimum requirement is Oracle version 19c for the Enterprise and Standard Two editions.
+ Amazon RDS for PostgreSQL database – The minimum requirement is PostgreSQL version 12 and later.

**Product versions**
+ Amazon RDS for Oracle database version 12.1.0.2 instance
+ Amazon RDS for PostgreSQL database version 11.5 instance

## Architecture
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-architecture"></a>

**Source technology stack  **
+ An Amazon RDS for Oracle database instance with version 12.1.0.2.v18.

**Target technology stack  **
+ AWS DMS
+ An Amazon RDS for PostgreSQL database instance with version 11.5.

**Target architecture**

The following diagram shows the data migration architecture between the Oracle (source) and PostgreSQL (target) databases. The architecture includes the following:
+ A virtual private cloud (VPC)
+ An Availability Zone
+ A private subnet
+ An Amazon RDS for Oracle database
+ An AWS DMS replication instance
+ An RDS for PostgreSQL database

To encrypt connections for source and target databases, CA and SSL mode must be enabled in Amazon RDS and AWS DMS.

![\[Data moving between RDS for Oracle and AWS DMS, and between AWS DMS and RDS for PostgreSQL.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7098e2a3-b456-4e14-8881-c97145aef483/images/55b50ff7-1e6a-4ff0-9bcd-2fd419d5316a.png)


## Tools
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-tools"></a>

**AWS services**
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.
+ [Amazon Relational Database Service (Amazon RDS) for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html) helps you set up, operate, and scale an Oracle relational database in the AWS Cloud.
+ [Amazon Relational Database Service (Amazon RDS) for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html) helps you set up, operate, and scale a PostgreSQL relational database in the AWS Cloud.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that’s compatible with the target database.

**Other services**
+ [pgAdmin](https://www.pgadmin.org/) is an open source management tool for PostgreSQL. It provides a graphical interface that helps you create, maintain, and use database objects.

## Best practices
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-best-practices"></a>

Amazon RDS provides new CA certificates as an AWS security best practice. For information about the new certificates and the supported AWS Regions, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html).

If your RDS instance is currently on CA certificate `rds-ca-2019`, and you want to upgrade to `rds-ca-rsa2048-g1`, follow the instructions in [Updating your CA certificate by modifying your DB instance or cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL-certificate-rotation.html#UsingWithRDS.SSL-certificate-rotation-updating) or [Updating your CA certificate by applying maintenance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL-certificate-rotation.html#UsingWithRDS.SSL-certificate-rotation-maintenance-update).

## Epics
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-epics"></a>

### Configure the Amazon RDS for Oracle instance
<a name="configure-the-amazon-rds-for-oracle-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Oracle database instance. | Sign in to your AWS account, open the AWS Management Console, and navigate to the Amazon RDS console. On the console, choose **Create database**, and then choose **Oracle**. | General AWS, DBA | 
| Configure security groups. | Configure inbound and outbound security groups. | General AWS | 
| Create an option group. | Create an option group in the same VPC and security group as the Amazon RDS for Oracle database. For **Option**, choose **SSL**. For **Port**, choose **2484** (for SSL connections). | General AWS | 
| Configure the option settings. | Use the following settings:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | General AWS | 
| Modify the RDS for Oracle DB instance. | Set the CA certificate as **rds-ca-rsa2048-g1**. Under **Option group**, attach the previously created option group. | DBA, General AWS | 
| Confirm that the RDS for Oracle DB instance is available. | Make sure that the Amazon RDS for Oracle database instance is up and running and that the database schema is accessible.To connect to the RDS for Oracle DB, use the `sqlplus` command from the command line.<pre>$ sqlplus orcl/****@myoracledb.cokmvis0v46q.us-east-1.rds.amazonaws.com:1521/ORCL<br />SQL*Plus: Release 12.1.0.2.0 Production on Tue Oct 15 18:11:07 2019<br />Copyright (c) 1982, 2016, Oracle.  All rights reserved.<br />Last Successful login time: Mon Dec 16 2019 23:17:31 +05:30<br />Connected to:<br />Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production<br />With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options<br />SQL></pre> | DBA | 
| Create objects and data in the RDS for Oracle database. | Create objects and insert data in the schema. | DBA | 
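
To confirm that a session is connected through the SSL listener on port 2484 rather than the clear-text listener, you can check the network protocol from inside the session:

```
-- Returns 'tcps' for SSL connections and 'tcp' otherwise
SELECT SYS_CONTEXT('USERENV', 'NETWORK_PROTOCOL') AS protocol FROM dual;
```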

### Configure the Amazon RDS for PostgreSQL instance
<a name="configure-the-amazon-rds-for-postgresql-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the RDS for PostgreSQL database. | On the Amazon RDS console **Create database** page, choose **PostgreSQL** to create an Amazon RDS for PostgreSQL database instance. | DBA, General AWS | 
| Configure security groups. | Configure inbound and outbound security groups. | General AWS | 
| Create a parameter group. | If you are using PostgreSQL version 11.x, create a parameter group to set the SSL parameters. In PostgreSQL version 12 and later, the SSL parameters are enabled by default. | General AWS | 
| Edit parameters. | Change the `rds.force_ssl` parameter to `1` (on). By default, the `ssl` parameter is set to `1` (on). Setting the `rds.force_ssl` parameter to `1` forces all connections to use SSL mode only. | General AWS | 
| Modify the RDS for PostgreSQL DB instance. | Set the CA certificate as **rds-ca-rsa2048-g1**. Attach the default parameter group or the previously created parameter group, depending on your PostgreSQL version. | DBA, General AWS | 
| Confirm that the RDS for PostgreSQL DB instance is available. | Make sure that the Amazon RDS for PostgreSQL database is up and running. The `psql` command establishes an SSL connection, with `sslmode` set from the command line. One option is to set `rds.force_ssl` to `1` in the parameter group and use a `psql` connection without including the `sslmode` parameter in the command. The following output shows that the SSL connection is established.<pre>$ psql -h mypgdbinstance.cokmvis0v46q.us-east-1.rds.amazonaws.com -p 5432 "dbname=pgdb user=pguser"<br />Password for user pguser:<br />psql (11.3, server 11.5)<br />SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)<br />Type "help" for help.<br />pgdb=></pre>A second option is to set `rds.force_ssl` to `1` in the parameter group and to include the `sslmode` parameter in the `psql` command. The following output shows that the SSL connection is established.<pre>$ psql -h mypgdbinstance.cokmvis0v46q.us-east-1.rds.amazonaws.com -p 5432 "dbname=pgdb user=pguser sslmode=require"<br />Password for user pguser: <br />psql (11.3, server 11.5)<br />SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)<br />Type "help" for help.<br />pgdb=></pre> | DBA | 
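
In addition to reading the `psql` banner, you can verify the encryption of the current session from inside the database by querying the standard `pg_stat_ssl` view:

```
-- Shows whether the current connection is encrypted, and with which cipher
SELECT ssl, version, cipher, bits
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```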

### Configure and run AWS SCT
<a name="configure-and-run-aws-sct"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install AWS SCT. | Install the latest version of the AWS SCT application. | General AWS | 
| Configure AWS SCT with JDBC drivers. | Download the Java Database Connectivity (JDBC) drivers for Oracle ([ojdbc8.jar](https://download.oracle.com/otn-pub/otn_software/jdbc/233/ojdbc8.jar)) and PostgreSQL ([postgresql-42.2.19.jar](https://jdbc.postgresql.org/download/postgresql-42.2.19.jar)). To configure the drivers in AWS SCT, choose **Settings**, **Global settings**, **Drivers**. | General AWS | 
| Create the AWS SCT project. | Create the AWS SCT project and report, using Oracle as the source DB engine and Amazon RDS for PostgreSQL as the target DB engine:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | General AWS | 
| Validate database objects. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | DBA, General AWS | 

### Configure and run AWS DMS
<a name="configure-and-run-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a replication instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | General AWS | 
| Import the certificate. | Download the [certificate bundle (PEM)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.CertificatesAllRegions) for your AWS Region.The bundle contains both the `rds-ca-2019` intermediate and root certificates. The bundle also contains the `rds-ca-rsa2048-g1`, `rds-ca-rsa4096-g1`, and `rds-ca-ecc384-g1` root CA certificates. Your application trust store needs to register only the root CA certificate. | General AWS | 
| Create the source endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html)For more information, see [Using an Oracle database as a source for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html). | General AWS | 
| Create the target endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html)For more information, see [Using a PostgreSQL database as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html). | General AWS | 
| Test the endpoints. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | General AWS | 
| Create migration tasks. | To create a migration task for full load and change data capture (CDC) or for data validation, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | General AWS | 
| Plan the production run. | Confirm downtime with stakeholders such as application owners to run AWS DMS in production systems. | Migration lead | 
| Run the migration task. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | General AWS | 
| Validate the data. | Review migration task results and data in the source Oracle and target PostgreSQL databases:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | DBA | 
| Stop the migration task. | After you successfully complete the data validation, stop the migration task. | General AWS | 

### Clean up the resources
<a name="clean-up-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the AWS DMS tasks. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | General AWS | 
| Delete the AWS DMS endpoints. | Select the source and target endpoints that you created, choose **Actions**, and choose **Delete**. | General AWS | 
| Delete the AWS DMS replication instance. | Choose the replication instance, choose **Actions**, and then choose **Delete**. | General AWS | 
| Delete the PostgreSQL database. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | General AWS | 
| Delete the Oracle database. | On the Amazon RDS console, select the Oracle database instance, choose **Actions**, and then choose **Delete**. | General AWS | 

## Troubleshooting
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| AWS SCT source and target test connections are failing. | Configure JDBC driver versions and VPC security group inbound rules to accept the incoming traffic. | 
| The Oracle source endpoint test run fails. | Check the endpoint settings and whether the replication instance is available. | 
| The AWS DMS task full-load run fails. | Check whether the source and target databases have matching data types and sizes. | 
| The AWS DMS validation migration task returns errors. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms.html) | 

## Related resources
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-resources"></a>

**Databases**
+ [Amazon RDS for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html)
+ [Amazon RDS for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html)

**SSL DB connection**
+ [Using SSL/TLS to encrypt a connection to a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html)
  + [Using SSL with an RDS for Oracle DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Concepts.SSL.html)
  + [Securing connections to RDS for PostgreSQL with SSL/TLS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Concepts.General.Security.html)
  + [Download certificate bundles for specific AWS Regions](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.CertificatesAllRegions)
    + [Download CA-2019 root certificate](https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem) (expired in August 2024)
+ [Working with option groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithOptionGroups.html)
  + [Adding options to Oracle DB instances](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.html)
  + [Oracle Secure Sockets Layer](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.SSL.html)
+ [Working with parameter groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html)
+ [PostgreSQL sslmode connection parameter](https://www.postgresql.org/docs/11/libpq-connect.html#LIBPQ-CONNECT-SSLMODE)
+ [Using SSL from JDBC](https://jdbc.postgresql.org/documentation/ssl/)
+ [Rotating your SSL/TLS certificate](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL-certificate-rotation.html)
  + [Updating your CA certificate by modifying your DB instance or cluster](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL-certificate-rotation.html#UsingWithRDS.SSL-certificate-rotation-updating)
  + [Updating your CA certificate by applying maintenance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL-certificate-rotation.html#UsingWithRDS.SSL-certificate-rotation-maintenance-update)

**AWS SCT**
+ [AWS Schema Conversion Tool](https://aws.amazon.com/dms/schema-conversion-tool/)
+ [AWS Schema Conversion Tool User Guide](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)
+ [Using the AWS SCT user interface](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html)
+ [Using Oracle Database as a source for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html)

**AWS DMS**
+ [AWS Database Migration Service](https://aws.amazon.com/dms/)
+ [AWS Database Migration Service User Guide](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
  + [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
  + [Using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html)
+ [Using SSL with AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.SSL.html)
+ [Migrating Applications Running Relational Databases to AWS](https://d1.awsstatic.com/whitepapers/Migration/migrating-applications-to-aws.pdf)

## Additional information
<a name="migrate-amazon-rds-for-oracle-to-amazon-rds-for-postgresql-in-ssl-mode-by-using-aws-dms-additional"></a>

The Amazon RDS Certificate Authority certificate `rds-ca-2019` expired in August 2024. If you use or plan to use SSL or TLS with certificate verification to connect to your RDS DB instances or Multi-AZ DB clusters, consider using one of the newer CA certificates: `rds-ca-rsa2048-g1`, `rds-ca-rsa4096-g1`, or `rds-ca-ecc384-g1`.
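
For example, you can switch an existing instance to one of the newer CA certificates by modifying the instance. The following is a minimal AWS CLI sketch in which `my-db-instance` is a placeholder; note that applying a new CA certificate can restart the instance:

```
aws rds modify-db-instance \
    --db-instance-identifier my-db-instance \
    --ca-certificate-identifier rds-ca-rsa2048-g1 \
    --apply-immediately
```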

# Migrate Oracle SERIALLY\_REUSABLE pragma packages into PostgreSQL
<a name="migrate-oracle-serially-reusable-pragma-packages-into-postgresql"></a>

*Vinay Paladi, Amazon Web Services*

## Summary
<a name="migrate-oracle-serially-reusable-pragma-packages-into-postgresql-summary"></a>

This pattern provides a step-by-step approach for migrating Oracle packages that are defined with the SERIALLY\_REUSABLE pragma to PostgreSQL on Amazon Web Services (AWS). This approach maintains the functionality of the SERIALLY\_REUSABLE pragma.

PostgreSQL doesn’t support packages or the SERIALLY\_REUSABLE pragma. To get similar functionality in PostgreSQL, you can create one schema per package and deploy all the related objects (such as functions, procedures, and types) inside that schema. To achieve the functionality of the SERIALLY\_REUSABLE pragma, the example wrapper function script that’s provided in this pattern uses an [AWS Schema Conversion Tool (AWS SCT) extension pack](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_ExtensionPack.html).

For more information, see [SERIALLY\_REUSABLE Pragma](https://docs.oracle.com/cd/B13789_01/appdev.101/b10807/13_elems046.htm) in the Oracle documentation.

## Prerequisites and limitations
<a name="migrate-oracle-serially-reusable-pragma-packages-into-postgresql-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ The latest version of AWS SCT and the required drivers
+ An Amazon Aurora PostgreSQL-Compatible Edition database or an Amazon Relational Database Service (Amazon RDS) for PostgreSQL database 

**Product versions**
+ Oracle Database version 10g and later

## Architecture
<a name="migrate-oracle-serially-reusable-pragma-packages-into-postgresql-architecture"></a>

**Source technology stack**
+ Oracle Database on premises

**Target technology stack**
+ [Aurora PostgreSQL-Compatible](https://aws.amazon.com/rds/aurora/details/postgresql-details/) or Amazon RDS for PostgreSQL
+ AWS SCT

**Migration architecture**

![\[On-premises Oracle DB data going to AWS using AWS SCT, .sql files, manual conversion, to PostgreSQL.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/fe3c45d2-6ea4-43b5-adb1-18f068f126b9/images/2dc90708-e300-4251-9d12-de97b6588b72.png)


## Tools
<a name="migrate-oracle-serially-reusable-pragma-packages-into-postgresql-tools"></a>

**AWS services**
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that’s compatible with the target database.
+ [Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) is a fully managed, ACID-compliant relational database engine that helps you set up, operate, and scale PostgreSQL deployments.
+ [Amazon Relational Database Service (Amazon RDS) for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html) helps you set up, operate, and scale a PostgreSQL relational database in the AWS Cloud.

**Other tools**
+ [pgAdmin](https://www.pgadmin.org/) is an open-source management tool for PostgreSQL. It provides a graphical interface that helps you create, maintain, and use database objects.

## Epics
<a name="migrate-oracle-serially-reusable-pragma-packages-into-postgresql-epics"></a>

### Migrate the Oracle package by using AWS SCT
<a name="migrate-the-oracle-package-by-using-aws-sct"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up AWS SCT. | Configure AWS SCT connectivity to the source database. For more information, see [Using Oracle Database as a source for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html). | DBA, Developer | 
| Convert the script. | Use AWS SCT to convert the Oracle package by selecting the target database as Aurora PostgreSQL-Compatible. | DBA, Developer | 
| Save the .sql files. | Before you save the .sql file, set the **Project Settings** option in AWS SCT to **Single file per stage**. AWS SCT then splits the output into multiple .sql files by object type. | DBA, Developer | 
| Change the code. | Open the `init` function that AWS SCT generated, and change it as shown in the example in the *Additional information* section. The change adds a `pg_serialize` parameter (default `0`) that controls whether the package state is re-initialized. | DBA, Developer | 
| Test the conversion. | Deploy the `init` function to the Aurora PostgreSQL-Compatible database, and test the results. | DBA, Developer | 

## Related resources
<a name="migrate-oracle-serially-reusable-pragma-packages-into-postgresql-resources"></a>
+ [AWS Schema Conversion Tool](https://aws.amazon.com/dms/schema-conversion-tool/)
+ [Amazon RDS](https://aws.amazon.com/rds/)
+ [Amazon Aurora features](https://aws.amazon.com/rds/aurora/postgresql-features/)
+ [SERIALLY\_REUSABLE Pragma](https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/seriallyreusable_pragma.htm#LNPLS01346)

## Additional information
<a name="migrate-oracle-serially-reusable-pragma-packages-into-postgresql-additional"></a>

**Source Oracle code**

```
CREATE OR REPLACE PACKAGE test_pkg_var
IS
PRAGMA SERIALLY_REUSABLE;
PROCEDURE function_1
 (test_id number);
PROCEDURE function_2
 (test_id number
 );
END;

CREATE OR REPLACE PACKAGE BODY test_pkg_var
IS
PRAGMA SERIALLY_REUSABLE;
v_char VARCHAR2(20) := 'shared.airline';
v_num number := 123;

PROCEDURE function_1(test_id number)
IS
begin
dbms_output.put_line( 'v_char-'|| v_char);
dbms_output.put_line( 'v_num-'||v_num);
v_char:='test1';
function_2(0);
END;

PROCEDURE function_2(test_id number)
is
begin
dbms_output.put_line( 'v_char-'|| v_char);
dbms_output.put_line( 'v_num-'||v_num);
END;
END test_pkg_var;
```

**Calling the Oracle functions**

```
set serveroutput on

EXEC test_pkg_var.function_1(1);
EXEC test_pkg_var.function_2(1);
```

**Target PostgreSQL code**

```
CREATE SCHEMA test_pkg_var;

CREATE OR REPLACE FUNCTION test_pkg_var.init(pg_serialize IN INTEGER DEFAULT 0)
RETURNS void
AS
$BODY$
DECLARE
BEGIN
if aws_oracle_ext.is_package_initialized( 'test_pkg_var' ) AND pg_serialize = 0
then
  return;
end if;
PERFORM aws_oracle_ext.set_package_initialized( 'test_pkg_var' );
PERFORM aws_oracle_ext.set_package_variable( 'test_pkg_var', 'v_char', 'shared.airline.basecurrency'::CHARACTER VARYING(100));
PERFORM aws_oracle_ext.set_package_variable( 'test_pkg_var', 'v_num', 123::integer);
END;
$BODY$
LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION test_pkg_var.function_1(pg_serialize int default 1)
RETURNS void
AS
$BODY$
DECLARE
BEGIN
PERFORM test_pkg_var.init(pg_serialize);
raise notice 'v_char%',aws_oracle_ext.get_package_variable( 'test_pkg_var', 'v_char');
raise notice 'v_num%',aws_oracle_ext.get_package_variable( 'test_pkg_var', 'v_num');
PERFORM aws_oracle_ext.set_package_variable( 'test_pkg_var', 'v_char', 'test1'::varchar);
PERFORM test_pkg_var.function_2(0);
END;
$BODY$
LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION test_pkg_var.function_2(IN pg_serialize integer default 1)
RETURNS void
AS
$BODY$
DECLARE
BEGIN
PERFORM test_pkg_var.init(pg_serialize);
raise notice 'v_char%',aws_oracle_ext.get_package_variable( 'test_pkg_var', 'v_char');
raise notice 'v_num%',aws_oracle_ext.get_package_variable( 'test_pkg_var', 'v_num');
END;
$BODY$
LANGUAGE plpgsql;
```

**Calling the PostgreSQL functions**

```
SELECT test_pkg_var.function_1();
SELECT test_pkg_var.function_2();
```
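
In the converted code, the `pg_serialize` parameter drives the pragma emulation: `function_1` and `function_2` default it to `1`, so every top-level call forces `init` to reset the package variables, which mirrors how Oracle re-initializes a SERIALLY\_REUSABLE package between calls. A quick check you can run after deployment (the values appear as `NOTICE` messages):

```
SELECT test_pkg_var.function_1();  -- prints the initial values, then sets v_char to 'test1'
SELECT test_pkg_var.function_2();  -- state is re-initialized, so 'test1' is no longer visible
```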

# Migrate Oracle external tables to Amazon Aurora PostgreSQL-Compatible
<a name="migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible"></a>

*anuradha chintha and Rakesh Raghav, Amazon Web Services*

## Summary
<a name="migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible-summary"></a>

External tables give Oracle the ability to query data that is stored outside the database in flat files. You can use the `ORACLE_LOADER` driver to access data stored in any format that can be loaded by the SQL\*Loader utility. You can't use Data Manipulation Language (DML) on external tables, but you can use external tables for query, join, and sort operations.
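
For illustration, a typical external table definition on the source Oracle database looks like the following sketch; the table name, directory object, and file name are hypothetical:

```
CREATE TABLE emp_ext (
  emp_id    NUMBER,
  emp_name  VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);
```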

Amazon Aurora PostgreSQL-Compatible Edition doesn't provide functionality that is similar to Oracle external tables. Instead, you must develop a modernized, scalable solution that meets the functional requirements and is cost-effective.

This pattern provides steps for migrating different types of Oracle external tables to Aurora PostgreSQL-Compatible Edition on the Amazon Web Services (AWS) Cloud by using the `aws_s3` extension.

We recommend thoroughly testing this solution before implementing it in a production environment.

## Prerequisites and limitations
<a name="migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Command Line Interface (AWS CLI)
+ An available Aurora PostgreSQL-Compatible database instance
+ An on-premises Oracle database with an external table
+ The pg.Client API
+ Data files

**Limitations**
+ This pattern doesn't provide a drop-in replacement for Oracle external tables. However, the steps and sample code can be enhanced further to achieve your database modernization goals.
+ Files must not contain the character that is used as the delimiter in the `aws_s3` export and import functions.

**Product versions**
+ To import from Amazon S3 into Amazon RDS for PostgreSQL, the database must be running PostgreSQL version 10.7 or later.

## Architecture
<a name="migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible-architecture"></a>

**Source technology stack**
+ Oracle

**Source architecture**

![\[Diagram of data files going to a directory and table in the on-premises Oracle database.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/555e69af-36fc-4ff5-b66c-af22b4cf262a/images/3fbc507d-b0fa-4e05-b999-043dc7327ed7.png)


**Target technology stack**
+ Amazon Aurora PostgreSQL-Compatible
+ Amazon CloudWatch
+ AWS Lambda
+ AWS Secrets Manager
+ Amazon Simple Notification Service (Amazon SNS)
+ Amazon Simple Storage Service (Amazon S3)

**Target architecture**

The following diagram shows a high-level representation of the solution.

![\[The description is after the diagram.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/555e69af-36fc-4ff5-b66c-af22b4cf262a/images/5421540e-d2e3-4361-89cc-d8415fcb21fd.png)


1. Files are uploaded to the S3 bucket.

1. The Lambda function is initiated.

1. The Lambda function initiates the DB function call.

1. Secrets Manager provides the credentials for database access.

1. Depending on the DB function, an SNS alarm is created.

**Automation and scale**

Any additions or changes to the external tables can be handled with metadata maintenance.

## Tools
<a name="migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible-tools"></a>
+ [Amazon Aurora PostgreSQL-Compatible](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) – Amazon Aurora PostgreSQL-Compatible Edition is a fully managed, PostgreSQL-compatible, and ACID-compliant relational database engine that combines the speed and reliability of high-end commercial databases with the cost-effectiveness of open-source databases.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With only one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) – Amazon CloudWatch monitors Amazon S3 resources and utilization.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a serverless compute service that supports running code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. In this pattern, Lambda runs the database function whenever a file is uploaded to Amazon S3.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) – AWS Secrets Manager is a service for credential storage and retrieval. Using Secrets Manager, you can replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) provides a storage layer to receive and store files for consumption and transmission to and from the Aurora PostgreSQL-Compatible cluster.
+ [aws\_s3](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html#aws_s3.table_import_from_s3) – The `aws_s3` extension integrates Amazon S3 and Aurora PostgreSQL-Compatible.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) coordinates and manages the delivery or sending of messages between publishers and clients. In this pattern, Amazon SNS is used to send notifications.

**Code**

Create a DB function that the processing application or the Lambda function calls whenever a file is placed in the S3 bucket. For details, see the code (attached). A sample invocation follows.
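
For example, the attached function can also be invoked directly from SQL. This minimal sketch assumes that the metadata table already maps the file name to a target table:

```
SELECT load_external_tables('externalinterface/loaddir/APS');
```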

## Epics
<a name="migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible-epics"></a>

### Create an external file
<a name="create-an-external-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add an external file to the source database. | Create an external file, and move it to the `oracle` directory. | DBA | 

### Configure the target (Aurora PostgreSQL-Compatible)
<a name="configure-the-target-aurora-postgresql-compatible"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Aurora PostgreSQL database. | Create a DB instance in your Amazon Aurora PostgreSQL-Compatible cluster. | DBA | 
| Create a schema, the aws\_s3 extension, and tables. | Use the code under `ext_tbl_scripts` in the *Additional information* section. The tables include actual tables, staging tables, error and log tables, and a metadata table. | DBA, Developer | 
| Create the DB function. | To create the DB function, use the code under `load_external_tables_latest` in the *Additional information* section. A stand-alone example of the underlying `aws_s3` import call follows this table. | DBA, Developer | 
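
Internally, the DB function builds `aws_s3` import calls. The following stand-alone sketch shows the same import path into the one-column staging table; the bucket name, object key, and Region are placeholders:

```
SELECT aws_s3.table_import_from_s3(
   'ext_tbl_stg',     -- staging table created by ext_tbl_scripts
   'col1',            -- column list
   '(format text)',   -- import options
   aws_commons.create_s3_uri('s3importtest', 'inputext/aps', 'us-east-1')
);
```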

### Create and configure the Lambda function
<a name="create-and-configure-the-lambda-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a role. | Create a role with permissions to access Amazon S3 and Amazon Relational Database Service (Amazon RDS). This role will be assigned to Lambda for running the pattern. | DBA | 
| Create the Lambda function. | Create a Lambda function that reads the file name from Amazon S3 (for example, `file_key = info.get('object', {}).get('key')`) and calls the DB function (for example, `curs.callproc("load_external_tables", [file_key])`) with the file name as the input parameter. Depending on the function call result, an SNS notification is initiated (for example, `client.publish(TopicArn='arn:',Message='fileloadsuccess',Subject='fileloadsuccess')`). Based on your business needs, you can create a Lambda function with extra code if required. For more information, see the [Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html). A consolidated sketch of such a handler follows this table. | DBA | 
| Configure an S3 bucket event trigger. | Configure a mechanism to call the Lambda function for all object creation events in the S3 bucket. | DBA | 
| Create a secret. | Create a secret for the database credentials by using Secrets Manager. Pass the secret name to the Lambda function. | DBA | 
| Upload the Lambda supporting files. | Upload a .zip file that contains the Lambda support packages and the attached Python script for connecting to Aurora PostgreSQL-Compatible. The Python code calls the function that you created in the database. | DBA | 
| Create an SNS topic. | Create an SNS topic to send mail for the success or failure of the data load. | DBA | 
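
The following Python sketch consolidates the snippets from this table into one handler. The secret name, SNS topic ARN, and connection details are placeholders, and the `psycopg2` package is assumed to be bundled in the deployment .zip file:

```
import json

import boto3
import psycopg2  # assumed to be packaged in the deployment .zip file

sns = boto3.client('sns')
secrets = boto3.client('secretsmanager')

def lambda_handler(event, context):
    # Read the uploaded object's key from the S3 event notification
    info = event['Records'][0]['s3']
    file_key = info.get('object', {}).get('key')

    # Retrieve the database credentials from Secrets Manager (placeholder secret name)
    secret = json.loads(
        secrets.get_secret_value(SecretId='aurora-pg-credentials')['SecretString'])

    conn = psycopg2.connect(host=secret['host'], dbname=secret['dbname'],
                            user=secret['username'], password=secret['password'])
    try:
        with conn.cursor() as curs:
            # Call the DB function with the file name as the input parameter
            curs.callproc('load_external_tables', [file_key])
        conn.commit()
        # Notify subscribers of the load result (placeholder topic ARN)
        sns.publish(TopicArn='arn:aws:sns:us-east-1:111122223333:file-load-topic',
                    Message='fileloadsuccess', Subject='fileloadsuccess')
    finally:
        conn.close()
```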

### Add integration with Amazon S3
<a name="add-integration-with-amazon-s3"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | On the Amazon S3 console, create an S3 bucket with a unique name that does not contain leading slashes. An S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. | DBA | 
| Create IAM policies. | To create the AWS Identity and Access Management (IAM) policies, use the code under `s3bucketpolicy_for_import` in the *Additional information* section. | DBA | 
| Create roles. | Create two roles for Aurora PostgreSQL-Compatible, one role for Import and one role for Export. Assign the corresponding policies to the roles. | DBA | 
| Attach the roles to the Aurora PostgreSQL-Compatible cluster. | Under **Manage roles**, attach the Import and Export roles to the Aurora PostgreSQL-Compatible cluster. You can also attach the roles by using the AWS CLI, as shown in the example after this table. | DBA | 
| Create supporting objects for Aurora PostgreSQL-Compatible. | For the table scripts, use the code under `ext_tbl_scripts` in the *Additional information* section. For the custom function, use the code under `load_external_tables_latest` in the *Additional information* section. | DBA | 
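
If you prefer the AWS CLI to the console for attaching the roles, the following sketch shows the Import role; the cluster identifier and role ARN are placeholders, and the Export role is attached the same way with `--feature-name s3Export`:

```
aws rds add-role-to-db-cluster \
    --db-cluster-identifier my-aurora-pg-cluster \
    --role-arn arn:aws:iam::111122223333:role/aurora-s3-import-role \
    --feature-name s3Import
```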

### Process a test file
<a name="process-a-test-file"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload a file into the S3 bucket. | To upload a test file into the S3 bucket, use the console or the following command in AWS CLI. <pre>aws s3 cp /Users/Desktop/ukpost/exttbl/"testing files"/aps s3://s3importtest/inputext/aps</pre> As soon as the file is uploaded, a bucket event initiates the Lambda function, which runs the Aurora PostgreSQL-Compatible function. | DBA | 
| Check the data and the log and error files. | The Aurora PostgreSQL-Compatible function loads the files into the main table, and it creates `.log` and `.bad` files in the S3 bucket. | DBA | 
| Monitor the solution. | In the Amazon CloudWatch console, monitor the Lambda function. | DBA | 

## Related resources
<a name="migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible-resources"></a>
+ [Amazon S3 integration](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html)
+ [Amazon S3](https://aws.amazon.com/s3/)
+ [Working with Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html)
+ [AWS Lambda](https://aws.amazon.com/lambda/)
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/)
+ [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/)
+ [Setting up Amazon SNS notifications](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html)

## Additional information
<a name="migrate-oracle-external-tables-to-amazon-aurora-postgresql-compatible-additional"></a>

**ext\_tbl\_scripts**

```
CREATE EXTENSION aws_s3 CASCADE;
CREATE TABLE IF NOT EXISTS meta_EXTERNAL_TABLE
(
    table_name_stg character varying(100) ,
    table_name character varying(100)  ,
    col_list character varying(1000)  ,
    data_type character varying(100)  ,
    col_order numeric,
    start_pos numeric,
    end_pos numeric,
    no_position character varying(100)  ,
    date_mask character varying(100)  ,
    delimeter character(1)  ,
    directory character varying(100)  ,
    file_name character varying(100)  ,
    header_exist character varying(5)
);
CREATE TABLE IF NOT EXISTS ext_tbl_stg
(
    col1 text
);
CREATE TABLE IF NOT EXISTS error_table
(
    error_details text,
    file_name character varying(100),
    processed_time timestamp without time zone
);
CREATE TABLE IF NOT EXISTS log_table
(
    file_name character varying(50) COLLATE pg_catalog."default",
    processed_date timestamp without time zone,
    tot_rec_count numeric,
    proc_rec_count numeric,
    error_rec_count numeric
);
-- Sample insert scripts for the metadata table:
INSERT INTO meta_EXTERNAL_TABLE (table_name_stg, table_name, col_list, data_type, col_order, start_pos, end_pos, no_position, date_mask, delimeter, directory, file_name, header_exist) VALUES ('F_EX_APS_TRANSACTIONS_STG', 'F_EX_APS_TRANSACTIONS', 'source_filename', 'character varying', 2, 8, 27, NULL, NULL, NULL, 'databasedev', 'externalinterface/loaddir/APS', 'NO');
INSERT INTO meta_EXTERNAL_TABLE (table_name_stg, table_name, col_list, data_type, col_order, start_pos, end_pos, no_position, date_mask, delimeter, directory, file_name, header_exist) VALUES ('F_EX_APS_TRANSACTIONS_STG', 'F_EX_APS_TRANSACTIONS', 'record_type_identifier', 'character varying', 3, 28, 30, NULL, NULL, NULL, 'databasedev', 'externalinterface/loaddir/APS', 'NO');
INSERT INTO meta_EXTERNAL_TABLE (table_name_stg, table_name, col_list, data_type, col_order, start_pos, end_pos, no_position, date_mask, delimeter, directory, file_name, header_exist) VALUES ('F_EX_APS_TRANSACTIONS_STG', 'F_EX_APS_TRANSACTIONS', 'fad_code', 'numeric', 4, 31, 36, NULL, NULL, NULL, 'databasedev', 'externalinterface/loaddir/APS', 'NO');
INSERT INTO meta_EXTERNAL_TABLE (table_name_stg, table_name, col_list, data_type, col_order, start_pos, end_pos, no_position, date_mask, delimeter, directory, file_name, header_exist) VALUES ('F_EX_APS_TRANSACTIONS_STG', 'F_EX_APS_TRANSACTIONS', 'session_sequence_number', 'numeric', 5, 37, 42, NULL, NULL, NULL, 'databasedev', 'externalinterface/loaddir/APS', 'NO');
INSERT INTO meta_EXTERNAL_TABLE (table_name_stg, table_name, col_list, data_type, col_order, start_pos, end_pos, no_position, date_mask, delimeter, directory, file_name, header_exist) VALUES ('F_EX_APS_TRANSACTIONS_STG', 'F_EX_APS_TRANSACTIONS', 'transaction_sequence_number', 'numeric', 6, 43, 48, NULL, NULL, NULL, 'databasedev', 'externalinterface/loaddir/APS', 'NO');
```

**s3bucketpolicy\_for\_import**

```
---Import role policy
--Create an IAM policy that allows Get and List actions on the S3 bucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "s3import",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::s3importtest",
                "arn:aws:s3:::s3importtest/*"
            ]
        }
    ]
}
--Export role policy
--Create an IAM policy that allows Put and List actions on the S3 bucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "s3export",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::s3importtest",
                "arn:aws:s3:::s3importtest/*"
            ]
        }
    ]
}
```

**Sample DB function load\_external\_tables\_latest**

```
CREATE OR REPLACE FUNCTION public.load_external_tables(pi_filename text)
 RETURNS character varying
 LANGUAGE plpgsql
AS $function$
/* Loading data from S3 bucket into a APG table */
DECLARE
 v_final_sql TEXT;
 pi_ext_table TEXT;
 r refCURSOR;
 v_sqlerrm text;
 v_chunk numeric;
 i integer;
 v_col_list TEXT;
 v_postion_list CHARACTER VARYING(1000);
 v_len  integer;
 v_delim varchar;
 v_file_name CHARACTER VARYING(1000);
 v_directory CHARACTER VARYING(1000);
 v_table_name_stg CHARACTER VARYING(1000);
 v_sql_col TEXT;
 v_sql TEXT;
 v_sql1 TEXT;
 v_sql2 TEXT;
 v_sql3 TEXT;
 v_cnt integer;
 v_sql_dynamic TEXT;
 v_sql_ins TEXT;
 proc_rec_COUNT integer;
 error_rec_COUNT integer;
 tot_rec_COUNT integer;
 v_rec_val integer;
 rec record;
 v_col_cnt integer;
 kv record;
 v_val text;
 v_header text;
 j integer;
 ERCODE VARCHAR(5);
 v_region text;
 cr CURSOR FOR
 SELECT distinct DELIMETER,
   FILE_NAME,
   DIRECTORY
 FROM  meta_EXTERNAL_TABLE
 WHERE table_name = pi_ext_table
   AND DELIMETER IS NOT NULL;


 cr1 CURSOR FOR
   SELECT   col_list,
   data_type,
   start_pos,
   END_pos,
   concat_ws('',' ',TABLE_NAME_STG) as TABLE_NAME_STG,
   no_position,date_mask
 FROM  meta_EXTERNAL_TABLE
 WHERE table_name = pi_ext_table
 order by col_order asc;
cr2 cursor FOR
SELECT  distinct table_name,table_name_stg
   FROM  meta_EXTERNAL_TABLE
   WHERE upper(file_name) = upper(pi_filename);


BEGIN
 -- PERFORM utl_file_utility.init();
   v_region := 'us-east-1';
   /* find tab details from file name */


   --DELETE FROM  ERROR_TABLE WHERE file_name= pi_filename;
  -- DELETE FROM  log_table WHERE file_name= pi_filename;


 BEGIN


   SELECT distinct table_name,table_name_stg INTO strict pi_ext_table,v_table_name_stg
   FROM  meta_EXTERNAL_TABLE
   WHERE upper(file_name) = upper(pi_filename);
 EXCEPTION
   WHEN NO_DATA_FOUND THEN
    raise notice 'error 1,%',sqlerrm;
    pi_ext_table := null;
    v_table_name_stg := null;
      RAISE USING errcode = 'NTFIP' ;
    when others then
        raise notice 'error others,%',sqlerrm;
 END;
 j :=1 ;
  
for rec in  cr2
 LOOP




  pi_ext_table     := rec.table_name;
  v_table_name_stg := rec.table_name_stg;
  v_col_list := null;


 IF pi_ext_table IS NOT NULL
  THEN
    --EXECUTE concat_ws('','truncate table  ' ,pi_ext_table) ;
   EXECUTE concat_ws('','truncate table  ' ,v_table_name_stg) ;




       SELECT distinct DELIMETER INTO STRICT v_delim
       FROM  meta_EXTERNAL_TABLE
       WHERE table_name = pi_ext_table;


       IF v_delim IS NOT NULL THEN
     SELECT distinct DELIMETER,
       FILE_NAME,
       DIRECTORY ,
       concat_ws('',' ',table_name_stg),
       case  header_exist when 'YES' then 'CSV HEADER' else 'CSV' end as header_exist
     INTO STRICT v_delim,v_file_name,v_directory,v_table_name_stg,v_header
     FROM  meta_EXTERNAL_TABLE
     WHERE table_name = pi_ext_table
       AND DELIMETER IS NOT NULL;


     IF    upper(v_delim) = 'CSV'
     THEN
       v_sql := concat_ws('','SELECT aws_s3.table_import_FROM_s3 ( ''',
       v_table_name_stg,''','''',
       ''DELIMITER '''','''' CSV HEADER QUOTE ''''"'''''', aws_commons.create_s3_uri ( ''',
       v_directory,''',''',v_file_name,''', ''',v_region,'''))');
       ELSE
       v_sql := concat_ws('','SELECT aws_s3.table_import_FROM_s3(''',
           v_table_name_stg, ''','''', ''DELIMITER AS ''''^''''',''',','
          aws_commons.create_s3_uri
           ( ''',v_directory, ''',''',
           v_file_name, ''',',
            '''',v_region,''')
          )');
          raise notice 'v_sql , %',v_sql;
       begin
        EXECUTE  v_sql;
       EXCEPTION
         WHEN OTHERS THEN
           raise notice 'error 1';
         RAISE USING errcode = 'S3IMP' ;
       END;


       select count(col_list) INTO v_col_cnt
       from  meta_EXTERNAL_TABLE where table_name = pi_ext_table;






        -- raise notice 'v_sql 2, %',concat_ws('','update ',v_table_name_stg, ' set col1 = col1||''',v_delim,'''');


       execute concat_ws('','update ',v_table_name_stg, ' set col1 = col1||''',v_delim,'''');




       i :=1;
       FOR rec in cr1
       loop
       v_sql1 := concat_ws('',v_sql1,'split_part(col1,''',v_delim,''',', i,')',' as ',rec.col_list,',');
       v_sql2 := concat_ws('',v_sql2,rec.col_list,',');
   --    v_sql3 := concat_ws('',v_sql3,'rec.',rec.col_list,'::',rec.data_type,',');


       case
         WHEN upper(rec.data_type) = 'NUMERIC'
         THEN v_sql3 := concat_ws('',v_sql3,' case WHEN length(trim(split_part(col1,''',v_delim,''',', i,'))) =0
                THEN null
                 ELSE
                 coalesce((trim(split_part(col1,''',v_delim,''',', i,')))::NUMERIC,0)::',rec.data_type,' END as ',rec.col_list,',') ;
         WHEN UPPER(rec.data_type) = 'TIMESTAMP WITHOUT TIME ZONE' AND rec.date_mask = 'YYYYMMDD'
         THEN v_sql3 := concat_ws('',v_sql3,' case WHEN length(trim(split_part(col1,''',v_delim,''',', i,'))) =0
                THEN null
                 ELSE
                 to_date(coalesce((trim(split_part(col1,''',v_delim,''',', i,'))),''99990101''),''YYYYMMDD'')::',rec.data_type,' END as ',rec.col_list,',');
         WHEN UPPER(rec.data_type) = 'TIMESTAMP WITHOUT TIME ZONE' AND rec.date_mask =  'MM/DD/YYYY hh24:mi:ss'
         THEN v_sql3 := concat_ws('',v_sql3,' case WHEN length(trim(split_part(col1,''',v_delim,''',', i,'))) =0
                THEN null
                 ELSE
                 to_date(coalesce((trim(split_part(col1,''',v_delim,''',', i,'))),''01/01/9999 0024:00:00''),''MM/DD/YYYY hh24:mi:ss'')::',rec.data_type,' END as ',rec.col_list,',');
          ELSE
        v_sql3 := concat_ws('',v_sql3,' case WHEN length(trim(split_part(col1,''',v_delim,''',', i,'))) =0
                THEN null
                 ELSE
                  coalesce((trim(split_part(col1,''',v_delim,''',', i,'))),'''')::',rec.data_type,' END as ',rec.col_list,',') ;
       END case;


       i :=i+1;
       end loop;


         -- raise notice 'v_sql 3, %',v_sql3;


       SELECT trim(trailing ' ' FROM v_sql1) INTO v_sql1;
       SELECT trim(trailing ',' FROM v_sql1) INTO v_sql1;


       SELECT trim(trailing ' ' FROM v_sql2) INTO v_sql2;
       SELECT trim(trailing ',' FROM v_sql2) INTO v_sql2;


       SELECT trim(trailing ' ' FROM v_sql3) INTO v_sql3;
       SELECT trim(trailing ',' FROM v_sql3) INTO v_sql3;


       END IF;
      raise notice 'v_delim , %',v_delim;


     EXECUTE concat_ws('','SELECT COUNT(*) FROM ',v_table_name_stg)  INTO v_cnt;


    raise notice 'stg cnt , %',v_cnt;


    /* if upper(v_delim) = 'CSV' then
       v_sql_ins := concat_ws('', ' SELECT * from ' ,v_table_name_stg );
     else
      -- v_sql_ins := concat_ws('',' SELECT ',v_sql1,'  from (select col1 from ' ,v_table_name_stg , ')sub ');
       v_sql_ins := concat_ws('',' SELECT ',v_sql3,'  from (select col1 from ' ,v_table_name_stg , ')sub ');
       END IF;*/


v_chunk := v_cnt/100;




for i in 1..101
loop
     BEGIN
    -- raise notice 'v_sql , %',v_sql;
       -- raise notice 'Chunk number , %',i;
       v_sql_ins := concat_ws('',' SELECT ',v_sql3,'  from (select col1 from ' ,v_table_name_stg , ' offset ',v_chunk*(i-1), ' limit ',v_chunk,') sub ');


     v_sql := concat_ws('','insert into  ', pi_ext_table ,' ', v_sql_ins);
     -- raise notice 'select statement , %',v_sql_ins;
          -- v_sql := null;
     -- EXECUTE concat_ws('','insert into  ', pi_ext_table ,' ', v_sql_ins, 'offset ',v_chunk*(i-1), ' limit ',v_chunk );
     --v_sql := concat_ws('','insert into  ', pi_ext_table ,' ', v_sql_ins );


     -- raise notice 'insert statement , %',v_sql;


    raise NOTICE 'CHUNK START %',v_chunk*(i-1);
   raise NOTICE 'CHUNK END %',v_chunk;


     EXECUTE v_sql;


  EXCEPTION
       WHEN OTHERS THEN
       -- v_sql_ins := concat_ws('',' SELECT ',v_sql1, '  from (select col1 from ' ,v_table_name_stg , ' )sub ');
         -- raise notice 'Chunk number for cursor , %',i;


    raise NOTICE 'Cursor - CHUNK START %',v_chunk*(i-1);
   raise NOTICE 'Cursor -  CHUNK END %',v_chunk;
         v_sql_ins := concat_ws('',' SELECT ',v_sql3, '  from (select col1 from ' ,v_table_name_stg , ' )sub ');


         v_final_sql := REPLACE (v_sql_ins, ''''::text, ''''''::text);
        -- raise notice 'v_final_sql %',v_final_sql;
         v_sql :=concat_ws('','do $a$ declare  r refcursor;v_sql text; i numeric;v_conname text;  v_typ  ',pi_ext_table,'[]; v_rec  ','record',';
           begin






           open r for execute ''select col1 from ',v_table_name_stg ,'  offset ',v_chunk*(i-1), ' limit ',v_chunk,''';
           loop
           begin
           fetch r into v_rec;
           EXIT WHEN NOT FOUND;




           v_sql := concat_ws('''',''insert into  ',pi_ext_table,' SELECT ',REPLACE (v_sql3, ''''::text, ''''''::text) , '  from ( select '''''',v_rec.col1,'''''' as col1) v'');
            execute v_sql;


           exception
            when others then
          v_sql := ''INSERT INTO  ERROR_TABLE VALUES (concat_ws('''''''',''''Error Name: '''',$$''||SQLERRM||''$$,''''Error State: '''',''''''||SQLSTATE||'''''',''''record : '''',$$''||v_rec.col1||''$$),'''''||pi_filename||''''',now())'';


               execute v_sql;
             continue;
           end ;
           end loop;
           close r;
           exception
           when others then
         raise;
           end ; $a$');
      -- raise notice ' inside excp v_sql %',v_sql;
          execute v_sql;
      --  raise notice 'v_sql %',v_sql;
       END;
  END LOOP;
     ELSE


     SELECT distinct DELIMETER,FILE_NAME,DIRECTORY ,concat_ws('',' ',table_name_stg),
       case  header_exist when 'YES' then 'CSV HEADER' else 'CSV' end as header_exist
       INTO STRICT v_delim,v_file_name,v_directory,v_table_name_stg,v_header
     FROM  meta_EXTERNAL_TABLE
     WHERE table_name = pi_ext_table                  ;
     v_sql := concat_ws('','SELECT aws_s3.table_import_FROM_s3(''',
       v_table_name_stg, ''','''', ''DELIMITER AS ''''#'''' ',v_header,' '',','
      aws_commons.create_s3_uri
       ( ''',v_directory, ''',''',
       v_file_name, ''',',
        '''',v_region,''')
      )');
         EXECUTE  v_sql;


     FOR rec in cr1
     LOOP


      IF rec.start_pos IS NULL AND rec.END_pos IS NULL AND rec.no_position = 'recnum'
      THEN
        v_rec_val := 1;
      ELSE


       case
         WHEN upper(rec.data_type) = 'NUMERIC'
         THEN v_sql1 := concat_ws('',' case WHEN length(trim(substring(COL1, ',rec.start_pos ,',', rec.END_pos,'-',rec.start_pos ,'+1))) =0
                THEN null
                 ELSE
                 coalesce((trim(substring(COL1, ',rec.start_pos ,',', rec.END_pos,'-',rec.start_pos ,'+1)))::NUMERIC,0)::',rec.data_type,' END as ',rec.col_list,',') ;
         WHEN UPPER(rec.data_type) = 'TIMESTAMP WITHOUT TIME ZONE' AND rec.date_mask = 'YYYYMMDD'
         THEN v_sql1 := concat_ws('','case WHEN length(trim(substring(COL1, ',rec.start_pos ,',', rec.END_pos,'-',rec.start_pos ,'+1))) =0
                THEN null
                 ELSE
                 to_date(coalesce((trim(substring(COL1, ',rec.start_pos ,',', rec.END_pos,'-',rec.start_pos ,'+1))),''99990101''),''YYYYMMDD'')::',rec.data_type,' END as ',rec.col_list,',');
         WHEN UPPER(rec.data_type) = 'TIMESTAMP WITHOUT TIME ZONE' AND rec.date_mask = 'YYYYMMDDHH24MISS'
         THEN v_sql1 := concat_ws('','case WHEN length(trim(substring(COL1, ',rec.start_pos ,',', rec.END_pos,'-',rec.start_pos ,'+1))) =0
                THEN null
                 ELSE
                 to_date(coalesce((trim(substring(COL1, ',rec.start_pos ,',', rec.END_pos,'-',rec.start_pos ,'+1))),''9999010100240000''),''YYYYMMDDHH24MISS'')::',rec.data_type,' END as ',rec.col_list,',');
          ELSE
        v_sql1 := concat_ws('',' case WHEN length(trim(substring(COL1, ',rec.start_pos ,',', rec.END_pos,'-',rec.start_pos ,'+1))) =0
                THEN null
                 ELSE
                  coalesce((trim(substring(COL1, ',rec.start_pos ,',', rec.END_pos,'-',rec.start_pos ,'+1))),'''')::',rec.data_type,' END as ',rec.col_list,',') ;
       END case;


      END IF;
      v_col_list := concat_ws('',v_col_list ,v_sql1);
     END LOOP;




           SELECT trim(trailing ' ' FROM v_col_list) INTO v_col_list;
           SELECT trim(trailing ',' FROM v_col_list) INTO v_col_list;


           v_sql_col   :=  concat_ws('',trim(trailing ',' FROM v_col_list) , ' FROM  ',v_table_name_stg,' WHERE col1 IS NOT NULL AND length(col1)>0 ');




           v_sql_dynamic := v_sql_col;


           EXECUTE  concat_ws('','SELECT COUNT(*) FROM ',v_table_name_stg) INTO v_cnt;




         IF v_rec_val = 1 THEN
             v_sql_ins := concat_ws('',' select row_number() over(order by ctid) as line_number ,' ,v_sql_dynamic) ;


         ELSE
               v_sql_ins := concat_ws('',' SELECT' ,v_sql_dynamic) ;
           END IF;


     BEGIN
       EXECUTE concat_ws('','insert into  ', pi_ext_table ,' ', v_sql_ins);
           EXCEPTION
              WHEN OTHERS THEN
          IF v_rec_val = 1 THEN
                  v_final_sql := ' select row_number() over(order by ctid) as line_number ,col1 from ';
                ELSE
                 v_final_sql := ' SELECT col1 from';
               END IF;
       v_sql :=concat_ws('','do $a$ declare  r refcursor;v_rec_val numeric := ',coalesce(v_rec_val,0),';line_number numeric; col1 text; v_typ  ',pi_ext_table,'[]; v_rec  ',pi_ext_table,';
             begin
             open r for execute ''',v_final_sql, ' ',v_table_name_stg,' WHERE col1 IS NOT NULL AND length(col1)>0 '' ;
             loop
             begin
             if   v_rec_val = 1 then
             fetch r into line_number,col1;
             else
             fetch r into col1;
             end if;


             EXIT WHEN NOT FOUND;
              if v_rec_val = 1 then
              select line_number,',trim(trailing ',' FROM v_col_list) ,' into v_rec;
              else
                select ',trim(trailing ',' FROM v_col_list) ,' into v_rec;
              end if;


             insert into  ',pi_ext_table,' select v_rec.*;
              exception
              when others then
               INSERT INTO  ERROR_TABLE VALUES (concat_ws('''',''Error Name: '',SQLERRM,''Error State: '',SQLSTATE,''record : '',v_rec),''',pi_filename,''',now());
               continue;
              end ;
               end loop;
             close r;
              exception
              when others then
              raise;
              end ; $a$');
         execute v_sql;


     END;


         END IF;


   EXECUTE concat_ws('','SELECT COUNT(*) FROM  ' ,pi_ext_table)   INTO proc_rec_COUNT;


   EXECUTE concat_ws('','SELECT COUNT(*) FROM  error_table WHERE file_name =''',pi_filename,''' and processed_time::date = clock_timestamp()::date')  INTO error_rec_COUNT;


   EXECUTE concat_ws('','SELECT COUNT(*) FROM ',v_table_name_stg)   INTO tot_rec_COUNT;


   INSERT INTO  log_table values(pi_filename,now(),tot_rec_COUNT,proc_rec_COUNT, error_rec_COUNT);


   raise notice 'v_directory, %',v_directory;


   raise notice 'pi_filename, %',pi_filename;


   raise notice 'v_region, %',v_region;


  perform aws_s3.query_export_to_s3('SELECT replace(trim(substring(error_details,position(''('' in error_details)+1),'')''),'','','';''),file_name,processed_time FROM  error_table WHERE file_name = '''||pi_filename||'''',
   aws_commons.create_s3_uri(v_directory, pi_filename||'.bad', v_region),
   options :='FORmat csv, header, delimiter $$,$$'
   );


raise notice 'v_directory, %',v_directory;


   raise notice 'pi_filename, %',pi_filename;


   raise notice 'v_region, %',v_region;


  perform aws_s3.query_export_to_s3('SELECT * FROM  log_table WHERE file_name = '''||pi_filename||'''',
   aws_commons.create_s3_uri(v_directory, pi_filename||'.log', v_region),
   options :='FORmat csv, header, delimiter $$,$$'
   );




   END IF;
 j := j+1;
 END LOOP;


       RETURN 'OK';
EXCEPTION
    WHEN  OTHERS THEN
  raise notice 'error %',sqlerrm;
   ERCODE=SQLSTATE;
   IF ERCODE = 'NTFIP' THEN
     v_sqlerrm := concat_Ws('',sqlerrm,'No data for the filename');
   ELSIF ERCODE = 'S3IMP' THEN
    v_sqlerrm := concat_Ws('',sqlerrm,'Error While exporting the file from S3');
   ELSE
      v_sqlerrm := sqlerrm;
   END IF;


 select distinct directory into v_directory from  meta_EXTERNAL_TABLE;




 raise notice 'exc v_directory, %',v_directory;


   raise notice 'exc pi_filename, %',pi_filename;


   raise notice 'exc v_region, %',v_region;


  perform aws_s3.query_export_to_s3('SELECT * FROM  error_table WHERE file_name = '''||pi_filename||'''',
   aws_commons.create_s3_uri(v_directory, pi_filename||'.bad', v_region),
   options :='FORmat csv, header, delimiter $$,$$'
   );
    RETURN null;
END;
$function$
```

# Migrate function-based indexes from Oracle to PostgreSQL
<a name="migrate-function-based-indexes-from-oracle-to-postgresql"></a>

*Veeranjaneyulu Grandhi and Navakanth Talluri, Amazon Web Services*

## Summary
<a name="migrate-function-based-indexes-from-oracle-to-postgresql-summary"></a>

Indexes are a common way to enhance database performance. An index allows the database server to find and retrieve specific rows much faster than it could without an index. But indexes also add overhead to the database system as a whole, so they should be used sensibly. Function-based indexes, which are based on a function or expression, can involve multiple columns and mathematical expressions. A function-based index improves the performance of queries that use the index expression. 

Natively, PostgreSQL doesn't support creating function-based indexes that use functions whose volatility is defined as `STABLE` or `VOLATILE`; index expressions must use `IMMUTABLE` functions. However, you can create similar wrapper functions with volatility `IMMUTABLE` and use them in index creation.

An `IMMUTABLE` function cannot modify the database, and it’s guaranteed to return the same results given the same arguments forever. This category allows the optimizer to pre-evaluate the function when a query calls it with constant arguments. 

This pattern helps you migrate Oracle function-based indexes that use functions such as `to_char`, `to_date`, and `to_number` to their PostgreSQL equivalents.

## Prerequisites and limitations
<a name="migrate-function-based-indexes-from-oracle-to-postgresql-prereqs"></a>

**Prerequisites **
+ An active Amazon Web Services (AWS) account
+ A source Oracle database instance with the listener service set up and running
+ Familiarity with PostgreSQL databases

**Limitations**
+ Database size limit is 64 TB.
+ Functions used in index creation must be marked `IMMUTABLE`.

**Product versions**
+ All Oracle database editions for versions 11g (versions 11.2.0.3.v1 and later) and up to 12.2, and 18c
+ PostgreSQL versions 9.6 and later

## Architecture
<a name="migrate-function-based-indexes-from-oracle-to-postgresql-architecture"></a>

**Source technology stack**
+ An Oracle database on premises or on an Amazon Elastic Compute Cloud (Amazon EC2) instance, or an Amazon RDS for Oracle DB instance

**Target technology stack**
+ Any PostgreSQL engine

## Tools
<a name="migrate-function-based-indexes-from-oracle-to-postgresql-tools"></a>
+ **pgAdmin 4** is an open-source management tool for PostgreSQL. The pgAdmin 4 tool provides a graphical interface for creating, maintaining, and using database objects.
+ **Oracle SQL Developer** is an integrated development environment (IDE) for developing and managing Oracle Database in both traditional and cloud deployments.

## Epics
<a name="migrate-function-based-indexes-from-oracle-to-postgresql-epics"></a>

### Create a function-based index using a default function
<a name="create-a-function-based-index-using-a-default-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a function-based index on a column using the to\_char function. | Use the following code to create the function-based index.<pre>postgres=# create table funcindex( col1 timestamp without time zone);<br />CREATE TABLE<br />postgres=# insert into funcindex values (now());<br />INSERT 0 1<br />postgres=# select * from funcindex;<br />            col1<br />----------------------------<br /> 2022-08-09 16:00:57.77414<br />(1 row)<br /> <br />postgres=# create index funcindex_idx on funcindex(to_char(col1,'DD-MM-YYYY HH24:MI:SS'));<br />ERROR:  functions in index expression must be marked IMMUTABLE</pre> PostgreSQL doesn’t allow creating a function-based index without the `IMMUTABLE` clause. | DBA, App developer | 
| Check the volatility of the function. | To check the function volatility, use the code in the *Additional information* section.   | DBA | 

### Create function-based indexes using a wrapper function
<a name="create-function-based-indexes-using-a-wrapper-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a wrapper function. | To create a wrapper function, use the code in the *Additional information* section. | PostgreSQL developer | 
| Create an index by using the wrapper function. | Use the code in the *Additional information* section to create a user-defined function with the keyword `IMMUTABLE` in the same schema as the application, and refer to it in the index-creation script. If a user-defined function is created in a common schema (as in the previous example), update the `search_path` as shown.<pre>ALTER ROLE <ROLENAME> set search_path=$user, COMMON;</pre> | DBA, PostgreSQL developer | 

### Validate index creation
<a name="validate-index-creation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate index creation. | Validate that the index needs to be created, based on query access patterns. | DBA | 
| Validate that the index can be used. | To check whether the PostgreSQL optimizer picks up the function-based index, run the SQL statement with `EXPLAIN` or `EXPLAIN ANALYZE`. Use the code in the *Additional information* section. If possible, gather the table statistics as well. In the example plan, the optimizer chooses the function-based index because of the predicate condition. | DBA | 

## Related resources
<a name="migrate-function-based-indexes-from-oracle-to-postgresql-resources"></a>
+ [Function-based indexes](https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_indexes.htm#ADFNS00505) (Oracle documentation)
+ [Indexes on Expressions](https://www.postgresql.org/docs/9.4/indexes-expressional.html) (PostgreSQL documentation)
+ [PostgreSQL volatility](https://www.postgresql.org/docs/current/xfunc-volatility.html) (PostgreSQL documentation)
+ [PostgreSQL search\_path](https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH) (PostgreSQL documentation)
+ [Oracle Database 19c to Amazon Aurora PostgreSQL Migration Playbook](https://docs.aws.amazon.com/dms/latest/oracle-to-aurora-postgresql-migration-playbook/chap-oracle-aurora-pg.html)

## Additional information
<a name="migrate-function-based-indexes-from-oracle-to-postgresql-additional"></a>

**Create a wrapper function**

```
CREATE OR REPLACE FUNCTION myschema.to_char(var1 timestamp without time zone, var2 varchar)
RETURNS varchar AS
$BODY$
  -- Delegate to the built-in to_char with the caller-supplied format mask
  select to_char(var1, var2);
$BODY$
LANGUAGE sql IMMUTABLE;
```

**Create an index by using the wrapper function**

```
postgres=# create function common.to_char(var1 timestamp without time zone, var2 varchar) RETURNS varchar AS $BODY$ select to_char(var1, var2); $BODY$ LANGUAGE sql IMMUTABLE;
CREATE FUNCTION
postgres=# create index funcindex_idx on funcindex(common.to_char(col1,'DD-MM-YYYY HH24:MI:SS'));
CREATE INDEX
```

**Check the volatility of the function**

```
SELECT DISTINCT p.proname as "Name", p.provolatile as "volatility"
FROM pg_catalog.pg_proc p
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
     LEFT JOIN pg_catalog.pg_language l ON l.oid = p.prolang
WHERE n.nspname OPERATOR(pg_catalog.~) '^(pg_catalog)$' COLLATE pg_catalog.default
  AND p.proname = 'to_char'
GROUP BY p.proname, p.provolatile
ORDER BY 1;
```

**Validate that the index can be used**

```
explain analyze <SQL>
 
 
postgres=# explain select col1 from funcindex where common.to_char(col1,'DD-MM-YYYY HH24:MI:SS') = '09-08-2022 16:00:57';
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Index Scan using funcindex_idx on funcindex  (cost=0.42..8.44 rows=1 width=8)
   Index Cond: ((common.to_char(col1, 'DD-MM-YYYY HH24:MI:SS'::character varying))::text = '09-08-2022 16:00:57'::text)
(2 rows)
```

# Migrate Oracle native functions to PostgreSQL using extensions
<a name="migrate-oracle-native-functions-to-postgresql-using-extensions"></a>

*Pinesh Singal, Amazon Web Services*

## Summary
<a name="migrate-oracle-native-functions-to-postgresql-using-extensions-summary"></a>

This migration pattern provides step-by-step guidance for migrating an Amazon Relational Database Service (Amazon RDS) for Oracle database instance to an Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition database by converting `aws_oracle_ext` and `orafce` extension calls to native PostgreSQL (`psql`) built-in code, which saves processing time.

The pattern describes an offline manual migration strategy with no downtime for a multi-terabyte Oracle source database with a high number of transactions.

The migration process uses AWS Schema Conversion Tool (AWS SCT) with the `aws_oracle_ext` and `orafce` extensions to convert an Amazon RDS for Oracle database schema to an Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible database schema. Then the code is manually changed to PostgreSQL-supported native `psql` built-in code. This is because the extension calls impact code processing on the PostgreSQL database server, and not all the extension code is fully compliant or compatible with PostgreSQL code.

This pattern primarily focuses on manually migrating SQL code by using AWS SCT and the extensions `aws_oracle_ext` and `orafce`. You convert the already used extension calls into native PostgreSQL (`psql`) built-ins. Then you remove all references to the extensions and convert the code accordingly.

## Prerequisites and limitations
<a name="migrate-oracle-native-functions-to-postgresql-using-extensions-prereqs"></a>

**Prerequisites**
+ An active AWS account 
+ Operating system (Windows or Mac) or Amazon EC2 instance (up and running) 
+ Orafce

**Limitations**

Not all Oracle functions that use the `aws_oracle_ext` or `orafce` extensions can be converted to native PostgreSQL functions. Some converted code might need manual rework to compile against PostgreSQL libraries.

One drawback of using the AWS SCT extensions is their slower performance in running queries and fetching results. You can see the cost with a simple [PostgreSQL EXPLAIN plan](https://www.postgresql.org/docs/current/sql-explain.html) (the execution plan of a statement) that compares the Oracle `SYSDATE` function migrated to the PostgreSQL `NOW()` function across all three code variants (`aws_oracle_ext`, `orafce`, and `psql` default), as explained in the *Performance comparison check* section in the attached document.
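
For example, you can reproduce the comparison with `EXPLAIN ANALYZE` on each variant. This sketch assumes that the AWS SCT extension pack and orafce are installed in their default schemas (`aws_oracle_ext` and `oracle`):

```
EXPLAIN ANALYZE SELECT aws_oracle_ext.sysdate();  -- AWS SCT extension pack call
EXPLAIN ANALYZE SELECT oracle.sysdate();          -- orafce call
EXPLAIN ANALYZE SELECT now();                     -- native psql built-in
```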

**Product versions**
+ **Source**: Amazon RDS for Oracle database 10.2 and later (for 10.x), 11g (11.2.0.3.v1 and later) and up to 12.2, 18c, and 19c (and later) for Enterprise Edition, Standard Edition, Standard Edition 1, and Standard Edition 2
+ **Target**: Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible database 9.4 and later (for 9.x), 10.x, 11.x, 12.x, 13.x, and 14.x (and later)
+ **AWS SCT**: Latest version (this pattern was tested with 1.0.632)
+ **Orafce**: Latest version (this pattern was tested with 3.9.0)

## Architecture
<a name="migrate-oracle-native-functions-to-postgresql-using-extensions-architecture"></a>

**Source technology stack**
+ An Amazon RDS for Oracle database instance with version 12.1.0.2.v18

**Target technology stack**
+ An Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible database instance with version 11.5

**Database migration architecture**

The following diagram represents the database migration architecture between the source Oracle and target PostgreSQL databases. The architecture involves AWS Cloud, a virtual private cloud (VPC), Availability Zones, a private subnet, an Amazon RDS for Oracle database, AWS SCT, an Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible database, extensions for Oracle (`aws_oracle_ext` and `orafce`), and structured query language (SQL) files.

![\[The process is explained in the following list.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/158847bb-27ef-4915-a9ca-7d87073792c1/images/234b824a-bfe5-4ef0-9fa7-8401370b92a5.png)


1. Launch Amazon RDS for Oracle DB instance (source DB).

1. Use AWS SCT with the `aws_oracle_ext` and `orafce` extension packs to convert the source code from Oracle to PostgreSQL.

1. The conversion produces PostgreSQL-supported migrated .sql files.

1. Manually convert the non-converted Oracle extension codes to PostgreSQL (`psql`) codes.

1. The manual conversion produces PostgreSQL-supported converted .sql files.

1. Run these .sql files on your Amazon RDS for PostgreSQL DB instance (target DB).

## Tools
<a name="migrate-oracle-native-functions-to-postgresql-using-extensions-tools"></a>

**Tools**

*AWS services*
+ [AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) - AWS Schema Conversion Tool (AWS SCT) converts your existing database schema from one database engine to another. You can convert relational Online Transactional Processing (OLTP) schema, or data warehouse schema. Your converted schema is suitable for an Amazon RDS for MySQL DB instance, an Amazon Aurora DB cluster, an Amazon RDS for PostgreSQL DB instance, or an Amazon Redshift cluster. The converted schema can also be used with a database on an Amazon EC2 instance or stored as data in an Amazon S3 bucket.

  AWS SCT provides a project-based user interface to automatically convert the database schema of your source database into a format compatible with your target Amazon RDS instance. 

  You can use AWS SCT to do migration from an Oracle source database to any of the targets listed preceding. Using AWS SCT, you can export the source database object definitions such as schema, views, stored procedures, and functions. 

  You can use AWS SCT to convert data from Oracle to Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition. 

  In this pattern, you use AWS SCT to convert and migrate Oracle code into PostgreSQL using the extensions `aws_oracle_ext` and `orafce`, and manually migrating the extension codes into `psql` default or native built-in code.
+ The [AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_ExtensionPack.html) extension pack is an add-on module that emulates functions present in the source database that are required when converting objects to the target database. Before you can install the AWS SCT extension pack, you need to convert your database schema.

  When you convert your database or data warehouse schema, AWS SCT adds an additional schema to your target database. This schema implements SQL system functions of the source database that are required when writing your converted schema to your target database. This additional schema is called the extension pack schema.

  The extension pack schema for OLTP databases is named according to the source database. For Oracle databases, the extension pack schema is `AWS_ORACLE_EXT`.

*Other tools*
+ [Orafce](https://github.com/orafce/orafce) – Orafce is a module that implements Oracle compatible functions, data types, and packages. It’s an open-source tool with a Berkeley Source Distribution (BSD) license so that anyone can use it. The `orafce` module is useful for migrating from Oracle to PostgreSQL because it has many Oracle functions implemented in PostgreSQL.

 

**Code**

For a list of all commonly used and migrated code from Oracle to PostgreSQL to avoid AWS SCT extension code usage, see the attached document.

## Epics
<a name="migrate-oracle-native-functions-to-postgresql-using-extensions-epics"></a>

### Configure the Amazon RDS for Oracle source database
<a name="configure-the-amazon-rds-for-oracle-source-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Oracle database instance. | Create an Amazon RDS for Oracle database instance from the Amazon RDS console. | General AWS, DBA | 
| Configure the security groups. | Configure inbound and outbound security groups. | General AWS | 
| Create the database. | Create the Oracle database with needed users and schemas. | General AWS, DBA | 
| Create the objects. | Create objects and insert data in the schema. For an illustrative example, see the code after this table. | DBA | 
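
For example, you might create a small table whose default values exercise Oracle built-ins such as `SYSDATE` that AWS SCT later converts. The schema, table, and data that follow are illustrative only.

```
-- Illustrative Oracle source objects:
CREATE TABLE app_user.orders (
    order_id    NUMBER PRIMARY KEY,
    order_date  DATE DEFAULT SYSDATE,
    status      VARCHAR2(20)
);

INSERT INTO app_user.orders VALUES (1, SYSDATE, 'NEW');
COMMIT;
```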

### Configure the Amazon RDS for PostgreSQL target database
<a name="configure-the-amazon-rds-for-postgresql-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the PostgreSQL database instance. | Create an Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL database instance from the Amazon RDS console. | General AWS, DBA | 
| Configure the security groups. | Configure inbound and outbound security groups. | General AWS | 
| Create the database. | Create the PostgreSQL database with needed users and schemas. | General AWS, DBA | 
| Validate the extensions. | Make sure that `aws_oracle_ext` and `orafce` are installed and configured correctly in the PostgreSQL database. For example validation queries, see the code after this table. | DBA | 
| Verify that the PostgreSQL database is available. | Make sure that the PostgreSQL database is up and running. | DBA | 
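
A minimal validation sketch, assuming that `orafce` was installed with `CREATE EXTENSION` and that the AWS SCT extension pack created the `aws_oracle_ext` schema:

```
-- Confirm that orafce is installed:
SELECT extname, extversion FROM pg_extension WHERE extname = 'orafce';

-- Install orafce if it is missing:
CREATE EXTENSION IF NOT EXISTS orafce;

-- Confirm that the extension pack and orafce schemas exist:
SELECT nspname FROM pg_namespace WHERE nspname IN ('aws_oracle_ext', 'oracle');
```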

### Migrate the Oracle schema into PostgreSQL using AWS SCT and the extensions
<a name="migrate-the-oracle-schema-into-postgresql-using-aws-sct-and-the-extensions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install AWS SCT. | Install the latest version of AWS SCT. | DBA | 
| Configure AWS SCT. | Configure AWS SCT with Java Database Connectivity (JDBC) drivers for Oracle (`ojdbc8.jar`) and PostgreSQL (`postgresql-42.2.5.jar`). | DBA | 
| Enable the AWS SCT extension pack or template. | Under AWS SCT **Project Settings**, enable built-in function implementation with the `aws_oracle_ext` and `orafce` extensions for the Oracle database schema. | DBA | 
| Convert the schema. | In AWS SCT, choose **Convert Schema** to convert the schema from Oracle to PostgreSQL and generate the .sql files. | DBA | 

### Convert AWS SCT extension code to psql code
<a name="convert-aws-sct-extension-code-to-psql-code"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Manually convert the code. | Manually convert each line of extension-supported code into `psql` default built-in code, as detailed in the attached document. For example, change `AWS_ORACLE_EXT.SYSDATE()` or `ORACLE.SYSDATE()` to `NOW()`. For more examples, see the code after this table. | DBA | 
| Validate the code. | (Optional) Validate each line of code by temporarily running it in the PostgreSQL database. | DBA | 
| Create objects in the PostgreSQL database. | To create objects in the PostgreSQL database, run the .sql files that were generated by AWS SCT and modified in the previous two steps. | DBA | 
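
The following sketch shows a few common rewrites. The calls are illustrative; the schema qualification of the orafce functions depends on how the extension was installed.

```
-- SYSDATE: extension calls versus the native built-in
SELECT aws_oracle_ext.sysdate();   -- AWS SCT extension pack
SELECT oracle.sysdate();           -- orafce
SELECT now();                      -- native psql

-- NVL: orafce emulation versus the native built-in
SELECT nvl(NULL::text, 'n/a');        -- orafce
SELECT coalesce(NULL::text, 'n/a');   -- native psql

-- ADD_MONTHS: orafce emulation versus native interval arithmetic
SELECT add_months(current_date, 1);         -- orafce
SELECT current_date + interval '1 month';   -- native psql
```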

## Related resources
<a name="migrate-oracle-native-functions-to-postgresql-using-extensions-resources"></a>
+ Database
  + [Oracle on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html)
  + [PostgreSQL on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html)
  + [Working with Amazon Aurora PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html)
  + [PostgreSQL EXPLAIN plan](https://www.postgresql.org/docs/current/sql-explain.html)
+ AWS SCT
  + [AWS Schema Conversion Tool Overview](https://aws.amazon.com/dms/schema-conversion-tool/)
  + [AWS SCT User Guide](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)
  + [Using the AWS SCT user interface](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html)
  + [Using Oracle Database as a source for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html)
+ Extensions for AWS SCT
  + [Using the AWS SCT extension pack](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_ExtensionPack.html)
  + [Oracle functionality (en)](https://postgres.cz/wiki/Oracle_functionality_(en))
  + [PGXN orafce](https://pgxn.org/dist/orafce/)
  + [GitHub orafce](https://github.com/orafce/orafce)

## Additional information
<a name="migrate-oracle-native-functions-to-postgresql-using-extensions-additional"></a>

For detailed commands, syntax, and examples for manually converting code, see the attached document.

## Attachments
<a name="attachments-158847bb-27ef-4915-a9ca-7d87073792c1"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/158847bb-27ef-4915-a9ca-7d87073792c1/attachments/attachment.zip)

# Migrate a Db2 database from Amazon EC2 to Aurora MySQL-Compatible by using AWS DMS
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms"></a>

*Pinesh Singal, Amazon Web Services*

## Summary
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms-summary"></a>

After you migrate your [IBM Db2 for LUW database](https://www.ibm.com/docs/en/db2/11.5?topic=federation) to [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/), consider re-architecting the database by moving to an Amazon Web Services (AWS) cloud-native database. This pattern covers migrating an IBM [Db2](https://www.ibm.com/docs/en/db2/11.5) for LUW database running on an [Amazon EC2](https://docs.aws.amazon.com/ec2/) instance to an [Amazon Aurora MySQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraMySQL.html) database on AWS.

The pattern describes an online migration strategy with minimal downtime for a multi-terabyte Db2 source database with a high number of transactions. 

This pattern uses [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) to convert the Db2 database schema to an Aurora MySQL-Compatible schema. Then the pattern uses [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) to migrate data from the Db2 database to the Aurora MySQL-Compatible database. Manual conversions will be required for the code that isn’t converted by AWS SCT.

## Prerequisites and limitations
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms-prereqs"></a>

**Prerequisites**
+ An active AWS account with a virtual private cloud (VPC)
+ AWS SCT
+ AWS DMS

**Product versions**
+ AWS SCT latest version
+ Db2 for Linux version 11.1.4.4 and later

## Architecture
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms-architecture"></a>

**Source technology stack**
+ Db2 for Linux (x86-64) running on an EC2 instance

**Target technology stack**
+ An Amazon Aurora MySQL-Compatible Edition database instance

**Source and target architecture**

The following diagram shows the data migration architecture between the source Db2 and target Aurora MySQL-Compatible databases. The architecture on the AWS Cloud includes a virtual private cloud (VPC), an Availability Zone, a public subnet for the Db2 instance and the AWS DMS replication instance, and a private subnet for the Aurora MySQL-Compatible database.

![\[Architecture of data migration between source Db2 and target Aurora MySQL-Compatible databases.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5abfccc4-148c-4794-8d80-e3c122679125/images/f30664f8-2d6a-4448-8d5c-cff3988a52c7.png)


## Tools
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms-tools"></a>

**AWS services**
+ [Amazon Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) is a fully managed relational database engine that's built for the cloud and compatible with MySQL and PostgreSQL.
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that’s compatible with the target database. AWS SCT supports IBM Db2 for LUW versions 9.1, 9.5, 9.7, 10.1, 10.5, 11.1, and 11.5 as a source.

## Best practices
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms-best-practices"></a>

For best practices, see [Best practices for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html).

## Epics
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms-epics"></a>

### Configure the source IBM Db2 database
<a name="configure-the-source-ibm-db2-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the IBM Db2 database on Amazon EC2. | You can create an IBM Db2 database on an EC2 instance by using an Amazon Machine Image (AMI) from AWS Marketplace or by installing Db2 software on an EC2 instance. Launch an EC2 instance by selecting an AMI for IBM Db2 (for example, [IBM Db2 v11.5.7 RHEL 7.9](https://aws.amazon.com/marketplace/pp/prodview-aclrjj4hq2ols?sr=0-1&ref_=beagle&applicationId=AWS-EC2-Console)), which is similar to an on-premises database. | DBA, General AWS | 
| Configure security groups. | Configure the VPC security group inbound rules to allow SSH (Secure Shell) on port 22 and TCP on port 50000. | General AWS | 
| Create the database instance. | Create a new instance (user) and database (schema), or use the default `db2inst1` instance and sample database. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | DBA | 
| Confirm that the Db2 DB instance is available. | To confirm that the Db2 database instance is up and running, use the `db2pd` command. | DBA | 

### Configure the target Aurora MySQL-Compatible database
<a name="configure-the-target-aurora-mysql-compatible-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Aurora MySQL-Compatible database. | Create an Amazon Aurora MySQL-Compatible database instance from the Amazon RDS console. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | DBA, General AWS | 
| Configure security groups. | Configure the VPC security group inbound rules for SSH and TCP connections. | General AWS | 
| Confirm that the Aurora database is available. | To make sure that the Aurora MySQL-Compatible database is up and running, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | DBA | 

### Configure and run AWS SCT
<a name="configure-and-run-aws-sct"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install AWS SCT. | Download and install the latest version of [AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html) (at the time of writing, version 1.0.628). | General AWS | 
| Configure AWS SCT. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | General AWS | 
| Create an AWS SCT project. | Create an AWS SCT project and report that uses Db2 for LUW as the source DB engine and Aurora MySQL-Compatible as the target DB engine. To identify the privileges needed to connect to a Db2 for LUW database, see [Using Db2 LUW as a source for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.DB2LUW.html). | General AWS | 
| Validate the objects. | Choose **Load schema**, and then validate the objects. Update any incorrect objects on the target database:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | DBA, General AWS | 

### Configure and run AWS DMS
<a name="configure-and-run-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a replication instance. | Sign in to the AWS Management Console, navigate to the AWS DMS service, and create a replication instance with valid settings for the VPC security group that you configured for the source and target databases. | General AWS | 
| Create endpoints. | Create the source endpoint for the Db2 database, and create the target endpoint for the Aurora MySQL-Compatible database:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | General AWS | 
| Create migration tasks. | Create a single migration task or multiple migration tasks for full load, change data capture (CDC), or data validation:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | General AWS | 
| Plan the production run. | Confirm downtime with stakeholders such as application owners to run AWS DMS in production systems. | Migration lead | 
| Run the migration tasks. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | General AWS | 
| Validate the data. | Review migration task results and data in the source Db2 and target MySQL databases:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) For example count queries, see the code after this table. | DBA | 
| Stop migration tasks. | After data validation is successfully completed, stop the validation migration tasks. | General AWS | 
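
A minimal validation sketch; the `db2inst1.employee` table is an example only, so substitute your own schemas and tables:

```
-- On the source Db2 database:
SELECT COUNT(*) FROM db2inst1.employee;

-- On the target Aurora MySQL-Compatible database:
SELECT COUNT(*) FROM employee;

-- The row counts (and spot-checked rows) should match before you stop the tasks.
```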

## Troubleshooting
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| AWS SCT source and target test connections are failing. | Configure JDBC driver versions and VPC security group inbound rules to accept the incoming traffic. | 
| The Db2 source endpoint test run fails. | Configure the extra connection attribute `CurrentLSN=scan`. | 
| The AWS DMS task fails to connect to the Db2 source, and the following error is returned: `database is recoverable if either or both of the database configuration parameters LOGARCHMETH1 and LOGARCHMETH2 are set to ON`. | To avoid the error, run the following commands:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms.html) | 

## Related resources
<a name="migrate-a-db2-database-from-amazon-ec2-to-aurora-mysql-compatible-by-using-aws-dms-resources"></a>

**Amazon EC2**
+ [Amazon EC2](https://aws.amazon.com/ec2/)
+ [Amazon EC2 User Guides](https://docs.aws.amazon.com/ec2/)

**Databases**
+ [IBM Db2 Database](https://www.ibm.com/products/db2-database)
+ [Amazon Aurora](https://aws.amazon.com/rds/aurora/)
+ [Working with Amazon Aurora MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraMySQL.html)

**AWS SCT**
+ [AWS DMS Schema Conversion](https://aws.amazon.com/dms/schema-conversion-tool/)
+ [AWS Schema Conversion Tool User Guide](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)
+ [Using the AWS SCT user interface](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html)
+ [Using IBM Db2 LUW as a source for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.DB2LUW.html)

**AWS DMS**
+ [AWS Database Migration Service](https://aws.amazon.com/dms/)
+ [AWS Database Migration Service User Guide](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
+ [Sources for data migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html)
+ [Targets for data migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.html)
+ [AWS Database Migration Service and AWS Schema Conversion Tool now support IBM Db2 LUW as a source](https://aws.amazon.com/blogs/database/aws-database-migration-service-and-aws-schema-conversion-tool-now-support-ibm-db2-as-a-source/) (blog post)
+ [Migrating Applications Running Relational Databases to AWS](https://d1.awsstatic.com/whitepapers/Migration/migrating-applications-to-aws.pdf)

# Migrate a Microsoft SQL Server database from Amazon EC2 to Amazon DocumentDB by using AWS DMS
<a name="migrate-a-microsoft-sql-server-database-from-amazon-ec2-to-amazon-documentdb-by-using-aws-dms"></a>

*Umamaheswara Nooka, Amazon Web Services*

## Summary
<a name="migrate-a-microsoft-sql-server-database-from-amazon-ec2-to-amazon-documentdb-by-using-aws-dms-summary"></a>

This pattern describes how to use AWS Database Migration Service (AWS DMS) to migrate a Microsoft SQL Server database hosted on an Amazon Elastic Compute Cloud (Amazon EC2) instance to an Amazon DocumentDB (with MongoDB compatibility) database.

The AWS DMS replication task reads the table structure of the SQL Server database, creates the corresponding collection in Amazon DocumentDB, and performs a full-load migration.

You can also use this pattern to migrate an on-premises SQL Server or an Amazon Relational Database Service (Amazon RDS) for SQL Server DB instance to Amazon DocumentDB. For more information, see the guide [Migrating Microsoft SQL Server databases to the AWS Cloud](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-sql-server/welcome.html) on the AWS Prescriptive Guidance website.

## Prerequisites and limitations
<a name="migrate-a-microsoft-sql-server-database-from-amazon-ec2-to-amazon-documentdb-by-using-aws-dms-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing SQL Server database on an EC2 instance.
+ The fixed database role `db_owner` assigned to AWS DMS in the SQL Server database. For more information, see [Database-level roles](https://docs.microsoft.com/en-us/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-ver15) in the SQL Server documentation. 
+ Familiarity with using the `mongodump`, `mongorestore`, `mongoexport`, and `mongoimport` utilities to [move data in and out of an Amazon DocumentDB cluster](https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore-dump_restore_import_export_data.html).
+ [Microsoft SQL Server Management Studio](https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-ver15), installed and configured.

**Limitations**
+ The cluster size limit in Amazon DocumentDB is 64 TB. For more information, see [Cluster limits](https://docs.aws.amazon.com/documentdb/latest/developerguide/limits.html#limits-cluster) in the Amazon DocumentDB documentation. 
+ AWS DMS doesn't support the merging of multiple source tables into a single Amazon DocumentDB collection.
+ If AWS DMS processes any changes from a source table without a primary key, it will ignore large object (LOB) columns in the source table.

## Architecture
<a name="migrate-a-microsoft-sql-server-database-from-amazon-ec2-to-amazon-documentdb-by-using-aws-dms-architecture"></a>

**Source technology stack**
+ Amazon EC2

**Target technology stack**
+ Amazon DocumentDB

**Target architecture**

![\[AWS Cloud architecture showing VPC with private DB subnet and components for SQL Server and DocumentDB.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/f186220b-5a94-48b2-840d-f04aedf51651/images/00962b85-8b71-49df-b84a-3adcbc9ad3a3.png)


## Tools
<a name="migrate-a-microsoft-sql-server-database-from-amazon-ec2-to-amazon-documentdb-by-using-aws-dms-tools"></a>
+ [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html) – AWS Database Migration Service (AWS DMS) helps you migrate databases easily and securely.
+ [Amazon DocumentDB](https://docs.aws.amazon.com/documentdb/latest/developerguide/get-started-guide.html) – Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, and fully managed database service.
+ [Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html) – Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud.
+ [Microsoft SQL Server](https://docs.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver15) – SQL Server is a relational database management system.
+ [SQL Server Management Studio (SSMS)](https://docs.microsoft.com/en-us/sql/ssms/sql-server-management-studio-ssms?view=sql-server-ver15) – SSMS is a tool for managing SQL Server, including accessing, configuring, and administering SQL Server components.

## Epics
<a name="migrate-a-microsoft-sql-server-database-from-amazon-ec2-to-amazon-documentdb-by-using-aws-dms-epics"></a>

### Create and configure a VPC
<a name="create-and-configure-a-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | Sign in to the AWS Management Console and open the Amazon VPC console. Create a virtual private cloud (VPC) with an IPv4 CIDR block range. | System administrator | 
| Create security groups and network ACLs. | On the Amazon VPC console, create security groups and network access control lists (network ACLs) for your VPC, according to your requirements. You can also use the default settings for these configurations. For more information about this and other stories, see the "Related resources" section. | System administrator | 

### Create and configure the Amazon DocumentDB cluster
<a name="create-and-configure-the-amazon-documentdb-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Create an Amazon DocumentDB cluster. | Open the Amazon DocumentDB console and choose "Clusters." Choose "Create," and create an Amazon DocumentDB cluster with one instance. Important: Make sure you configure this cluster with your VPC’s security groups. | System administrator  | 
|  Install the mongo shell. | The mongo shell is a command-line utility that you use to connect to and query your Amazon DocumentDB cluster. To install it, create the "/etc/yum.repos.d/mongodb-org-3.6.repo" repository file, and then run the "sudo yum install -y mongodb-org-shell" command to install the mongo shell. To encrypt data in transit, download the public key for Amazon DocumentDB, and then connect to your Amazon DocumentDB instance. For more information about these steps, see the "Related resources" section. | System administrator  | 
| Create a database in the Amazon DocumentDB cluster.  | Run the "use" command with the name of your database to create a database in your Amazon DocumentDB cluster. | System administrator  | 

### Create and configure the AWS DMS replication instance
<a name="create-and-configure-the-aws-dms-replication-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the AWS DMS replication instance. | Open the AWS DMS console and choose "Create replication instance." Enter a name and description for your replication task. Choose the instance class, engine version, storage, VPC, and Multi-AZ settings, and specify whether the instance is publicly accessible. Choose the "Advanced" tab to set the network and encryption settings. Specify the maintenance settings, and then choose "Create replication instance." | System administrator  | 
| Configure the SQL Server database.  | Log in to Microsoft SQL Server and add an inbound rule for communication between the source endpoint and the AWS DMS replication instance. Use the replication instance’s private IP address as the source. Important: The replication instance and target endpoint should be on the same VPC. Use an alternative source in the security group if the VPCs are different for the source and replication instances. | System administrator  | 

### Create and test the source and target endpoints in AWS DMS
<a name="create-and-test-the-source-and-target-endpoints-in-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the source and target database endpoints. | Open the AWS DMS console and choose "Connect source and target database endpoints." Specify the connection information for the source and target databases. If required, choose the "Advanced" tab to set values for "Extra connection attributes." Download and use the certificate bundle in your endpoint configuration. | System administrator  | 
| Test the endpoint connection.  | Choose "Run test" to test the connection. Troubleshoot any error messages by verifying the security group settings and the connections to the AWS DMS replication instance from both the source and target database instances. | System administrator  | 

### Migrate data
<a name="migrate-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the AWS DMS migration task.  | On the AWS DMS console, choose "Tasks," and then choose "Create task." Specify the task options, including the source and destination endpoint names and the replication instance name. Under "Migration type," choose "Migrate existing data" or "Replicate data changes only." Choose "Start task." | System administrator  | 
| Run the AWS DMS migration task. | Under "Task settings," specify the settings for the table preparation mode, such as "Do nothing," "Drop tables on target," "Truncate," and "Include LOB columns in replication." Set a maximum LOB size that AWS DMS will accept and choose "Enable logging." Leave the "Advanced settings" at their default values and choose "Create task." | System administrator  | 
| Monitor the migration. | On the AWS DMS console, choose "Tasks" and choose your migration task. Choose "Task monitoring" to monitor your task. The task stops when the full-load migration is complete and cached changes are applied. | System administrator  | 

### Test and verify the migration
<a name="test-and-verify-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Connect to the Amazon DocumentDB cluster by using the mongo shell. | Open the Amazon DocumentDB console, choose your cluster under "Clusters." In the "Connectivity and Security" tab, choose "Connect to this cluster with the mongo shell." | System administrator  | 
| Verify the results of your migration. | Run the "use" command with the name of your database, and then run the "show collections" command. For each collection, run the "db.<collection_name>.count()" command. If the results match your source database, then your migration was successful. | System administrator  | 

## Related resources
<a name="migrate-a-microsoft-sql-server-database-from-amazon-ec2-to-amazon-documentdb-by-using-aws-dms-resources"></a>

**Create and configure a VPC**
+ [Create a security group for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#CreatingSecurityGroups)
+ [Create a network ACL](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html)


**Create and configure the Amazon DocumentDB cluster**
+ [Create an Amazon DocumentDB cluster](https://docs.aws.amazon.com/documentdb/latest/developerguide/get-started-guide.html#cloud9-cluster)
+ [Install the mongo shell for Amazon DocumentDB ](https://docs.aws.amazon.com/documentdb/latest/developerguide/get-started-guide.html#cloud9-mongoshell)
+ [Connect to your Amazon DocumentDB cluster](https://docs.aws.amazon.com/documentdb/latest/developerguide/get-started-guide.html#cloud9-connectcluster)


**Create and configure the AWS DMS replication instance**
+ [Use public and private replication instances](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html#CHAP_ReplicationInstance.PublicPrivate)


**Create and test the source and target endpoints in AWS DMS**
+ [Use Amazon DocumentDB as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/target.docdb.html)
+ [Use a SQL Server database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html)
+ [Use AWS DMS endpoints](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.html)


**Migrate data**
+ [Migrate to Amazon DocumentDB](https://docs.aws.amazon.com/documentdb/latest/developerguide/docdb-migration.html)


**Other resources**
+ [Limitations on using SQL Server as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html#CHAP_Source.SQLServer.Limitations) 
+ [How to use Amazon DocumentDB to build and manage applications at scale](https://aws.amazon.com/blogs/database/how-to-use-amazon-documentdb-with-mongodb-compatibility-to-build-and-manage-applications-at-scale/)

# Migrate an on-premises ThoughtSpot Falcon database to Amazon Redshift
<a name="migrate-an-on-premises-thoughtspot-falcon-database-to-amazon-redshift"></a>

*Battulga Purevragchaa and Antony Prasad Thevaraj, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-thoughtspot-falcon-database-to-amazon-redshift-summary"></a>

On-premises data warehouses require significant administration time and resources, particularly for large datasets. The financial cost of building, maintaining, and growing these warehouses is also very high. To help manage costs, keep extract, transform, and load (ETL) complexity low, and deliver performance as your data grows, you must constantly choose which data to load and which data to archive.

By migrating your on-premises [ThoughtSpot Falcon databases](https://docs.thoughtspot.com/software/latest/data-caching) to the Amazon Web Services (AWS) Cloud, you can access cloud-based data lakes and data warehouses that increase your business agility, security, and application reliability, in addition to reducing your overall infrastructure costs. Amazon Redshift helps to significantly lower the cost and operational overhead of a data warehouse. You can also use Amazon Redshift Spectrum to analyze large amounts of data in its native format without data loading.

This pattern describes the steps and process for migrating a ThoughtSpot Falcon database from an on-premises data center to an Amazon Redshift database on the AWS Cloud.

## Prerequisites and limitations
<a name="migrate-an-on-premises-thoughtspot-falcon-database-to-amazon-redshift-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A ThoughtSpot Falcon database hosted in an on-premises data center

**Product versions**
+ ThoughtSpot version 7.0.1 

## Architecture
<a name="migrate-an-on-premises-thoughtspot-falcon-database-to-amazon-redshift-architecture"></a>

![\[Migrating a ThoughtSpot Falcon database from an on-premises data center to Amazon Redshift.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b0ca29f4-b269-4b57-b386-738693a6b334/images/2b483990-1f30-439c-ba13-dc0cb0650360.png)


 

The diagram shows the following workflow:

1. Data is hosted in an on-premises relational database.

1. AWS Schema Conversion Tool (AWS SCT) converts the data definition language (DDL) into a format that is compatible with Amazon Redshift.

1. After the tables are created, you can migrate the data by using AWS Database Migration Service (AWS DMS).

1. The data is loaded into Amazon Redshift.

1. The data is stored in Amazon Simple Storage Service (Amazon S3) if you use Redshift Spectrum or already host the data in Amazon S3.

## Tools
<a name="migrate-an-on-premises-thoughtspot-falcon-database-to-amazon-redshift-tools"></a>
+ [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) – AWS Database Migration Service (AWS DMS) helps you quickly and securely migrate databases to AWS.
+ [Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html) – Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools.
+ [AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) – AWS Schema Conversion Tool (AWS SCT) converts your existing database schema from one database engine to another.

## Epics
<a name="migrate-an-on-premises-thoughtspot-falcon-database-to-amazon-redshift-epics"></a>

### Prepare for the migration
<a name="prepare-for-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify the appropriate Amazon Redshift configuration. | Identify the appropriate Amazon Redshift cluster configuration based on your requirements and data volume. For more information, see [Amazon Redshift clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) in the Amazon Redshift documentation. | DBA | 
| Research Amazon Redshift to evaluate if it meets your requirements. | Use the [Amazon Redshift FAQs](https://aws.amazon.com/redshift/faqs/) to understand and evaluate whether Amazon Redshift meets your requirements. | DBA | 

### Prepare the target Amazon Redshift cluster
<a name="prepare-the-target-amazon-redshift-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon Redshift cluster. | Sign in to the AWS Management Console, open the Amazon Redshift console, and then create an Amazon Redshift cluster in a virtual private cloud (VPC). For more information, see [Creating a cluster in a VPC](https://docs.aws.amazon.com/redshift/latest/mgmt/getting-started-cluster-in-vpc.html) in the Amazon Redshift documentation. | DBA | 
| Conduct a PoC for your Amazon Redshift database design. | Follow Amazon Redshift best practices by conducting a proof of concept (PoC) for your database design. For more information, see [Conducting a proof of concept for Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/proof-of-concept-playbook.html) in the Amazon Redshift documentation. | DBA | 
| Create database users. | Create the users in your Amazon Redshift database and grant the appropriate roles for access to the schema and tables, as shown in the example after this table. For more information, see [Grant access privileges for a user or user group](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html) in the Amazon Redshift documentation. | DBA | 
| Apply configuration settings to the target database. | Apply configuration settings to the Amazon Redshift database according to your requirements. For more information about enabling database, session, and server-level parameters, see the [Configuration reference](https://docs.aws.amazon.com/redshift/latest/dg/cm_chap_ConfigurationRef.html) in the Amazon Redshift documentation. | DBA | 
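
A minimal sketch, assuming a hypothetical `ts_reader` user that queries tables in the `public` schema; substitute your own user names, schema, and password policy:

```
-- Create a user for reporting access (illustrative name and password):
CREATE USER ts_reader PASSWORD 'ChangeMe123';

-- Grant access to the schema and its tables:
GRANT USAGE ON SCHEMA public TO ts_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO ts_reader;
```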

### Create objects in the Amazon Redshift cluster
<a name="create-objects-in-the-amazon-redshift-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Manually create tables with DDL in Amazon Redshift. | (Optional) If you use AWS SCT, the tables are automatically created. However, if there are failures when replicating DDLs, you have to manually create the tables (see the example DDL after this table). | DBA | 
| Create external tables for Redshift Spectrum. | Create an external table with an external schema for Amazon Redshift Spectrum, as shown in the example after this table. To create external tables, you must be the owner of the external schema or a [database superuser](https://docs.aws.amazon.com/redshift/latest/dg/r_superusers.html). For more information, see [Creating external tables for Amazon Redshift Spectrum](https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-tables.html) in the Amazon Redshift documentation. | DBA | 
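
The following sketch covers both tasks. The table definitions, S3 location, and IAM role are illustrative assumptions; replace them with your own names and resources.

```
-- Manual DDL for a table that failed conversion (illustrative):
CREATE TABLE sales (
    sale_id   BIGINT NOT NULL,
    sale_date DATE,
    amount    DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (sale_id)
SORTKEY (sale_date);

-- External schema and external table for Redshift Spectrum:
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG
DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

CREATE EXTERNAL TABLE spectrum_schema.sales_history (
    sale_id   BIGINT,
    sale_date DATE,
    amount    DECIMAL(12,2)
)
STORED AS PARQUET
LOCATION 's3://my-example-bucket/sales-history/';
```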

### Migrate data using AWS DMS
<a name="migrate-data-using-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Use AWS DMS to migrate the data. | After you create the DDL of the tables in the Amazon Redshift database, migrate your data to Amazon Redshift by using AWS DMS. For detailed steps and instructions, see [Using an Amazon Redshift database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html) in the AWS DMS documentation. | DBA | 
| Use the COPY command to load the data. | Use the Amazon Redshift `COPY` command to load the data from Amazon S3 to Amazon Redshift, as shown in the example after this table. For more information, see [Using the COPY command to load from Amazon S3](https://docs.aws.amazon.com/redshift/latest/dg/t_loading-tables-from-s3.html) in the Amazon Redshift documentation. | DBA | 
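
A minimal `COPY` sketch, assuming CSV files with a header row in an S3 prefix and an IAM role that can read them; replace the names with your own:

```
COPY sales
FROM 's3://my-example-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
FORMAT AS CSV
IGNOREHEADER 1;
```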

### Validate the Amazon Redshift cluster
<a name="validate-the-amazon-redshift-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the source and target records.  | Validate the table count for the source and target records that were loaded from your source system. | DBA | 
| Implement Amazon Redshift best practices for performance tuning. | Implement Amazon Redshift best practices for table and database design. For more information, see the blog post [Top 10 performance tuning techniques for Amazon Redshift](https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-techniques-for-amazon-redshift/). | DBA | 
| Optimize query performance. | Amazon Redshift uses SQL-based queries to interact with data and objects in the system. Data manipulation language (DML) is the subset of SQL that you can use to view, add, change, and delete data. DDL is the subset of SQL that you use to add, change, and delete database objects such as tables and views. For more information, see [Tuning query performance](https://docs.aws.amazon.com/redshift/latest/dg/c-optimizing-query-performance.html) in the Amazon Redshift documentation. | DBA | 
| Implement WLM.  | You can use workload management (WLM) to define multiple query queues and route queries to appropriate queues at runtime. For more information, see [Implementing workload management](https://docs.aws.amazon.com/redshift/latest/dg/cm-c-implementing-workload-management.html) in the Amazon Redshift documentation. | DBA | 
| Work with concurrency scaling. | By using the Concurrency Scaling feature, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. For more information, see [Working with concurrency scaling](https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html) in the Amazon Redshift documentation. | DBA | 
| Use Amazon Redshift best practices for table design. | When you plan your database, certain important table design decisions can strongly influence overall query performance. For more information about choosing the most appropriate table design option, see [Amazon Redshift best practices for designing tables](https://docs.aws.amazon.com/redshift/latest/dg/c_designing-tables-best-practices.html) in the Amazon Redshift documentation. | DBA | 
| Create materialized views in Amazon Redshift. | A materialized view contains a precomputed results set based on an SQL query over one or more base tables. You can issue `SELECT` statements to query a materialized view in the same way that you query other tables or views in the database (see the example after this table). For more information, see [Creating materialized views in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-overview.html) in the Amazon Redshift documentation. | DBA | 
| Define joins between the tables. | To search more than one table at the same time in ThoughtSpot, you must define joins between the tables by specifying columns that contain matching data across two tables. These columns represent the `primary key` and `foreign key` of the join. You can define them by using the `ALTER TABLE` command in Amazon Redshift or ThoughtSpot, as shown in the example after this table. For more information, see [ALTER TABLE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE.html) in the Amazon Redshift documentation. | DBA | 
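
The following sketch shows both steps, assuming hypothetical `sales` and `sale_items` tables. Note that Amazon Redshift treats primary-key and foreign-key constraints as informational only; they are not enforced.

```
-- Materialized view over a base table:
CREATE MATERIALIZED VIEW daily_sales AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date;

REFRESH MATERIALIZED VIEW daily_sales;

-- Declare the join keys that ThoughtSpot uses (informational constraints):
ALTER TABLE sales ADD CONSTRAINT sales_pk PRIMARY KEY (sale_id);
ALTER TABLE sale_items ADD CONSTRAINT sale_items_fk
    FOREIGN KEY (sale_id) REFERENCES sales (sale_id);
```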

### Set up ThoughtSpot connection to Amazon Redshift
<a name="set-up-thoughtspot-connection-to-amazon-redshift"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Add an Amazon Redshift connection. | Add an Amazon Redshift connection to your on-premises ThoughtSpot Falcon database. For more information, see [Add an Amazon Redshift connection](https://cloud-docs.thoughtspot.com/admin/ts-cloud/ts-cloud-embrace-redshift-add-connection.html) in the ThoughtSpot documentation. | DBA | 
| Edit the Amazon Redshift connection. | You can edit the Amazon Redshift connection to add tables and columns. For more information, see [Edit an Amazon Redshift connection](https://cloud-docs.thoughtspot.com/admin/ts-cloud/ts-cloud-embrace-redshift-edit-connection.html) in the ThoughtSpot documentation. | DBA | 
| Remap the Amazon Redshift connection. | Modify the connection parameters by editing the source mapping .yaml file that was created when you added the Amazon Redshift connection. For example, you can remap the existing table or column to a different table or column in an existing database connection. ThoughtSpot recommends that you check the dependencies before and after you remap a table or column in a connection to ensure that they display as required. For more information, see [Remap an Amazon Redshift connection](https://cloud-docs.thoughtspot.com/admin/ts-cloud/ts-cloud-embrace-redshift-remap-connection.html) in the ThoughtSpot documentation. | DBA | 
| Delete a table from the Amazon Redshift connection.  | (Optional) If you attempt to remove a table in an Amazon Redshift connection, ThoughtSpot checks for dependencies and shows a list of dependent objects. You can choose the listed objects to delete them or remove the dependency. You can then remove the table. For more information, see [Delete a table from an Amazon Redshift connection](https://cloud-docs.thoughtspot.com/admin/ts-cloud/ts-cloud-embrace-redshift-delete-table.html) in the ThoughtSpot documentation. | DBA | 
|  Delete a table with dependent objects from an Amazon Redshift connection. | (Optional) If you try to delete a table with dependent objects, the operation is blocked. A `Cannot delete` window appears, with a list of links to dependent objects. When all the dependencies are removed, you can then delete the table. For more information, see [Delete a table with dependent objects from an Amazon Redshift connection](https://cloud-docs.thoughtspot.com/admin/ts-cloud/ts-cloud-embrace-redshift-delete-table-dependencies.html) in the ThoughtSpot documentation. | DBA | 
| Delete an Amazon Redshift connection. | (Optional) Because a connection can be used in multiple data sources or visualizations, you must delete all of the sources and tasks that use that connection before you can delete the Amazon Redshift connection. For more information, see [Delete an Amazon Redshift connection](https://cloud-docs.thoughtspot.com/admin/ts-cloud/ts-cloud-embrace-redshift-delete-connection.html) in the ThoughtSpot documentation. | DBA | 
|  Check connection reference for Amazon Redshift. | Make sure that you provide the required information for your Amazon Redshift connection by using the [Connection reference](https://cloud-docs.thoughtspot.com/admin/ts-cloud/ts-cloud-embrace-redshift-connection-reference.html) in the ThoughtSpot documentation. | DBA | 

## Additional information
<a name="migrate-an-on-premises-thoughtspot-falcon-database-to-amazon-redshift-additional"></a>
+ [AI-driven analytics at any scale with ThoughtSpot and Amazon Redshift](https://aws.amazon.com/blogs/apn/ai-driven-analytics-at-any-scale-with-thoughtspot-and-amazon-redshift/)
+ [Amazon Redshift pricing](https://aws.amazon.com/redshift/pricing/)
+ [Getting started with AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_GettingStarted.html) 
+ [Getting started with Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html)
+ [Using data extraction agents](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.html)
+ [Chick-fil-A improves speed to insight with ThoughtSpot and AWS](https://www.thoughtspot.com/sites/default/files/pdf/ThoughtSpot-Chick-fil-A-AWS-Case-Study.pdf) 

# Migrate from Oracle Database to Amazon RDS for PostgreSQL by using Oracle GoldenGate
<a name="migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate"></a>

*Dhairya Jindani, Sindhusha Paturu, and Rajeshkumar Sabankar, Amazon Web Services*

## Summary
<a name="migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate-summary"></a>

This pattern shows how to migrate an Oracle database to Amazon Relational Database Service (Amazon RDS) for PostgreSQL by using Oracle Cloud Infrastructure (OCI) GoldenGate.

By using Oracle GoldenGate, you can replicate data between your source database and one or more destination databases with minimal downtime.

**Note**  
The source Oracle database can be either on-premises or on an Amazon Elastic Compute Cloud (Amazon EC2) instance. You can use a similar procedure when using on-premises replication tools.

## Prerequisites and limitations
<a name="migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An Oracle GoldenGate license
+ Java Database Connectivity (JDBC) driver to connect to the PostgreSQL database
+ Schema and tables created with the [AWS Schema Conversion Tool (AWS SCT)](https://aws.amazon.com/dms/schema-conversion-tool/) on the target Amazon RDS for PostgreSQL database

**Limitations**
+ Oracle GoldenGate can replicate existing table data (initial load) and ongoing changes (change data capture) only

**Product versions**
+ Oracle Database Enterprise Edition 10g or newer versions 
+ Oracle GoldenGate 12.2.0.1.1 for Oracle or newer versions
+ Oracle GoldenGate 12.2.0.1.1 for PostgreSQL or newer versions

## Architecture
<a name="migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate-architecture"></a>

The following diagram shows an example workflow for migrating an Oracle database to Amazon RDS for PostgreSQL by using Oracle GoldenGate:

![\[Migration workflow from on-premises Oracle database to Amazon RDS for PostgreSQL.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/384f0eaf-8582-474a-a7f4-ec1048a4feb3/images/de541887-0d5f-4a9a-b136-ce2599355cb8.png)


The diagram shows the following workflow:

1. The Oracle GoldenGate [Extract process](https://docs.oracle.com/goldengate/c1230/gg-winux/GGCON/processes-and-terminology.htm#GUID-6419F3A9-71EC-4D14-9C41-3BAA1E3CA19C) runs against the source database to extract data.

1. The Oracle GoldenGate [Replicat process](https://docs.oracle.com/goldengate/c1230/gg-winux/GGCON/processes-and-terminology.htm#GUID-5EF0326C-9058-4C40-8925-98A223388C95) delivers the extracted data to the target Amazon RDS for PostgreSQL database.

## Tools
<a name="migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate-tools"></a>
+ [Oracle GoldenGate](https://www.oracle.com/integration/goldengate/#:~:text=OCI%20GoldenGate%20is%20a%20real,in%20the%20Oracle%20Cloud%20Infrastructure.) helps you design, run, orchestrate, and monitor your data replication and stream data processing solutions in the Oracle Cloud Infrastructure.
+ [Amazon Relational Database Service (Amazon RDS) for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html) helps you set up, operate, and scale a PostgreSQL relational database in the AWS Cloud.

## Epics
<a name="migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate-epics"></a>

### Download and install Oracle GoldenGate
<a name="download-and-install-oracle-goldengate"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download Oracle GoldenGate. | Download the following versions of Oracle GoldenGate:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate.html) To download the software, see [Oracle GoldenGate Downloads](https://www.oracle.com/middleware/technologies/goldengate-downloads.html) on the Oracle website. | DBA | 
| Install Oracle GoldenGate for Oracle on the source Oracle Database server. | For instructions, see the [Oracle GoldenGate documentation](https://docs.oracle.com/goldengate/1212/gg-winux/GIORA/toc.htm). | DBA | 
| Install Oracle GoldenGate for PostgreSQL database on the Amazon EC2 instance. | For instructions, see the [Oracle GoldenGate documentation](https://docs.oracle.com/goldengate/1212/gg-winux/GIORA/toc.htm). | DBA | 

### Configure Oracle GoldenGate on the source and target databases
<a name="configure-oracle-goldengate-on-the-source-and-target-databases"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Oracle GoldenGate for Oracle Database on the source database. | For instructions, see the [Oracle GoldenGate documentation](https://docs.oracle.com/goldengate/1212/gg-winux/GIORA/toc.htm). Make sure that you configure the following (a supplemental logging sketch follows this table):[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate.html) | DBA | 
| Set up Oracle GoldenGate for PostgreSQL on the target database. | For instructions, see [Part VI Using Oracle GoldenGate for PostgreSQL](https://docs.oracle.com/en/middleware/goldengate/core/19.1/gghdb/using-oracle-goldengate-postgresql.html) on the Oracle website. Make sure that you configure the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate.html) | DBA | 
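
GoldenGate capture on an Oracle source generally requires supplemental logging in addition to the configuration described in the preceding table. The following is a minimal sketch of the database-level statements, run as a privileged user; exact requirements vary by GoldenGate and Oracle version, and table-level logging is enabled separately with the GGSCI `ADD TRANDATA` command.

```sql
-- Run on the source Oracle database as a privileged user.
-- Minimal database-level supplemental logging so that the redo log carries
-- enough detail for GoldenGate to reconstruct each row change.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE FORCE LOGGING;
-- Switch the log so the change takes effect in the current redo stream.
ALTER SYSTEM SWITCH LOGFILE;
```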

### Configure the data capture
<a name="configure-the-data-capture"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the Extract process in the source database. | In the source Oracle database, create an Extract to extract the data. For instructions, see [ADD EXTRACT](https://docs.oracle.com/goldengate/1212/gg-winux/GWURF/ggsci_commands006.htm#GWURF122) in the Oracle documentation. This step includes creating the Extract parameter file and the trail file directory. | DBA | 
| Set up a data pump to transfer the trail file from the source to the target database. | Create an EXTRACT parameter file and trail file directory by following the instructions in [PARFILE](https://docs.oracle.com/database/121/SUTIL/GUID-7A045C82-5993-44EB-AFAD-B7D39C34BCCD.htm#SUTIL859) in *Database Utilities* on the Oracle website. For more information, see [What is a Trail?](https://docs.oracle.com/goldengate/c1230/gg-winux/GGCON/processes-and-terminology.htm#GUID-88674F53-1E07-4C00-9868-598F82D7113C) in *Fusion Middleware Understanding Oracle GoldenGate* on the Oracle website. | DBA | 
| Set up replication on the Amazon EC2 instance. | Create a replication parameter file and trail file directory. For more information about creating replication parameter files, see section [3.5 Validating a parameter file](https://docs.oracle.com/en/middleware/goldengate/core/21.3/admin/using-oracle-goldengate-parameter-files.html#GUID-1E32A9AD-25DB-4243-93CD-E643E7116215) in the Oracle Database documentation. For more information about creating a trail file directory, see [Creating a trail](https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/creating-trail.html) in the Oracle Cloud documentation. Make sure that you add a checkpoint table entry in the GLOBALS file at the target. For more information, see [What is a Replicat?](https://docs.oracle.com/goldengate/c1230/gg-winux/GGCON/processes-and-terminology.htm#GGCON-GUID-5EF0326C-9058-4C40-8925-98A223388C95) in *Fusion Middleware Understanding Oracle GoldenGate* on the Oracle website. | DBA | 

### Configure the data replication
<a name="configure-the-data-replication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| In the source database, create a parameter file to extract data for the initial load. | Follow the instructions in [Creating a parameter file in GGSCI](https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/using-oracle-goldengate-parameter-files.html#GUID-5C49C522-8B28-4E4B-908D-66A33717CE6C) in the Oracle Cloud documentation. Make sure that the Manager is running on the target. | DBA | 
| In the target database, create a parameter file to replicate data for the initial load. | Follow the instructions in [Creating a parameter file in GGSCI](https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/using-oracle-goldengate-parameter-files.html#GUID-5C49C522-8B28-4E4B-908D-66A33717CE6C) in the Oracle Cloud documentation. Make sure that you add and start the Replicat process. | DBA | 

### Cut over to the Amazon RDS for PostgreSQL database
<a name="cut-over-to-the-amazon-rds-for-postgresql-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Stop the Replicat process and make sure that the source and target databases are in sync. | Compare row counts between the source and target databases to make sure that the data replication was successful (see the example query after this table). | DBA | 
| Configure data definition language (DDL) support. | Run the DDL script for creating triggers, sequences, synonyms, and referential keys on PostgreSQL. You can use any standard SQL client application to connect to a database in your DB cluster. For example, you can use [pgAdmin](https://www.pgadmin.org/) to connect to your DB instance. | DBA | 
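
For the row-count comparison in the preceding table, the following is a quick sketch on the target Amazon RDS for PostgreSQL database. The `pg_stat_user_tables` numbers are planner estimates, so follow up with an exact `SELECT COUNT(*)` on any table that looks off; the `hr.employees` name is hypothetical.

```sql
-- Approximate live row counts per table on the PostgreSQL target.
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY schemaname, relname;

-- Exact count for a specific table, to compare against the Oracle source:
SELECT COUNT(*) FROM hr.employees;
```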

## Related resources
<a name="migrate-from-oracle-database-to-amazon-rds-for-postgresql-by-using-oracle-goldengate-resources"></a>
+ [Amazon RDS for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html) (*Amazon RDS User Guide*)
+ [Amazon EC2 documentation](https://docs.aws.amazon.com/ec2/)
+ [Oracle GoldenGate supported processing methods and databases](https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_about_gg.htm#GWUAD112) (Oracle documentation)

# Migrate an Oracle partitioned table to PostgreSQL by using AWS DMS
<a name="migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms"></a>

*Saurav Mishra and Eduardo Valentim, Amazon Web Services*

## Summary
<a name="migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms-summary"></a>

This pattern describes how to speed up loading a partitioned table from Oracle to PostgreSQL by using AWS Database Migration Service (AWS DMS), which doesn't support native partitioning. The target PostgreSQL database can be installed on Amazon Elastic Compute Cloud (Amazon EC2), or it can be an Amazon Relational Database Service (Amazon RDS) for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition DB instance. 

Loading a partitioned table includes the following steps (a SQL sketch follows the list):

1. Create a parent table similar to the Oracle partitioned table, but don't include any partitions.

1. Create child tables that will inherit from the parent table that you created in step 1.

1. Create a trigger function and a trigger to handle inserts into the parent table.
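
The following is a minimal sketch of steps 1–3, assuming a hypothetical `orders` table that is range-partitioned by `order_date`; the names, ranges, and data types are illustrative only:

```sql
-- Step 1: parent table with no partitions (PostgreSQL 9.x syntax).
CREATE TABLE orders (
    order_id   bigint NOT NULL,
    order_date date   NOT NULL,
    amount     numeric(10,2)
);

-- Step 2: child tables inherit from the parent; CHECK constraints
-- define the partition boundaries.
CREATE TABLE orders_2023 (
    CHECK (order_date >= DATE '2023-01-01' AND order_date < DATE '2024-01-01')
) INHERITS (orders);

CREATE TABLE orders_2024 (
    CHECK (order_date >= DATE '2024-01-01' AND order_date < DATE '2025-01-01')
) INHERITS (orders);

-- Step 3: trigger function that routes inserts on the parent
-- to the matching child.
CREATE OR REPLACE FUNCTION orders_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.order_date >= DATE '2023-01-01' AND NEW.order_date < DATE '2024-01-01' THEN
        INSERT INTO orders_2023 VALUES (NEW.*);
    ELSIF NEW.order_date >= DATE '2024-01-01' AND NEW.order_date < DATE '2025-01-01' THEN
        INSERT INTO orders_2024 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'order_date out of range: %', NEW.order_date;
    END IF;
    RETURN NULL;  -- prevents the row from also landing in the parent table
END;
$$ LANGUAGE plpgsql;

-- Create the trigger only at cutover, so that the AWS DMS initial load
-- (which writes directly to the child tables) stays fast.
CREATE TRIGGER orders_partition_insert
    BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE orders_insert_trigger();
```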

However, because the trigger is fired for every insert, the initial load using AWS DMS can be very slow.

To speed up initial loads from Oracle to PostgreSQL 9.0, this pattern creates a separate AWS DMS task for each partition and loads the corresponding child tables. You then create a trigger during cutover. 

PostgreSQL version 10 supports native partitioning. However, you might decide to use inherited partitioning in some cases. For more information, see the [Additional information](#migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms-additional) section.

## Prerequisites and limitations
<a name="migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source Oracle database with a partitioned table
+ A PostgreSQL database on AWS

**Product versions**
+ PostgreSQL 9.0

## Architecture
<a name="migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms-architecture"></a>

**Source technology stack**
+ A partitioned table in Oracle

**Target technology stack**
+ A partitioned table in PostgreSQL (on Amazon EC2, Amazon RDS for PostgreSQL, or Aurora PostgreSQL)

**Target architecture**

![\[Partitioned table data in Oracle moving to an AWS DMS task for each partition, then into PostgreSQL.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7fa2898e-3308-436a-aec8-ab6f680d7bac/images/1b9742ea-a13d-434c-83a7-56686cf76ea0.png)


## Tools
<a name="migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms-tools"></a>
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.

## Epics
<a name="migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms-epics"></a>

### Set up AWS DMS
<a name="set-up-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the tables in PostgreSQL. | Create the parent and corresponding child tables in PostgreSQL with the required check conditions for partitions. | DBA | 
| Create the AWS DMS task for each partition. | Include the filter condition of the partition in the AWS DMS task. Map the partitions to the corresponding PostgreSQL child tables. | DBA | 
| Run the AWS DMS tasks using full load and change data capture (CDC). | Make sure that the `StopTaskCachedChangesApplied` parameter is set to `true` and the `StopTaskCachedChangesNotApplied` parameter is set to `false`. | DBA | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Stop the replication tasks. | Before you stop the tasks, confirm that the source and destination are in sync. | DBA | 
| Create a trigger on the parent table. | Because the parent table will receive all insert and update commands, create a trigger that will route these commands to the respective child tables based on the partitioning condition. | DBA | 

## Related resources
<a name="migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms-resources"></a>
+ [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
+ [Table Partitioning (PostgreSQL documentation)](https://www.postgresql.org/docs/10/ddl-partitioning.html)

## Additional information
<a name="migrate-an-oracle-partitioned-table-to-postgresql-by-using-aws-dms-additional"></a>

Although PostgreSQL version 10 supports native partitioning, you might decide to use inherited partitioning for the following use cases:
+ Declarative partitioning enforces a rule that all partitions must have the same set of columns as the parent, whereas table inheritance allows children to have extra columns.
+ Table inheritance supports multiple inheritance.
+ Declarative partitioning supports only list and range partitioning. With table inheritance, you can divide the data as you want. However, if the constraint exclusion can't prune partitions effectively, query performance will suffer.
+ Some operations need a stronger lock when using declarative partitioning than when using table inheritance. For example, adding or removing a partition to or from a partitioned table requires an `ACCESS EXCLUSIVE` lock on the parent table, whereas a `SHARE UPDATE EXCLUSIVE` lock is enough for regular inheritance.

When you use a separate AWS DMS task for each partition, you can also reload individual partitions if there are any AWS DMS validation issues. For better performance and replication control, run the tasks on separate replication instances.
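
For comparison, here is a brief sketch of the PostgreSQL 10 declarative form, reusing the hypothetical `orders` layout from the Summary, followed by a quick way to check that constraint exclusion is pruning inherited children:

```sql
-- PostgreSQL 10+ declarative equivalent of the inherited layout:
CREATE TABLE orders (
    order_id   bigint NOT NULL,
    order_date date   NOT NULL
) PARTITION BY RANGE (order_date);

CREATE TABLE orders_2023 PARTITION OF orders
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');

-- For the inheritance approach, confirm that the planner prunes children:
SET constraint_exclusion = partition;  -- the default value
EXPLAIN SELECT * FROM orders WHERE order_date = DATE '2023-06-15';
-- The plan should scan only orders_2023, not every child table.
```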

# Migrate from Amazon RDS for Oracle to Amazon RDS for MySQL
<a name="migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-mysql"></a>

*Jitender Kumar, Srini Ramaswamy, and Neha Sharma, Amazon Web Services*

## Summary
<a name="migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-mysql-summary"></a>

This pattern provides guidance for migrating an Amazon Relational Database Service (Amazon RDS) for Oracle DB instance to an Amazon RDS for MySQL DB instance on Amazon Web Services (AWS). The pattern uses AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT). 

The pattern provides best practices for handling the migration of stored procedures. It also covers code changes needed to support the application layer. 

## Prerequisites and limitations
<a name="migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-mysql-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An Amazon RDS for Oracle source database.
+ An Amazon RDS for MySQL target database. The source and target databases should be in the same virtual private cloud (VPC). If you're using multiple VPCs, you must have the required access permissions.
+ Security groups that allow connectivity between the source and target databases, AWS SCT, the application server, and AWS DMS.
+ A user account with the required privilege to run AWS SCT on the source database.
+ Supplemental logging enabled for running AWS DMS on the source database.

**Limitations**
+ The source and target Amazon RDS database size limit is 64 TB. For Amazon RDS size information, see the [AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html).
+ Oracle is case-insensitive for database object names, but MySQL is not. AWS SCT can handle this issue while creating an object. However, some manual work is required to support full case insensitivity (see the note after this list).
+ This migration doesn't use MySQL extensions to enable Oracle-native functions. AWS SCT handles most of the conversion, but some work is required to change code manually.
+ Java Database Connectivity (JDBC) driver changes are required in the application.
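
One way to reduce the manual case-sensitivity work is the MySQL `lower_case_table_names` server parameter, which you set in the DB parameter group (for Amazon RDS for MySQL 8.0, it can typically be set only when the instance is created). A quick verification sketch:

```sql
-- On the target Amazon RDS for MySQL instance:
SHOW VARIABLES LIKE 'lower_case_table_names';
-- A value of 1 means identifiers are stored in lowercase and compared
-- case-insensitively, which matches the lowercase-object convention
-- used later in this pattern.
```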

**Product versions**
+ Amazon RDS for Oracle 12.2.0.1 and later. For currently supported RDS for Oracle versions, see the [AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Concepts.database-versions.html).
+ Amazon RDS for MySQL 8.0.15 and later. For currently supported RDS for MySQL versions, see the [AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Concepts.VersionMgmt.html).
+ AWS DMS version 3.3.0 and later. See the AWS documentation for more information about AWS DMS supported [source endpoints](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Sources.html) and [target endpoints](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.Targets.html).
+ AWS SCT version 1.0.628 and later.  See the [AWS SCT source and target endpoint support matrix](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) in the AWS documentation.

## Architecture
<a name="migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-mysql-architecture"></a>

**Source technology stack**
+ Amazon RDS for Oracle. For more information, see [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html). 

**Target technology stack**
+ Amazon RDS for MySQL. For more information, see [Using a MySQL-Compatible database as a target for AWS DMS](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html).

**Migration architecture**

In the following diagram, AWS SCT copies and converts schema objects from the Amazon RDS for Oracle source database and sends the objects to the Amazon RDS for MySQL target database. AWS DMS replicates data from the source database and sends it to the Amazon RDS for MySQL instance.

![\[AWS SCT, AWS DMS, and Amazon RDS deployed in a private subnet.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e1efa7c2-47c1-4677-80bc-6b19250fc0d6/images/b54a8442-9ab9-4074-b8f6-a08f87fa2f52.jpeg)


## Tools
<a name="migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-mysql-tools"></a>
+ [AWS Data Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud. This pattern uses [Amazon RDS for Oracle](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html) and [Amazon RDS for MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html).
+ [AWS Schema Conversion Tool (AWS SCT)](http://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that’s compatible with the target database.

## Epics
<a name="migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-mysql-epics"></a>

### Prepare for migration
<a name="prepare-for-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the source and target database versions and engines. |  | DBA | 
|  Identify hardware requirements for the target server instance. |  | DBA, SysAdmin | 
| Identify storage requirements (storage type and capacity). |  | DBA, SysAdmin | 
| Choose the proper instance type (capacity, storage features, network features). |  | DBA, SysAdmin | 
| Identify network-access security requirements for the source and target databases. |  | DBA, SysAdmin  | 
| Choose an application migration strategy. | Consider whether you want full downtime or partial downtime for cutover activities. | DBA, SysAdmin, App owner | 

### Configure infrastructure
<a name="configure-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC and subnets. |  | SysAdmin | 
| Create security groups and network access control lists (ACLs). |  | SysAdmin | 
| Configure and start the Amazon RDS for Oracle instance. |  | DBA, SysAdmin | 
| Configure and start the Amazon RDS for MySQL instance.  |  | DBA, SysAdmin | 
| Prepare a test case for validation of code conversion. | This will help in unit-testing for the converted code. | DBA, Developer | 
| Configure the AWS DMS instance. |  |  | 
| Configure source and target endpoints in AWS DMS. |  |  | 

### Migrate data
<a name="migrate-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate the target database script using AWS SCT. | Check the accuracy of the code that was converted by AWS SCT. Some manual work will be required. | DBA, Developer | 
| In AWS SCT, choose the "Case Insensitive" setting. | In AWS SCT, choose Project Settings, Target Case Sensitivity, Case Insensitive. | DBA, Developer | 
| In AWS SCT, choose not to use the Oracle native functions. | In Project Settings, check the functions TO_CHAR, TO_NUMBER, and TO_DATE. | DBA, Developer | 
| Make changes for "sql%notfound" code. | You might have to convert the code manually (see the sketch after this table). |  | 
| Use lowercase names for tables and objects when querying them in stored procedures. |  | DBA, Developer | 
| Create the primary script after all changes are made, and then deploy the primary script on the target database. |  | DBA, Developer | 
| Unit-test stored procedures and application calls using sample data.  |  |  | 
| Clean up data that was created during unit testing. |  | DBA, Developer | 
| Drop foreign key constraints on the target database. | This step is required to load initial data. If you don't want to drop the foreign key constraints, you must create a migration task for data specific to the primary and secondary tables. | DBA, Developer | 
| Drop primary keys and unique keys on the target database. | This step results in better performance for the initial load. | DBA, Developer | 
| Enable supplemental logging on the source database.  |  | DBA | 
| Create a migration task for the initial load in AWS DMS, and then run it. | Choose the option to migrate existing data. | DBA | 
| Add the primary keys and foreign keys to the target database. | Constraints need to be added after the initial load. | DBA, Developer | 
| Create a migration task for ongoing replication. | Ongoing replication keeps the target database synchronized with the source database. | DBA | 
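
To illustrate the kind of manual conversion that the `sql%notfound` task refers to, the following is a hedged sketch of one common rewrite; the table, column, and variable names are hypothetical. Oracle's `SQL%NOTFOUND` cursor attribute has no direct MySQL equivalent, and `ROW_COUNT()` is a typical substitute.

```sql
-- Oracle PL/SQL (source):
--   UPDATE employees SET salary = salary * 1.1 WHERE employee_id = p_id;
--   IF SQL%NOTFOUND THEN
--     RAISE_APPLICATION_ERROR(-20001, 'No such employee');
--   END IF;

-- A typical MySQL rewrite inside a stored procedure, using ROW_COUNT():
UPDATE employees SET salary = salary * 1.1 WHERE employee_id = p_id;
IF ROW_COUNT() = 0 THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'No such employee';
END IF;
```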

### Migrate applications
<a name="migrate-applications"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Replace Oracle native functions with MySQL native functions. |  | App owner | 
| Make sure that only lowercase names are used for database objects in SQL queries. |  | DBA, SysAdmin, App owner | 

### Cut over to the target database
<a name="cut-over-to-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down the application server. |  | App owner | 
| Validate that the source and target databases are in sync. |  | DBA, App owner | 
| Stop the Amazon RDS for Oracle DB instance. |  | DBA | 
| Stop the migration task. | The migration task stops automatically after you complete the previous step. | DBA | 
| Change the JDBC connection from Oracle to MySQL. |  | App owner, DBA | 
| Start the application. |  | DBA, SysAdmin, App owner | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review and validate the project documents. |  | DBA, SysAdmin | 
| Gather metrics about time to migrate, percentage of manual versus tool tasks, cost savings, etc. |  | DBA, SysAdmin | 
| Stop and delete AWS DMS instances. |  | DBA | 
| Remove the source and target endpoints. |  | DBA | 
| Remove migration tasks. |  | DBA | 
| Take a snapshot of the Amazon RDS for Oracle DB instance. |  | DBA | 
| Delete the Amazon RDS for Oracle DB instance. |  | DBA | 
| Shut down and delete any other temporary AWS resources you used. |  | DBA, SysAdmin | 
| Close the project and provide any feedback. |  | DBA | 

## Related resources
<a name="migrate-from-amazon-rds-for-oracle-to-amazon-rds-for-mysql-resources"></a>
+ [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
+ [AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/Welcome.html)
+ [Amazon RDS Pricing](https://aws.amazon.com/rds/pricing/)
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/)
+ [Getting Started with Amazon RDS](https://aws.amazon.com/rds/getting-started/)

# Migrate from IBM Db2 on Amazon EC2 to Aurora PostgreSQL-Compatible using AWS DMS and AWS SCT
<a name="migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct"></a>

*Sirsendu Halder and Abhimanyu Chhabra, Amazon Web Services*

## Summary
<a name="migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct-summary"></a>

This pattern provides guidance for migrating an IBM Db2 database on an Amazon Elastic Compute Cloud (Amazon EC2) instance to an Amazon Aurora PostgreSQL-Compatible Edition DB instance. This pattern uses AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT) for data migration and schema conversion.

The pattern targets an online migration strategy with little or no downtime for a multi-terabyte IBM Db2 database that has a high number of transactions. We recommend that you convert the columns in primary keys (PKs) and foreign keys (FKs) with the data type `NUMERIC` to `INT` or `BIGINT` in PostgreSQL for better performance. 
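
As a hedged illustration of that recommendation (the table and column names are hypothetical), the data type change on the target can be as simple as the following:

```sql
-- On the target Aurora PostgreSQL database, after schema conversion:
-- integer keys join and index faster than arbitrary-precision NUMERIC.
ALTER TABLE orders      ALTER COLUMN order_id TYPE bigint;
ALTER TABLE order_items ALTER COLUMN order_id TYPE bigint;  -- matching FK column
```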

## Prerequisites and limitations
<a name="migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct-prereqs"></a>

**Prerequisites**
+ An active AWS account 
+ A source IBM Db2 database on an EC2 instance

**Product versions**
+ DB2/LINUXX8664 version 11.1.4.4 and later

## Architecture
<a name="migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct-architecture"></a>

**Source technology stack**
+ A Db2 database on an EC2 instance  

**Target technology stack**
+ An Aurora PostgreSQL-Compatible version 10.18 or later DB instance

**Database migration architecture**

![\[Using AWS DMS to migrate from IBM Db2 on Amazon EC2 to Aurora PostgreSQL-Compatible.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5e737fab-3e04-4887-9fb0-d1c88503b57d/images/789fabcc-8052-40d5-a746-986d799576e9.png)


## Tools
<a name="migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct-tools"></a>
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate databases into the AWS Cloud or between combinations of cloud and on-premises setups. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. You can use AWS DMS to migrate your data to and from the most widely used commercial and open-source databases. AWS DMS supports heterogeneous migrations between different database platforms, such as IBM Db2 to Aurora PostgreSQL-Compatible version 10.18 or higher. For details, see [Sources for Data Migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html) and [Targets for Data Migration](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.html) in the AWS DMS documentation.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format that's compatible with the target database. Any objects that are not automatically converted are clearly marked so that they can be manually converted to complete the migration. AWS SCT can also scan the application source code for embedded SQL statements and convert them. 

## Epics
<a name="migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct-epics"></a>

### Set up the environment
<a name="set-up-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Aurora PostgreSQL-Compatible DB instance. | To create the DB instance, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html). For engine type, choose **Amazon Aurora**. For edition, choose **Amazon Aurora PostgreSQL-Compatible Edition**. The Aurora PostgreSQL-Compatible version 10.18 or later DB instance should be in the same virtual private cloud (VPC) as the source IBM Db2 database. | Amazon RDS | 

### Convert your database schema
<a name="convert-your-database-schema"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install and verify AWS SCT. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct.html) | AWS administrator, DBA, Migration engineer | 
| Start AWS SCT and create a project. | To start the AWS SCT tool and create a new project to run a database migration assessment report, follow the instructions in the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html#CHAP_UserInterface.Launching). | Migration engineer | 
| Add database servers and create a mapping rule. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct.html) | Migration engineer | 
| Create a database migration assessment report.  | Create the database migration assessment report by following the steps in the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html#CHAP_UserInterface.AssessmentReport). | Migration engineer | 
| View the assessment report. | Use the **Summary** tab of the database migration assessment report to view the report and analyze the data. This analysis will help you determine the complexity of the migration. For more information, see the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_AssessmentReport.View.html). | Migration engineer | 
| Convert the schema. | To convert your source database schemas:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct.html)For more information, see the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html#CHAP_UserInterface.Converting). | Migration engineer | 
| Apply the converted database schema to the target DB instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct.html)For more information, see the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html#CHAP_UserInterface.ApplyingConversion). | Migration engineer | 

### Migrate your data
<a name="migrate-your-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up a VPC and DB parameter groups.  | Set up a VPC and DB parameter groups, and configure the inbound rules and parameters required for migration. For instructions, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.Prerequisites.html). For the VPC security group, select both the EC2 instance for Db2 and the Aurora PostgreSQL-Compatible DB instance. This replication instance must be in the same VPC as the source and target DB instances. | Migration engineer | 
| Prepare source and target DB instances. | Prepare the source and target DB instances for migration. In a production environment, the source database will already exist. For the source database, the server name must be the public Domain Name System (DNS) name of the EC2 instance where Db2 is running. For the user name, you can use `db2inst1`; the port will be 5000 for IBM Db2. | Migration engineer | 
| Create an Amazon EC2 client and endpoints. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct.html) | Migration engineer | 
| Create a replication instance. | Create a replication instance by using the AWS DMS console and specify the source and target endpoints. The replication instance performs the data migration between the endpoints. For more information, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.Replication.html).  | Migration engineer | 
| Create an AWS DMS task to migrate the data. | Create a task to load the source IBM Db2 tables to the target PostgreSQL DB instance by following the steps in the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.Replication.html#CHAP_GettingStarted.Replication.Tasks).[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct.html) | Migration engineer | 

## Related resources
<a name="migrate-from-ibm-db2-on-amazon-ec2-to-aurora-postgresql-compatible-using-aws-dms-and-aws-sct-resources"></a>

**References**
+ [Amazon Aurora documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html)
+ [PostgreSQL foreign data wrapper (FDW) documentation](https://www.postgresql.org/docs/10/postgres-fdw.html) 
+ [PostgreSQL IMPORT FOREIGN SCHEMA documentation ](https://www.postgresql.org/docs/10/sql-importforeignschema.html) 
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/index.html)  
+ [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) 

**Tutorials and videos**
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/) (walkthrough)
+ [Introduction to Amazon EC2 - Elastic Cloud Server & Hosting with AWS](https://www.youtube.com/watch?v=TsRBftzZsQo) (video)

# Migrate from Oracle 8i or 9i to Amazon RDS for PostgreSQL using SharePlex and AWS DMS
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-shareplex-and-aws-dms"></a>

*Kumar Babu P G, Amazon Web Services*

## Summary
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-shareplex-and-aws-dms-summary"></a>

This pattern describes how to migrate an on-premises Oracle 8i or 9i database to Amazon Relational Database Service (Amazon RDS) for PostgreSQL or Amazon Aurora PostgreSQL. AWS Database Migration Service (AWS DMS) doesn't support Oracle 8i or 9i as a source, so Quest SharePlex replicates data from an on-premises 8i or 9i database to an intermediate Oracle database (Oracle 10g or 11g), which is compatible with AWS DMS.

From the intermediate Oracle instance, the schema and data are migrated to the PostgreSQL database on AWS by using AWS Schema Conversion Tool (AWS SCT) and AWS DMS. This method helps achieve continuous streaming of data from the source Oracle database to the target PostgreSQL DB instance with minimum replication lag. In this implementation, the downtime is limited to the length of time it takes to create or validate all the foreign keys, triggers, and sequences on the target PostgreSQL database.

The migration uses an Amazon Elastic Compute Cloud (Amazon EC2) instance with Oracle 10g or 11g installed to host the changes from the source Oracle database. AWS DMS uses this intermediate Oracle instance as the source to stream the data to Amazon RDS for PostgreSQL or Aurora PostgreSQL. Data replication can be paused and resumed from the on-premises Oracle database to the intermediate Oracle instance. It can also be paused and resumed from the intermediate Oracle instance to the target PostgreSQL database so you can validate the data by using either AWS DMS data validation or a custom data validation tool.

## Prerequisites and limitations
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-shareplex-and-aws-dms-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source Oracle 8i or 9i database in an on-premises data center 
+ AWS Direct Connect configured between the on-premises data center and AWS 
+ Java Database Connectivity (JDBC) drivers for AWS SCT connectors installed either on a local machine or on the EC2 instance where AWS SCT is installed
+ Familiarity with [using an Oracle database as an AWS DMS source](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ Familiarity with [using a PostgreSQL database as an AWS DMS target](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html)
+ Familiarity with Quest SharePlex data replication

 

**Limitations**
+ The database size limit is 64 TB
+ The on-premises Oracle database must be Enterprise Edition

 

**Product versions**
+ Oracle 8i or 9i for the source database
+ Oracle 10g or 11g for the intermediate database 
+ PostgreSQL 9.6 or later

## Architecture
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-shareplex-and-aws-dms-architecture"></a>

**Source technology stack**
+ Oracle 8i or 9i database 
+ Quest SharePlex 

 

**Target technology stack**
+ Amazon RDS for PostgreSQL or Aurora PostgreSQL 


**Source and target architecture**

![\[Architecture diagram showing migration from on-premises Oracle database to AWS cloud using various services.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b6c30668-fc2e-4293-a59a-e01fd151f4bb/images/25082670-0bf3-4b20-8c80-99c6633b046f.png)


## Tools
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-shareplex-and-aws-dms-tools"></a>
+ **AWS DMS** – [AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html) (AWS DMS) helps you migrate databases quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate your data to and from the most widely used commercial and open-source databases. 
+ **AWS SCT** – [AWS Schema Conversion Tool](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) (AWS SCT) makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. Objects that cannot be automatically converted are clearly marked so that they can be manually converted to complete the migration. AWS SCT can also scan your application source code for embedded SQL statements and convert them as part of a database schema conversion project. During this process, AWS SCT performs cloud-native code optimization by converting legacy Oracle and SQL Server functions to their AWS equivalents, to help you modernize your applications while migrating your databases. When schema conversion is complete, AWS SCT can help migrate data from a range of data warehouses to Amazon Redshift by using built-in data migration agents.
+ **Quest SharePlex** – [Quest SharePlex](https://www.quest.com/register/120420/) is an Oracle-to-Oracle data replication tool for moving data with minimal downtime and no data loss.

## Epics
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-shareplex-and-aws-dms-epics"></a>

### Create the EC2 instance and install Oracle
<a name="create-the-ec2-instance-and-install-oracle"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the network for Amazon EC2. | Create the virtual private cloud (VPC), subnets, internet gateway, route tables, and security groups. | AWS SysAdmin | 
| Create the new EC2 instance. | Select the Amazon Machine Image (AMI) for the EC2 instance. Choose the instance size and configure instance details: the number of instances (1), the VPC and subnet from the previous step, auto-assign public IP, and other options. Add storage, configure security groups, and launch the instance. When prompted, create and save a key pair for the next step. | AWS SysAdmin | 
| Install Oracle on the EC2 instance. | Acquire the licenses and the required Oracle binaries, and install Oracle 10g or 11g on the EC2 instance. | DBA | 

### Set up SharePlex on an EC2 instance and configure data replication
<a name="set-up-shareplex-on-an-ec2-instance-and-configure-data-replication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up SharePlex. | Create an Amazon EC2 instance and install the SharePlex binaries that are compatible with Oracle 8i or 9i. | AWS SysAdmin, DBA | 
| Configure data replication. | Follow SharePlex best practices to configure data replication from an on-premises Oracle 8i/9i database to an Oracle 10g/11g instance. | DBA | 

### Convert the Oracle database schema to PostgreSQL
<a name="convert-the-oracle-database-schema-to-postgresql"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up AWS SCT. | Create a new report, and then connect to Oracle as the source and PostgreSQL as the target. In project settings, open the SQL Scripting tab and change the target SQL script to Multiple Files. | DBA | 
| Convert the Oracle database schema. | In the Action tab, choose Generate Report, Convert Schema, and then Save as SQL. | DBA | 
| Modify the SQL scripts generated by AWS SCT. | Make modifications based on best practices. For example, switch to suitable data types and develop PostgreSQL equivalents for Oracle-specific functions. | DBA | 

### Create and configure the Amazon RDS DB instance
<a name="create-and-configure-the-amazon-rds-db-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Amazon RDS DB instance. | In the Amazon RDS console, create a new PostgreSQL DB instance. | AWS SysAdmin, DBA | 
| Configure the DB instance. | Specify the DB engine version, DB instance class, Multi-AZ deployment, storage type, and allocated storage. Enter the DB instance identifier, a master user name, and a master password. | AWS SysAdmin, DBA | 
| Configure network and security. | Specify the VPC, subnet group, public accessibility, Availability Zone preference, and security groups. | AWS SysAdmin, DBA | 
| Configure database options. | Specify the database name, port, parameter group, encryption, and master key. | AWS SysAdmin, DBA | 
| Configure backups. | Specify the backup retention period, backup window, start time, duration, and whether to copy tags to snapshots. | AWS SysAdmin, DBA | 
| Configure monitoring options. | Enable or disable enhanced monitoring and performance insights. | AWS SysAdmin, DBA | 
| Configure maintenance options. | Specify auto minor version upgrade, maintenance window, and the start day, time, and duration. | AWS SysAdmin, DBA | 
| Run the pre-migration scripts from AWS SCT. | On the Amazon RDS instance, run these scripts: `create_database.sql`, `create_sequence.sql`, `create_table.sql`, `create_view.sql`, and `create_function.sql`. | AWS SysAdmin, DBA | 

### Migrate data by using AWS DMS
<a name="migrate-data-by-using-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a replication instance in AWS DMS. | Complete the fields for the name, instance class, VPC (same as for the EC2 instance), Multi-AZ, and public accessibility. In the advanced configuration section, specify allocated storage, subnet group, Availability Zone, VPC security groups, and AWS Key Management Service (AWS KMS) root key. | AWS SysAdmin, DBA | 
| Create the source database endpoint. | Specify the endpoint name, type, source engine (Oracle), server name (Amazon EC2 private DNS name), port, SSL mode, user name, password, SID, VPC (specify the VPC that has the replication instance), and replication instance. To test the connection, choose Run Test, and then create the endpoint. You can also configure the following advanced settings: maxFileSize and numberDataTypeScale. | AWS SysAdmin, DBA | 
| Create the AWS DMS replication task. | Specify the task name, replication instance, and source and target endpoints. For migration type, choose "Migrate existing data and replicate ongoing changes." Clear the "Start task on create" check box. | AWS SysAdmin, DBA | 
| Configure the AWS DMS replication task settings. | For target table preparation mode, choose "Do nothing." Stop the task after the full load completes to create primary keys. Specify limited or full LOB mode, and enable control tables. Optionally, you can configure the CommitRate advanced setting. | DBA | 
| Configure the table mappings. | In the table mappings section, create an Include rule for all tables in all schemas included in the migration, and then create an Exclude rule. Add three transformation rules to convert the schema, table, and column names to lowercase, and add any other rules needed for this specific migration. | DBA | 
| Start the task. | Start the replication task. Make sure that the full load is running. Run ALTER SYSTEM SWITCH LOGFILE on the primary Oracle database to kick-start the task. | DBA | 
| Run the mid-migration scripts from AWS SCT. | In Amazon RDS for PostgreSQL, run these scripts: `create_index.sql` and `create_constraint.sql`. | DBA | 
| Restart the task to continue change data capture (CDC). | In the Amazon RDS for PostgreSQL DB instance, run VACUUM, and restart the AWS DMS task to apply the cached CDC changes. | DBA | 

### Cut over to the PostgreSQL database
<a name="cut-over-to-the-postgresql-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check the AWS DMS logs and metadata tables. | Validate any errors and fix if required. | DBA | 
| Stop all Oracle dependencies. | Shut down listeners on the Oracle database and run ALTER SYSTEM SWITCH LOGFILE. Stop the AWS DMS task when it shows no activity. | DBA | 
| Run the post-migration scripts from AWS SCT. | In Amazon RDS for PostgreSQL, run these scripts: `create_foreign_key_constraint.sql` and `create_triggers.sql`. | DBA | 
| Complete any additional Amazon RDS for PostgreSQL steps. | Increment sequences to match Oracle if needed, run VACUUM and ANALYZE, and take a snapshot for compliance. | DBA | 
| Open the connections to Amazon RDS for PostgreSQL. | Remove the AWS DMS security groups from Amazon RDS for PostgreSQL, add production security groups, and point your applications to the new database. | DBA | 
| Clean up AWS DMS resources. | Remove the endpoints, replication tasks, replication instances, and the EC2 instance. | SysAdmin, DBA | 

## Related resources
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-shareplex-and-aws-dms-resources"></a>
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html)
+ [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)
+ [Amazon RDS for PostgreSQL pricing](https://aws.amazon.com/rds/postgresql/pricing/)
+ [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ [Using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html) 
+ [Quest SharePlex documentation](https://support.quest.com/shareplex/9.0.2/technical-documents)

# Migrate from Oracle 8i or 9i to Amazon RDS for PostgreSQL using materialized views and AWS DMS
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms"></a>

*Kumar Babu P G and Pragnesh Patel, Amazon Web Services*

## Summary
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms-summary"></a>

This pattern describes how to migrate an on-premises legacy Oracle 8i or 9i database to Amazon Relational Database Service (Amazon RDS) for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition. 

AWS Database Migration Service (AWS DMS) doesn't support Oracle 8i or 9i as a source, so this pattern uses an intermediate Oracle database instance that's compatible with AWS DMS, such as Oracle 10g or 11g. It also uses the materialized views feature to migrate data from the source Oracle 8i/9i instance to the intermediate Oracle 10g/11g instance.

AWS Schema Conversion Tool (AWS SCT) converts the database schema, and AWS DMS migrates the data to the target PostgreSQL database. 

This pattern helps users who want to migrate from legacy Oracle databases with minimum database downtime. In this implementation, the downtime would be limited to the length of time it takes to create or validate all the foreign keys, triggers, and sequences on the target database. 

The pattern uses Amazon Elastic Compute Cloud (Amazon EC2) instances with an Oracle 10g/11g database installed to help AWS DMS stream the data. You can temporarily pause streaming replication from the on-premises Oracle database to the intermediate Oracle instance to enable AWS DMS to catch up on data validation or to use another data validation tool. The PostgreSQL DB instance and intermediate Oracle database will have the same data when AWS DMS has finished migrating current changes.

## Prerequisites and limitations
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source Oracle 8i or 9i database in an on-premises data center 
+ AWS Direct Connect configured between the on-premises data center and AWS
+ Java Database Connectivity (JDBC) drivers for AWS SCT connectors installed either on a local machine or on the EC2 instance where AWS SCT is installed
+ Familiarity with [using an Oracle database as an AWS DMS source](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ Familiarity with [using a PostgreSQL database as an AWS DMS target](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html)

**Limitations**
+ The database size limit is 64 TB

**Product versions**
+ Oracle 8i or 9i for the source database
+ Oracle 10g or 11g for the intermediate database
+ PostgreSQL 10.17 or later

## Architecture
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms-architecture"></a>

**Source technology stack**
+ Oracle 8i or 9i database 

**Target technology stack**
+ Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible

**Target architecture**

![\[Architecture for migrating from a legacy Oracle database to Amazon RDS or Aurora\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/8add9b21-1b62-46a2-bb8e-0350f36a924a/images/f34f9b0f-f1da-4c27-a385-71b12d16c375.png)


## Tools
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms-tools"></a>
+ [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html) helps migrate databases quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate your data to and from the most widely used commercial and open-source databases. 
+ [AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) automatically converts the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. Objects that cannot be automatically converted are clearly marked so that they can be manually converted to complete the migration. AWS SCT can also scan your application source code for embedded SQL statements and convert them as part of a database schema conversion project. During this process, AWS SCT performs cloud-native code optimization by converting legacy Oracle and SQL Server functions to their AWS equivalents, to help you modernize your applications while migrating your databases. When schema conversion is complete, AWS SCT can help migrate data from a range of data warehouses to Amazon Redshift by using built-in data migration agents.

## Best practices
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms-best-practices"></a>

For best practices for refreshing materialized views, see the following Oracle documentation:
+ [Refreshing materialized views](https://docs.oracle.com/database/121/DWHSG/refresh.htm#DWHSG-GUID-64068234-BDB0-4C12-AE70-75571046A586)
+ [Fast refresh for materialized views](https://docs.oracle.com/database/121/DWHSG/refresh.htm#DWHSG8361)

## Epics
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms-epics"></a>

### Install Oracle on an EC2 instance and create materialized views
<a name="install-oracle-on-an-ec2-instance-and-create-materialized-views"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the network for the EC2 instance. | Create the virtual private cloud (VPC), subnets, internet gateway, route tables, and security groups. | AWS SysAdmin | 
| Create the EC2 instance. | Select the Amazon Machine Image (AMI) for the EC2 instance. Choose the instance size and configure instance details: the number of instances (1), the VPC and subnet from the previous step, auto-assign public IP, and other options. Add storage, configure security groups, and launch the instance. When prompted, create and save a key pair for the next step. | AWS SysAdmin | 
| Install Oracle on the EC2 instance. | Acquire the licenses and the required Oracle binaries, and install Oracle 10g or 11g on the EC2 instance. | DBA | 
| Configure Oracle networking. | Modify or add entries in `listener.ora` to connect to the on-premises source Oracle 8i/9i database, and then create the database links. | DBA | 
| Create materialized views. | Identify the database objects to replicate in the source Oracle 8i/9i database, and then create materialized views for all the objects by using the database link. | DBA | 
| Deploy scripts to refresh materialized views at required intervals. | Develop and deploy scripts to refresh materialized views at required intervals on the Amazon EC2 Oracle 10g/11g instance. Use the incremental refresh option to refresh materialized views (see the sketch after this table). | DBA | 
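
The following hedged sketch shows what the last two tasks might look like, assuming a database link named `src_link` to the 8i/9i source and hypothetical object names. Fast (incremental) refresh also requires a materialized view log on each source table.

```sql
-- On the source Oracle 8i/9i database: log row changes for fast refresh.
CREATE MATERIALIZED VIEW LOG ON hr.employees;

-- On the intermediate Oracle 10g/11g EC2 instance: replicate over the link.
CREATE MATERIALIZED VIEW hr.employees_mv
  BUILD IMMEDIATE
  REFRESH FAST
  AS SELECT * FROM hr.employees@src_link;

-- Incremental (fast) refresh, suitable for scheduling at required intervals:
BEGIN
  DBMS_MVIEW.REFRESH('HR.EMPLOYEES_MV', method => 'F');
END;
/
```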

### Convert the Oracle database schema to PostgreSQL
<a name="convert-the-oracle-database-schema-to-postgresql"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up AWS SCT. | Create a new report, and then connect to Oracle as the source and PostgreSQL as the target. In project settings, open the **SQL Scripting** tab. Change the target SQL script to **Multiple Files**. (AWS SCT doesn't support Oracle 8i/9i databases, so you have to restore the schema-only dump on the intermediate Oracle 10g/11g instance and use it as a source for AWS SCT.) | DBA | 
| Convert the Oracle database schema. | On the **Action** tab, choose **Generate Report**, **Convert Schema**, and then **Save as SQL**. | DBA | 
| Modify the SQL scripts. | Make modifications based on best practices. For example, switch to suitable data types and develop PostgreSQL equivalents for Oracle-specific functions. | DBA, DevDBA | 

### Create and configure the Amazon RDS DB instance to host the converted database
<a name="create-and-configure-the-amazon-rds-db-instance-to-host-the-converted-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Amazon RDS DB instance. | In the Amazon RDS console, create a new PostgreSQL DB instance. | AWS SysAdmin, DBA | 
| Configure the DB instance. | Specify the DB engine version, DB instance class, Multi-AZ deployment, storage type, and allocated storage. Enter the DB instance identifier, a master user name, and a master password. | AWS SysAdmin, DBA | 
| Configure network and security. | Specify the VPC, subnet group, public accessibility, Availability Zone preference, and security groups. | DBA, SysAdmin | 
| Configure database options. | Specify the database name, port, parameter group, encryption, and master key. | DBA, AWS SysAdmin | 
| Configure backups. | Specify the backup retention period, backup window, start time, duration, and whether to copy tags to snapshots. | AWS SysAdmin, DBA | 
| Configure monitoring options. | Enable or disable enhanced monitoring and performance insights. | AWS SysAdmin, DBA | 
| Configure maintenance options. | Specify auto minor version upgrade, maintenance window, and the start day, time, and duration. | AWS SysAdmin, DBA | 
| Run the pre-migration scripts from AWS SCT. | On the target Amazon RDS for PostgreSQL instance, create the database schema by using the SQL scripts from AWS SCT with other modifications. These might include running multiple scripts and including user creation, database creation, schema creation, tables, views, functions, and other code objects. | AWS SysAdmin, DBA | 

### Migrate data by using AWS DMS
<a name="migrate-data-by-using-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a replication instance in AWS DMS. | Complete the fields for the name, instance class, VPC (same as for the EC2 instance), Multi-AZ, and public accessibility. In the advanced configuration section, specify allocated storage, subnet group, Availability Zone, VPC security groups, and AWS Key Management Service (AWS KMS) key. | AWS SysAdmin, DBA | 
| Create the source database endpoint. | Specify the endpoint name, type, source engine (Oracle), server name (the EC2 instance's private DNS name), port, SSL mode, user name, password, SID, VPC (specify the VPC that has the replication instance), and replication instance. To test the connection, choose **Run Test**, and then create the endpoint. You can also configure the following advanced settings: **maxFileSize** and **numberDataTypeScale**. | AWS SysAdmin, DBA | 
| Connect AWS DMS to Amazon RDS for PostgreSQL. | Create a migration security group for connections across VPCs, if your PostgreSQL database is in another VPC. | AWS SysAdmin, DBA | 
| Create the target database endpoint. | Specify the endpoint name, type, target engine (PostgreSQL), server name (Amazon RDS endpoint), port, SSL mode, user name, password, database name, VPC (specify the VPC that has the replication instance), and replication instance. To test the connection, choose **Run Test**, and then create the endpoint. You can also configure the following advanced settings: **maxFileSize** and **numberDataTypeScale**. | AWS SysAdmin, DBA | 
| Create the AWS DMS replication task. | Specify the task name, replication instance, source and target endpoints, and replication instance. For migration type, choose **Migrate existing data and replicate ongoing changes**. Clear the **Start task on create** check box. | AWS SysAdmin, DBA | 
| Configure the AWS DMS replication task settings. | For target table preparation mode, choose **Do nothing**. Stop the task after full load completes (to create primary keys). Specify limited or full LOB mode, and enable control tables. Optionally, you can configure the **CommitRate** advanced setting. | DBA | 
| Configure the table mappings. | In the **Table mappings** section, create an Include rule for all tables in all schemas included in the migration, and then create an Exclude rule. Add three transformation rules to convert the schema, table, and column names to lowercase, and add any other rules you need for this specific migration. For an example of these rules, see the sketch after this table. | DBA | 
| Start the task. | Start the replication task. Make sure that the full load is running. Run `ALTER SYSTEM SWITCH LOGFILE` on the primary Oracle database to start the task. | DBA | 
| Run the mid-migration scripts from AWS SCT. | In Amazon RDS for PostgreSQL, run the following scripts: `create_index.sql` and `create_constraint.sql` (if the complete schema wasn't initially created). | DBA | 
| Resume the task to continue change data capture (CDC). | Run `VACUUM` on the Amazon RDS for PostgreSQL DB instance, and restart the AWS DMS task to apply cached CDC changes. | DBA | 
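
The following boto3 sketch shows one way to script the replication task and the lowercase transformation rules described above. The ARNs and task identifier are placeholders.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Include everything, then lowercase schema, table, and column names.
table_mappings = {
    "rules": [
        {"rule-type": "selection", "rule-id": "1", "rule-name": "include-all",
         "object-locator": {"schema-name": "%", "table-name": "%"},
         "rule-action": "include"},
        {"rule-type": "transformation", "rule-id": "2", "rule-name": "lc-schema",
         "rule-target": "schema", "object-locator": {"schema-name": "%"},
         "rule-action": "convert-lowercase"},
        {"rule-type": "transformation", "rule-id": "3", "rule-name": "lc-table",
         "rule-target": "table",
         "object-locator": {"schema-name": "%", "table-name": "%"},
         "rule-action": "convert-lowercase"},
        {"rule-type": "transformation", "rule-id": "4", "rule-name": "lc-column",
         "rule-target": "column",
         "object-locator": {"schema-name": "%", "table-name": "%", "column-name": "%"},
         "rule-action": "convert-lowercase"},
    ]
}

# ARNs are placeholders. "full-load-and-cdc" corresponds to "Migrate existing
# data and replicate ongoing changes" in the console.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgres",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```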

### Cut over to the PostgreSQL database
<a name="cut-over-to-the-postgresql-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check the AWS DMS logs and validation tables. | Check and fix any replication or validation errors. | DBA | 
| Stop using the on-premises Oracle database and its dependencies. | Stop all Oracle dependencies, shut down listeners on the Oracle database, and run `ALTER SYSTEM SWITCH LOGFILE`. Stop the AWS DMS task when it shows no activity. | DBA | 
| Run the post-migration scripts from AWS SCT. | In Amazon RDS for PostgreSQL, run these scripts: `create_foreign_key_constraint.sql` and `create_triggers.sql`. Make sure that the sequences are up to date. | DBA | 
| Complete additional Amazon RDS for PostgreSQL steps. | Increment sequences to match Oracle if needed, run `VACUUM` and `ANALYZE`, and take a snapshot for compliance. For an example of these steps, see the sketch after this table. | DBA | 
| Open the connections to Amazon RDS for PostgreSQL. | Remove the AWS DMS security groups from Amazon RDS for PostgreSQL, add production security groups, and point your applications to the new database. | DBA | 
| Clean up the AWS DMS objects. | Remove the endpoints, replication tasks, replication instances, and the EC2 instance. | SysAdmin, DBA | 
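
The following psycopg2 sketch illustrates the sequence increment and `VACUUM`/`ANALYZE` steps above. The host, credentials, sequence names, and values are assumptions for illustration; capture the actual high-water values from Oracle at cutover.

```python
import psycopg2

# Connection details are placeholders for the target RDS instance.
conn = psycopg2.connect(
    host="target-postgres.xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="appdb", user="master_user", password="REPLACE_ME",
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block
cur = conn.cursor()

# Bump each sequence past the highest value observed in Oracle.
# oracle_last_values is illustrative; read the real values at cutover.
oracle_last_values = {"orders_id_seq": 1048576, "customers_id_seq": 204800}
for seq, last in oracle_last_values.items():
    cur.execute("SELECT setval(%s, %s)", (seq, last))

cur.execute("VACUUM ANALYZE")  # reclaim space and refresh planner statistics
cur.close()
conn.close()
```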

## Related resources
<a name="migrate-from-oracle-8i-or-9i-to-amazon-rds-for-postgresql-using-materialized-views-and-aws-dms-resources"></a>
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html)
+ [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)
+ [Amazon RDS for PostgreSQL pricing](https://aws.amazon.com/rds/postgresql/pricing/)
+ [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ [Using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html)

# Migrate from Oracle on Amazon EC2 to Amazon RDS for MySQL using AWS DMS and AWS SCT
<a name="migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct"></a>

*Anil Kunapareddy, Amazon Web Services*

*Harshad Gohil*

## Summary
<a name="migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-summary"></a>

Managing Oracle databases on Amazon Elastic Compute Cloud (Amazon EC2) instances requires resources and can be costly. Moving these databases to an Amazon Relational Database Service (Amazon RDS) for MySQL DB instance eases administration and helps optimize your overall IT budget. Amazon RDS for MySQL also provides features such as Multi-AZ deployments, scalability, and automatic backups. 

This pattern walks you through the migration of a source Oracle database on Amazon EC2 to a target Amazon RDS for MySQL DB instance. It uses AWS Database Migration Service (AWS DMS) to migrate the data, and AWS Schema Conversion Tool (AWS SCT) to convert the source database schema and objects to a format that's compatible with Amazon RDS for MySQL. 

## Prerequisites and limitations
<a name="migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source database with instance and listener services running, in ARCHIVELOG mode
+ A target Amazon RDS for MySQL database, with sufficient storage for data migration

**Limitations**
+ AWS DMS doesn't create a schema on the target database, so you must create it yourself. The schema name must already exist on the target. AWS DMS imports tables from the source schema into the user/schema that it uses to connect to the target instance. To migrate multiple schemas, you must create multiple replication tasks (see the sketch after this list). 
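
As an illustration of this limitation, the following boto3 sketch creates one replication task per source schema, each scoped by its own selection rule. The schema names, ARNs, and task identifiers are placeholders, not values from this pattern.

```python
import json
import boto3

dms = boto3.client("dms")

# One task per schema, because a single task loads into the schema of the
# user that AWS DMS connects as. Names and ARNs are placeholders.
for schema in ["HR", "SALES", "FINANCE"]:
    mappings = {"rules": [{
        "rule-type": "selection", "rule-id": "1",
        "rule-name": "include-" + schema,
        "object-locator": {"schema-name": schema, "table-name": "%"},
        "rule-action": "include",
    }]}
    dms.create_replication_task(
        ReplicationTaskIdentifier=f"oracle-to-mysql-{schema.lower()}",
        SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",
        TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",
        ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",
        MigrationType="full-load",
        TableMappings=json.dumps(mappings),
    )
```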

**Product versions**
+ All Oracle database editions for versions 10.2 and later, 11g and up to 12.2, and 18c. For the latest list of supported versions, see [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) and [Using a MySQL-Compatible Database as a Target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html). We recommend that you use the latest version of AWS DMS for the most comprehensive version and feature support. For information about Oracle database versions supported by AWS SCT, see the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html).
+ AWS DMS supports versions 5.5, 5.6, and 5.7 of MySQL. 

## Architecture
<a name="migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-architecture"></a>

**Source technology stack**
+ An Oracle database on an EC2 instance  

**Target technology stack**
+ Amazon RDS for MySQL DB instance

**Data migration architecture**

![\[Using AWS DMS to migrate from Oracle on Amazon EC2 to Amazon RDS for MySQL\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/8a8e346e-7944-4999-bc11-208efead3792/images/c00f908c-f348-41dd-a31c-3931b990777a.png)


**Source and target architecture**

![\[Using AWS DMS and AWS SCT to migrate from Oracle on Amazon EC2 to Amazon RDS for MySQL\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/8a8e346e-7944-4999-bc11-208efead3792/images/e7ba7ac0-3094-4142-b355-fb192e242432.png)


## Tools
<a name="migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-tools"></a>
+ **AWS DMS** - [AWS Database Migration Service](https://docs.aws.amazon.com/dms/) (AWS DMS) is a web service that you can use to migrate data from a database that is on premises, on an Amazon RDS DB instance, or on an EC2 instance to a database on an AWS service, such as Amazon RDS for MySQL or a database on an EC2 instance. You can also migrate a database from an AWS service to an on-premises database, and you can migrate data between heterogeneous or homogeneous database engines.
+ **AWS SCT** - [AWS Schema Conversion Tool](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) (AWS SCT) makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format that's compatible with the target database. After converting your database schema and code objects using AWS SCT, you can use AWS DMS to migrate data from the source database to the target database to complete your migration projects.

## Epics
<a name="migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-epics"></a>

### Plan the migration
<a name="plan-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify the source and target database versions and engines. |  | DBA/Developer | 
| Identify the DMS replication instance. |  | DBA/Developer | 
| Identify storage requirements such as storage type and capacity. |  | DBA/Developer | 
| Identify network requirements such as latency and bandwidth. |  |  DBA/Developer | 
| Identify hardware requirements for the source and target server instances (based on Oracle compatibility list and capacity requirements). |  | DBA/Developer | 
| Identify network access security requirements for source and target databases. |  | DBA/Developer | 
| Install AWS SCT and Oracle drivers. |  | DBA/Developer | 
| Determine a backup strategy. |  | DBA/Developer | 
| Determine availability requirements. |  | DBA/Developer | 
| Identify application migration and switch-over strategy. |  | DBA/Developer | 
| Select the proper DB instance type based on capacity, storage, and network features. |  | DBA/Developer | 

### Configure the environment
<a name="configure-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual private cloud (VPC). The source, target, and replication instance should be in the same VPC, and ideally in the same Availability Zone. |  | Developer | 
| Create the necessary security groups for database access. |  |  Developer | 
| Generate and configure a key pair. |  | Developer | 
| Configure subnets, Availability Zones, and CIDR blocks. |  | Developer | 

### Configure the source: Oracle database on EC2 instance
<a name="configure-the-source-oracle-database-on-ec2-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install Oracle Database on Amazon EC2 with required users and roles. |  | DBA | 
| Perform the three steps in the task description to access Oracle from outside the EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct.html) | DBA | 
| When the EC2 instance restarts, its public DNS name changes. Make sure to update the Amazon EC2 public DNS name in `tnsnames` and `listener`, or use an Elastic IP address. |  | DBA/Developer | 
| Configure the EC2 instance security group so that the replication instance and required clients can access the source database. |  | DBA/Developer | 

### Configure the target: Amazon RDS for MySQL
<a name="configure-the-target-amazon-rds-for-mysql"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure and start the Amazon RDS for MySQL DB instance. |  | Developer | 
| Create the necessary tablespace in the Amazon RDS for MySQL DB instance. |  | DBA | 
| Configure the security group so that the replication instance and required clients can access the target database. |  | Developer | 

### Configure AWS SCT and create a schema in the target database
<a name="configure-aws-sct-and-create-a-schema-in-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install AWS SCT and Oracle drivers. |  | Developer | 
| Enter the appropriate parameters and connect to the source and target. |  | Developer | 
| Generate a schema conversion report. |  | Developer | 
| Correct the code and schema as necessary, especially tablespaces and quotes, and run on the target database. |  |  Developer | 
| Validate the schema on source vs. target before migrating data. |  | Developer | 

### Migrate data using AWS DMS
<a name="migrate-data-using-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| For full-load and change data capture (CDC) or just CDC, you must set an extra connection attribute (see the sketch after this table). |  | Developer | 
| The user specified in the AWS DMS source Oracle database definitions must be granted all the required privileges. For a complete list, see [Using a self-managed Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Self-Managed). |  | DBA/Developer | 
| Enable supplemental logging in the source database. |  | DBA/Developer | 
| For full-load and change data capture (CDC) or just CDC, enable ARCHIVELOG mode in the source database. |  | DBA | 
| Create source and target endpoints, and test the connections. |  | Developer | 
| When the endpoints are connected successfully, create a replication task. |  | Developer | 
| Select CDC only or full load plus CDC in the task to capture changes for continuous replication only, or for full load plus ongoing changes, respectively. |  | Developer | 
| Run the replication task and monitor Amazon CloudWatch logs. |  |  Developer | 
| Validate the data in the source and target databases. |  | Developer | 
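
The following boto3 sketch shows a source endpoint definition that sets an extra connection attribute, as referenced in the table above. The server name, credentials, and the choice of the `addSupplementalLogging` attribute are assumptions for illustration; confirm the attributes your scenario needs in the AWS DMS documentation.

```python
import boto3

dms = boto3.client("dms")

# Endpoint values are placeholders. addSupplementalLogging asks AWS DMS to
# set table-level supplemental logging; database-level supplemental logging
# still has to be enabled on the source by a DBA.
source = dms.create_endpoint(
    EndpointIdentifier="oracle-ec2-source",
    EndpointType="source",
    EngineName="oracle",
    ServerName="ec2-xx-xx-xx-xx.compute-1.amazonaws.com",
    Port=1521,
    DatabaseName="ORCL",            # the Oracle SID
    Username="dms_user",
    Password="REPLACE_ME",
    ExtraConnectionAttributes="addSupplementalLogging=true",
)
print(source["Endpoint"]["EndpointArn"])
```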

### Migrate your application and cut over
<a name="migrate-your-application-and-cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the steps for your application migration strategy. |  | DBA, Developer, App owner | 
| Follow the steps for your application cutover/switch-over strategy. |  | DBA, Developer, App owner | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the schema and data in source vs. target databases. |  | DBA/Developer | 
| Gather metrics around time to migrate, percent of manual vs. tool, cost savings, etc. |  |  DBA/Developer/AppOwner | 
| Review the project documents and artifacts. |  | DBA/Developer/AppOwner | 
| Shut down temporary AWS resources. |  | DBA/Developer | 
| Close out the project and provide feedback. |  | DBA/Developer/AppOwner | 

## Related resources
<a name="migrate-from-oracle-on-amazon-ec2-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-resources"></a>
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) 
+ [AWS DMS website](https://aws.amazon.com/dms/)
+ [AWS DMS blog posts](https://aws.amazon.com/blogs/database/tag/dms/) 
+ [Strategies for Migrating Oracle Database to AWS](https://d1.awsstatic.com/whitepapers/strategies-for-migrating-oracle-database-to-aws.pdf) 
+ [Amazon RDS for Oracle FAQs](https://aws.amazon.com/rds/oracle/faqs/) 
+ [Oracle FAQ](https://aws.amazon.com/oracle/faq/) 
+ [Amazon EC2](https://aws.amazon.com/ec2/) 
+ [Amazon EC2 FAQs](https://aws.amazon.com/ec2/faqs/)
+ [Licensing Oracle Software in the Cloud Computing Environment](http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf)

# Migrate an Oracle database from Amazon EC2 to Amazon RDS for MariaDB using AWS DMS and AWS SCT
<a name="migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct"></a>

*Veeranjaneyulu Grandhi and Vinod Kumar, Amazon Web Services*

## Summary
<a name="migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct-summary"></a>

This pattern walks you through the steps for migrating an Oracle database on an Amazon Elastic Compute Cloud (Amazon EC2) instance to an Amazon Relational Database Service (Amazon RDS) for MariaDB DB instance. The pattern uses AWS Database Migration Service (AWS DMS) for data migration and AWS Schema Conversion Tool (AWS SCT) for schema conversion. 

Managing Oracle databases on EC2 instances requires more resources and is more costly than using a database on Amazon RDS. Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.

## Prerequisites and limitations
<a name="migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A source Oracle database with instance and listener services up and running. This database should be in ARCHIVELOG mode.
+ Familiarity with [Using an Oracle Database as a Source for AWS DMS.](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ Familiarity with [Using Oracle as a Source for AWS SCT.](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html)

**Limitations**
+ Database size limit: 64 TB 

**Product versions**
+ All Oracle database editions for versions 10.2 and later, 11g and up to 12.2, and 18c. For the latest list of supported versions, see [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) and the [AWS SCT version table](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) in the AWS documentation.
+ Amazon RDS supports MariaDB Server Community Server versions 10.3, 10.4, 10.5, and 10.6. For the latest list of supported versions, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MariaDB.html).

## Architecture
<a name="migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct-architecture"></a>

**Source technology stack**
+ An Oracle database on an EC2 instance

**Target technology stack**
+ Amazon RDS for MariaDB

**Data migration architecture**

![\[Using AWS DMS for the migration.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0b4269c6-8ea3-4672-ad14-1ffac1dc14f3/images/ed191145-e5c2-4d61-8827-31f081450c03.png)


**Target architecture**

![\[Using AWS SCT for the migration.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0b4269c6-8ea3-4672-ad14-1ffac1dc14f3/images/0171f548-37dd-4110-851c-7e74dfff3732.png)


## Tools
<a name="migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct-tools"></a>
+ [AWS Schema Conversion Tool](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) (AWS SCT) makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects—including views, stored procedures, and functions—to a format compatible with the target database. After converting your database schema and code objects using AWS SCT, you can use AWS DMS to migrate data from the source database to the target database to complete your migration projects. For more information, see [Using Oracle as a Source for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html) in the AWS SCT documentation.
+ [AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate your data to and from the most widely used commercial and open-source databases. AWS DMS supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle or Microsoft SQL Server to Amazon Aurora. To learn more about migrating Oracle databases, see [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) in the AWS DMS documentation.

## Epics
<a name="migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct-epics"></a>

### Plan for the migration
<a name="plan-for-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Identify versions and database engines. | Identify the source and target database versions and engines. | DBA, Developer | 
| Identify the replication instance. | Identify the AWS DMS replication instance. | DBA, Developer | 
| Identify storage requirements. | Identify storage type and capacity. | DBA, Developer | 
| Identify network requirements. | Identify network latency and bandwidth. | DBA, Developer | 
| Identify hardware requirements. | Identify hardware requirements for the source and target server instances (based on the Oracle compatibility list and capacity requirements). | DBA, Developer | 
| Identify security requirements. | Identify network-access security requirements for the source and target databases. | DBA, Developer | 
| Install drivers. | Install the latest AWS SCT and Oracle drivers. | DBA, Developer | 
| Determine a backup strategy. |  | DBA, Developer | 
| Determine availability requirements. |  | DBA, Developer | 
| Choose an application migration/switchover strategy. |  | DBA, Developer | 
| Select the instance type. | Select the proper instance type based on capacity, storage, and network features. | DBA, Developer | 

### Configure the environment
<a name="configure-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual private cloud (VPC).  | The source, target, and replication instances should be in the same VPC and in the same Availability Zone (recommended). | Developer | 
| Create security groups. | Create the necessary security groups for database access. | Developer | 
| Generate a key pair. | Generate and configure a key pair. | Developer | 
| Configure other resources. | Configure subnets, Availability Zones, and CIDR blocks. | Developer | 

### Configure the source
<a name="configure-the-source"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Launch the EC2 instance. | For instructions, see the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html). | Developer | 
| Install the Oracle database. | Install the Oracle database on the EC2 instance, with required users and roles. | DBA | 
| Follow the steps in the task description to access Oracle from outside of the EC2 instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct.html) | DBA | 
| Update the Amazon EC2 public DNS. | After the EC2 instance restarts, the public DNS changes. Make sure to update the Amazon EC2 public DNS in `tnsnames` and `listener`, or use an Elastic IP address. | DBA, Developer | 
| Configure the EC2 instance security group. | Configure the EC2 instance security group so the replication instance and required clients can access the source database. | DBA, Developer | 

### Configure the target Amazon RDS for MariaDB environment
<a name="configure-the-target-amazon-rds-for-mariadb-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Start the RDS DB instance. | Configure and start the Amazon RDS for MariaDB DB instance. | Developer | 
| Create tablespaces. | Create any necessary tablespaces in the Amazon RDS MariaDB database. | DBA | 
| Configure a security group. | Configure a security group so the replication instance and required clients can access the target database (see the sketch after this table). | Developer | 
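
The following boto3 sketch adds an ingress rule so that the replication instance's security group can reach MariaDB on its default port. Both security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Security group IDs are placeholders: allow the DMS replication instance's
# security group to reach MariaDB on port 3306.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # target database security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0fedcba9876543210",
             "Description": "DMS replication instance"}
        ],
    }],
)
```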

### Configure AWS SCT
<a name="configure-aws-sct"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install drivers. | Install the latest AWS SCT and Oracle drivers. | Developer | 
| Connect. | Enter appropriate parameters and then connect to the source and target. | Developer | 
| Generate a schema conversion report. | Generate an AWS SCT schema conversion report. | Developer | 
| Correct the code and schema as necessary. | Make any necessary corrections to the code and schema (especially tablespaces and quotation marks). | DBA, Developer | 
| Validate the schema. | Validate the schema on the source versus the target before loading data. | Developer | 

### Migrate data using AWS DMS
<a name="migrate-data-using-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set a connection attribute. | For full-load and change data capture (CDC) or just for CDC, set an extra connection attribute. For more information, see the [Amazon RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MariaDB.html). | Developer | 
| Enable supplemental logging. | Enable supplemental logging on the source database. | DBA, Developer | 
| Enable archive log mode. | For full-load and CDC (or just for CDC), enable archive log mode on the source database. | DBA | 
| Create and test endpoints. | Create source and target endpoints and test the connections. For more information, see the [Amazon DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.Creating.html). For a scripted connection test, see the sketch after this table. | Developer | 
| Create a replication task. | When the endpoints are connected successfully, create a replication task. For more information, see the [Amazon DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html). | Developer | 
| Choose replication type. | Choose **CDC only** or **Full load plus CDC** in the task to capture changes for continuous replication only, or for full load and ongoing changes, respectively. | Developer | 
| Start and monitor the task. | Start the replication task and monitor Amazon CloudWatch logs. For more information, see the [Amazon DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html). | Developer | 
| Validate the data. | Validate the data in the source and target databases. | Developer | 
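
The following boto3 sketch tests both endpoint connections and then starts the replication task, as described in the table above. The ARNs are placeholders, and the polling is deliberately simplistic.

```python
import time
import boto3

dms = boto3.client("dms")

REP_ARN = "arn:aws:dms:...:rep:INSTANCE"        # placeholder ARNs
ENDPOINTS = ["arn:aws:dms:...:endpoint:SOURCE",
             "arn:aws:dms:...:endpoint:TARGET"]

# Kick off a connection test for each endpoint, then poll for the result.
for ep in ENDPOINTS:
    dms.test_connection(ReplicationInstanceArn=REP_ARN, EndpointArn=ep)

time.sleep(60)  # connection tests usually finish within a minute
for ep in ENDPOINTS:
    conns = dms.describe_connections(
        Filters=[{"Name": "endpoint-arn", "Values": [ep]}]
    )["Connections"]
    print(ep, conns[0]["Status"])  # expect "successful"

# Once both tests pass, start the replication task and watch its
# CloudWatch logs for errors.
dms.start_replication_task(
    ReplicationTaskArn="arn:aws:dms:...:task:TASK",
    StartReplicationTaskType="start-replication",
)
```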

### Migrate applications and cut over to the target database
<a name="migrate-applications-and-cut-over-to-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the chosen application migration strategy. |  | DBA, App owner, Developer | 
| Follow the chosen application cutover/switchover strategy. |  | DBA, App owner, Developer | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the schema and data. | Ensure that the schema and data are validated successfully in the source versus the target before project closure. | DBA, Developer | 
| Gather metrics. | Gather metrics for time to migrate, percentage of manual versus tool tasks, cost savings, and similar criteria. | DBA, App owner, Developer | 
| Review documentation. | Review the project documents and artifacts. | DBA, App owner, Developer | 
| Shut down resources. | Shut down temporary AWS resources. | DBA, Developer | 
| Close the project. | Close the migration project and provide any feedback. | DBA, App owner, Developer | 

## Related resources
<a name="migrate-an-oracle-database-from-amazon-ec2-to-amazon-rds-for-mariadb-using-aws-dms-and-aws-sct-resources"></a>
+ [MariaDB Amazon RDS overview](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MariaDB.html)
+ [Amazon RDS for MariaDB product details](https://aws.amazon.com/rds/mariadb/features)
+ [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ [Strategies for Migrating Oracle Databases to AWS](https://docs.aws.amazon.com/whitepapers/latest/strategies-migrating-oracle-db-to-aws/strategies-migrating-oracle-db-to-aws.html)
+ [Licensing Oracle Software in the Cloud Computing Environment](http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf)
+ [Amazon RDS for Oracle FAQs](https://aws.amazon.com/rds/oracle/faqs/)
+ [AWS DMS overview](https://aws.amazon.com/dms/)
+ [AWS DMS blog posts](https://aws.amazon.com/blogs/database/tag/dms/)
+ [Amazon EC2 overview](https://aws.amazon.com/ec2/)
+ [Amazon EC2 FAQs](https://aws.amazon.com/ec2/faqs/)
+ [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)

# Migrate an on-premises Oracle database to Amazon RDS for MySQL using AWS DMS and AWS SCT
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct"></a>

*Sergey Dmitriev and Naresh Damera, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-summary"></a>

This pattern walks you through the migration of an on-premises Oracle database to an Amazon Relational Database Service (Amazon RDS) for MySQL DB instance. It uses AWS Database Migration Service (AWS DMS) to migrate the data, and AWS Schema Conversion Tool (AWS SCT) to convert the source database schema and objects to a format that's compatible with Amazon RDS for MySQL. 

## Prerequisites and limitations
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-prerequisites-and-limitations"></a>

**Prerequisites**
+ An active AWS account
+ A source Oracle database in an on-premises data center 

**Limitations**
+ Database size limit: 64 TB

**Product versions**
+ All Oracle database editions for versions 11g (versions 11.2.0.3.v1 and later) and up to 12.2, and 18c. For the latest list of supported versions, see [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html). We recommend that you use the latest version of AWS DMS for the most comprehensive version and feature support. For information about Oracle database versions supported by AWS SCT, see the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html). 
+ AWS DMS currently supports MySQL versions 5.5, 5.6, and 5.7. For the latest list of supported versions, see [Using a MySQL-Compatible Database as a Target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.MySQL.html) in the AWS documentation. 

## Architecture
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-architecture"></a>

**Source technology stack**
+ On-premises Oracle database

**Target technology stack**
+ Amazon RDS for MySQL DB instance

**Data migration architecture**

![\[AWS Cloud architecture showing data migration from on-premises to RDS via VPC, Internet Gateway, and AWS DMS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/0385e5ad-a1ca-4c29-945b-592321d95f9d/images/c872e033-b13a-4436-b503-0632b5d437ae.png)



## Tools
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-tools"></a>
+ **AWS DMS** - [AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/) (AWS DMS) helps you migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. You can use AWS DMS to migrate your data into the AWS Cloud, between on-premises instances (through an AWS Cloud setup), or between combinations of cloud and on-premises setups.
+ **AWS SCT** - [AWS Schema Conversion Tool](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) (AWS SCT) is used to convert your database schema from one database engine to another. The custom code that the tool converts includes views, stored procedures, and functions. Any code that the tool cannot convert automatically is clearly marked so that you can convert it yourself.

## Epics
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-epics"></a>

### Plan the migration
<a name="plan-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the source and target database version and engine. |  | DBA | 
|  Identify the hardware requirements for the target server instance. |  | DBA, SysAdmin | 
| Identify the storage requirements (storage type and capacity). |  | DBA, SysAdmin | 
| Choose the proper instance type based on capacity, storage features, and network features. |  | DBA, SysAdmin | 
| Identify the network access security requirements for the source and target databases. |  | DBA, SysAdmin  | 
| Identify the application migration strategy. |  | DBA, SysAdmin, App owner | 

### Configure the infrastructure
<a name="configure-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual private cloud (VPC) and subnets (see the sketch after this table). |  | SysAdmin | 
| Create the security groups and network access control lists (ACLs). |  | SysAdmin | 
| Configure and start an Amazon RDS DB instance. |  | DBA, SysAdmin | 
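
The following boto3 sketch creates the VPC, subnets, and a security group, as listed above. The CIDR ranges and Availability Zones are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# CIDR ranges and Availability Zones are placeholders; keep the replication
# instance and target RDS instance inside this one VPC.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                  AvailabilityZone="us-east-1a")
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                  AvailabilityZone="us-east-1b")

# One security group for database access; add ingress rules for the DMS
# replication instance and clients as a separate step.
sg_id = ec2.create_security_group(
    GroupName="migration-db-access",
    Description="Database access for the migration",
    VpcId=vpc_id,
)["GroupId"]
print(vpc_id, sg_id)
```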

### Migrate data
<a name="migrate-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Migrate the database schema by using AWS SCT. |  | DBA | 
| Migrate data by using AWS DMS. |  | DBA | 

### Migrate the application
<a name="migrate-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Use AWS SCT to analyze and convert the SQL code inside the application code. | For more information, see the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Converting.App.html). | App owner | 
| Follow the application migration strategy. |  | DBA, SysAdmin, App owner | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Switch the application clients over to the new infrastructure. |  | DBA, SysAdmin, App owner | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down the temporary AWS resources. |  | DBA, SysAdmin | 
| Review and validate the project documents. |  | DBA, SysAdmin | 
| Gather metrics around time to migrate, % of manual vs. tool, cost savings, etc. |  | DBA, SysAdmin | 
| Close out the project and provide feedback. |  |  | 

## Related resources
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-mysql-using-aws-dms-and-aws-sct-related-resources"></a>

**References**
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/)
+ [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) 
+ [Amazon RDS Pricing](https://aws.amazon.com/rds/pricing/)

**Tutorial and videos**
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/)
+ [Getting Started with Amazon RDS](https://aws.amazon.com/rds/getting-started/)
+ [AWS DMS (video)](https://www.youtube.com/watch?v=zb4GcjEdl8U) 
+ [Amazon RDS (video)](https://www.youtube.com/watch?v=igRfulrrYCo) 

# Migrate an on-premises Oracle database to Amazon RDS for PostgreSQL by using an Oracle bystander and AWS DMS
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms"></a>

*Cady Motyka, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms-summary"></a>

This pattern describes how you can migrate an on-premises Oracle database to either of the following PostgreSQL-compatible AWS database services with minimal downtime:
+ Amazon Relational Database Service (Amazon RDS) for PostgreSQL
+ Amazon Aurora PostgreSQL-Compatible Edition

The solution uses AWS Database Migration Service (AWS DMS) to migrate the data, AWS Schema Conversion Tool (AWS SCT) to convert the database schema, and an Oracle bystander database to help manage the migration. In this implementation, the downtime is restricted to however long it takes to create or validate all of the foreign keys on the database. 

The solution also uses Amazon Elastic Compute Cloud (Amazon EC2) instances with an Oracle bystander database to help control the stream of data through AWS DMS. You can temporarily pause streaming replication from the on-premises Oracle database to the Oracle bystander so that AWS DMS can catch up on data validation, or so that you can use another data validation tool. The Amazon RDS for PostgreSQL DB instance or Aurora PostgreSQL-Compatible DB instance and the bystander database will have the same data when AWS DMS finishes migrating current changes. 

## Prerequisites and limitations
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source Oracle database in an on-premises data center with an Active Data Guard standby database configured
+ AWS Direct Connect configured between the on-premises data center and AWS
+ AWS Secrets Manager for storing the database secrets
+ Java Database Connectivity (JDBC) drivers for AWS SCT connectors, installed either on a local machine or on the EC2 instance where AWS SCT is installed
+ Familiarity with [using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ Familiarity with [using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html)

**Limitations**
+ Database size limit: 64 TB

**Product versions**
+ AWS DMS supports all Oracle database editions for versions 10.2 and later (for versions 10.x), 11g and up to 12.2, 18c, and 19c. For the latest list of supported versions, see [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html). We recommend that you use the latest version of AWS DMS for the most comprehensive version and feature support. For information about Oracle database versions supported by AWS SCT, see the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html). 
+ AWS DMS supports PostgreSQL versions 9.4 and later (for versions 9.x), 10.x, 11.x, 12.x, and 13.x. For the latest information, see [Using a PostgreSQL Database as a Target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html) in the AWS documentation.

## Architecture
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms-architecture"></a>

**Source technology stack**
+ An on-premises Oracle database
+ An EC2 instance that holds a bystander for the Oracle database

**Target technology stack**
+ Amazon RDS for PostgreSQL or Aurora PostgreSQL instance, PostgreSQL 9.3 and later

**Target architecture**

The following diagram shows an example workflow for migrating an Oracle database to a PostgreSQL-compatible AWS database by using AWS DMS and an Oracle bystander:

![\[Migrating an on-premises Oracle database to PostgreSQL on AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6f5d5500-8b09-4bd1-8ef9-e670d58d07f8/images/1de98abd-c143-481a-b55f-e8d00eb96a38.png)


## Tools
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms-tools"></a>
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that’s compatible with the target database.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) helps you set up, operate, and scale a relational database in the AWS Cloud.

## Epics
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms-epics"></a>

### Convert the Oracle database schema to PostgreSQL
<a name="convert-the-oracle-database-schema-to-postgresql"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up AWS SCT. | Create a new report, and connect to Oracle as the source and PostgreSQL as the target. In **Project Settings**, go to the **SQL Scripting** tab. Change the **Target SQL Script** to **Multiple Files**. These files will be used later and are named as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms.html) | DBA | 
| Convert the Oracle database schema. | In the **Action** tab, choose **Generate Report**. Then, choose **Convert Schema** and choose **Save as SQL**. | DBA | 
| Modify the scripts. | For example, you might want to modify the script if a number in the source schema has been converted to numeric format in PostgreSQL, but you want to use **BIGINT** instead for better performance. | DBA | 

### Create and configure the Amazon RDS DB instance
<a name="create-and-configure-the-amazon-rds-db-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Amazon RDS DB instance. | In the correct AWS Region, create a new PostgreSQL DB instance. For more information, see [Creating a PostgreSQL DB instance and connecting to a database on a PostgreSQL DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html) in the Amazon RDS documentation. | AWS SysAdmin, DBA | 
| Configure DB instance specifications. | Specify the DB engine version, DB instance class, Multi-AZ deployment, storage type, and allocated storage. Enter the DB instance identifier, a primary user name, and a primary password. | AWS SysAdmin, DBA | 
| Configure network and security. | Specify the virtual private cloud (VPC), subnet group, public accessibility, Availability Zone preference, and security groups. | DBA, SysAdmin | 
| Configure database options. | Specify the database name, port, parameter group, encryption, and KMS key. | AWS SysAdmin, DBA | 
| Configure backups. | Specify the backup retention period, backup window, start time, duration, and whether to copy tags to snapshots. | AWS SysAdmin, DBA | 
| Configure monitoring options. | Activate or deactivate enhanced monitoring and performance insights. | AWS SysAdmin, DBA | 
| Configure maintenance options. | Specify auto minor version upgrade, maintenance window, and start day, time, and duration. | AWS SysAdmin, DBA | 
| Run the pre-migration scripts from AWS SCT. | On the Amazon RDS instance, run the following scripts generated by AWS SCT:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms.html) | AWS SysAdmin, DBA | 

### Configure the Oracle bystander in Amazon EC2
<a name="configure-the-oracle-bystander-in-amazon-ec2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the network for Amazon EC2. | Create the new VPC, subnets, internet gateway, route tables, and security groups. | AWS SysAdmin | 
| Create the EC2 instance. | In the appropriate AWS Region, create a new EC2 instance. Select the Amazon Machine Image (AMI), choose the instance size, and configure instance details: number of instances (1), the VPC and subnet you created in the previous task, auto-assign public IP, and other options. Add storage, configure security groups, and launch. When prompted, create and save a key pair for the next step. | AWS SysAdmin | 
| Connect the Oracle source database to the EC2 instance. | Copy the IPv4 public IP address and DNS name to a text file and connect by using SSH as follows: **ssh -i "your_file.pem" ec2-user@<your-IP-address-or-public-DNS>**. | AWS SysAdmin | 
| Set up the initial host for a bystander in Amazon EC2. | Set up SSH keys, bash profile, ORATAB, and symbolic links. Create Oracle directories. | AWS SysAdmin, Linux Admin | 
| Set up the database copy for a bystander in Amazon EC2. | Use RMAN to create a database copy, enable supplemental logging, and create the standby control file. After copying is complete, place the database in recovery mode. | AWS SysAdmin, DBA | 
| Set up Oracle Data Guard. | Modify your **listener.ora** file and start the listener. Set up a new archive destination. Place the bystander in recovery mode, replace temporary files to avoid future corruption, install a crontab if necessary to prevent the archive directory from running out of space, and edit the **manage-trclog-files-oracle.cfg** file for the source and standby. | AWS SysAdmin, DBA | 
| Prep the Oracle database to sync shipping. | Add the standby log files and change the recovery mode. Change the log shipping to **SYNC AFFIRM** on both the source primary and the source standby. Switch logs on primary, confirm via the Amazon EC2 bystander alert log that you are using the standby log files, and confirm that the redo stream is flowing in SYNC (see the sketch after this table). | AWS SysAdmin, DBA | 
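
The following python-oracledb sketch illustrates switching log shipping to **SYNC AFFIRM** and forcing a log switch, as described in the last task above. The archive destination number, TNS alias, and credentials are assumptions for this environment; verify them against your own Data Guard configuration.

```python
import oracledb

# Connect to the on-premises primary as SYSDBA; DSN and credentials
# are placeholders.
conn = oracledb.connect(user="sys", password="REPLACE_ME",
                        dsn="primary-host:1521/ORCL",
                        mode=oracledb.AUTH_MODE_SYSDBA)
cur = conn.cursor()

# Ship redo to the EC2 bystander synchronously. The destination number and
# TNS alias are assumptions; adjust them to your Data Guard setup.
cur.execute("""
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=bystander_tns SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
      SCOPE=BOTH""")

# Force a log switch so the change takes effect immediately, then check the
# bystander alert log to confirm the redo stream is flowing in SYNC.
cur.execute("ALTER SYSTEM SWITCH LOGFILE")
```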

### Migrate data with AWS DMS
<a name="migrate-data-with-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a replication instance in AWS DMS. | Complete the fields for the name, instance class, VPC (same as the Amazon EC2 instance), Multi-AZ, and public accessibility. Under **Advanced**, specify allocated storage, subnet group, Availability Zone, VPC security groups, and AWS Key Management Service (AWS KMS) key. | AWS SysAdmin, DBA | 
| Create the source database endpoint. | Specify the endpoint name, type, source engine (Oracle), server name (Amazon EC2 private DNS name), port, SSL mode, user name, password, SID, VPC (specify the VPC that has the replication instance), and replication instance. To test the connection, choose **Run Test**, and then create the endpoint. You can also configure the following advanced settings: **maxFileSize** and **numberDataTypeScale**. | AWS SysAdmin, DBA | 
| Connect AWS DMS to Amazon RDS for PostgreSQL. | Create a migration security group for connections across VPCs. | AWS SysAdmin, DBA | 
| Create the target database endpoint. | Specify the endpoint name, type, target engine (PostgreSQL), server name (Amazon RDS endpoint), port, SSL mode, user name, password, database name, VPC (specify the VPC that has the replication instance), and replication instance. To test the connection, choose **Run Test**, and then create the endpoint. You can also configure the following advanced settings: **maxFileSize** and **numberDataTypeScale**. | AWS SysAdmin, DBA | 
| Create the AWS DMS replication task. | Specify the task name, replication instance, source and target endpoints, and replication instance. For migration type, choose **Migrate existing data and replicate ongoing changes**. Clear the **Start task on create** checkbox. | AWS SysAdmin, DBA | 
| Configure the AWS DMS replication task settings. | For target table preparation mode, choose **Do nothing**. Stop the task after full load completes (to create primary keys). Specify limited or full LOB mode, and activate control tables. Optionally, you can configure the **CommitRate** advanced setting. For an example of these task settings, see the sketch after this table. | DBA | 
| Configure table mappings. | In the **Table mappings** section, create an **Include** rule for all tables in all schemas included in the migration, and then create an **Exclude** rule. Add three transformation rules to convert the schema, table, and column names to lowercase, and add any other rules needed for this specific migration. | DBA | 
| Start the task. | Start the replication task. Make sure that the full load is running. Run **ALTER SYSTEM SWITCH LOGFILE** on the primary Oracle database to kick-start the task. | DBA | 
| Run the mid-migration scripts from AWS SCT. | In Amazon RDS for PostgreSQL, run the following scripts generated by AWS SCT: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms.html) | DBA | 
| Restart the task to continue change data capture (CDC). | Run **VACUUM** on the Amazon RDS for PostgreSQL DB instance, and restart the AWS DMS task to apply cached CDC changes. | DBA | 
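
The following boto3 sketch applies task settings that mirror the table above: leave the pre-created tables alone, stop the task after full load so that primary keys can be created, use limited LOB mode, and turn on control tables. The ARN and specific values are illustrative, and the task must be stopped before you modify its settings.

```python
import json
import boto3

dms = boto3.client("dms")

# Settings mirroring the tasks above; the ARN and values are illustrative.
task_settings = {
    "FullLoadSettings": {
        "TargetTablePrepMode": "DO_NOTHING",       # tables pre-created by AWS SCT
        "StopTaskCachedChangesNotApplied": True,   # pause so PKs can be created
        "CommitRate": 10000,                       # optional tuning knob
    },
    "TargetMetadata": {
        "SupportLobs": True,
        "LimitedSizeLobMode": True,                # or full LOB mode, per the table
        "LobMaxSize": 32,                          # KB
    },
    "ControlTablesSettings": {
        "ControlSchemaName": "dms_control",
        "StatusTableEnabled": True,
    },
}

dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:TASK",
    ReplicationTaskSettings=json.dumps(task_settings),
)
```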

### Cut over to the PostgreSQL database
<a name="cut-over-to-the-postgresql-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Review the AWS DMS logs and validation tables for any errors. | Check and fix any replication or validation errors. | DBA | 
| Stop all Oracle dependencies. | Stop all Oracle dependencies, shut down listeners on the Oracle database, and run **ALTER SYSTEM SWITCH LOGFILE**. Stop the AWS DMS task when it shows no activity. | DBA | 
| Run the post-migration scripts from AWS SCT. | In Amazon RDS for PostgreSQL, run the following scripts generated by AWS SCT:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms.html) | DBA | 
| Complete additional Amazon RDS for PostgreSQL steps. | Increment sequences to match Oracle if needed, run **VACUUM** and **ANALYZE**, and take a snapshot for compliance. | DBA | 
| Open the connections to Amazon RDS for PostgreSQL. | Remove the AWS DMS security groups from Amazon RDS for PostgreSQL, add production security groups, and point your applications to the new database. | DBA | 
| Clean up AWS DMS objects. | Remove the endpoints, replication tasks, replication instances, and the EC2 instance. | SysAdmin, DBA | 

## Related resources
<a name="migrate-an-on-premises-oracle-database-to-amazon-rds-for-postgresql-by-using-an-oracle-bystander-and-aws-dms-resources"></a>
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/)
+ [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)
+ [Amazon RDS for PostgreSQL pricing](https://aws.amazon.com/rds/postgresql/pricing/) 

# Migrate an Oracle Database to Amazon Redshift using AWS DMS and AWS SCT
<a name="migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-sct"></a>

*Piyush Goyal and Brian Motzer, Amazon Web Services*

## Summary
<a name="migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-sct-summary"></a>

This pattern provides guidance for migrating Oracle databases to an Amazon Redshift cloud data warehouse in the Amazon Web Services (AWS) Cloud by using AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT). The pattern covers source Oracle databases that are on premises or installed on an Amazon Elastic Compute Cloud (Amazon EC2) instance. It also covers Amazon Relational Database Service (Amazon RDS) for Oracle databases.

## Prerequisites and limitations
<a name="migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-sct-prereqs"></a>

**Prerequisites**
+ An Oracle database that is running in an on-premises data center or in the AWS Cloud
+ An active AWS account
+ Familiarity with [using an Oracle database as a source for AWS DMS](http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ Familiarity with [using an Amazon Redshift database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html)
+ Knowledge of Amazon RDS, Amazon Redshift, the applicable database technologies, and SQL
+ Java Database Connectivity (JDBC) drivers for AWS SCT connectors, where AWS SCT is installed

**Product versions**
+ For self-managed Oracle databases, AWS DMS supports all Oracle database editions for versions 10.2 and later (for versions 10.*x*), 11g and up to 12.2, 18c, and 19c. For Amazon RDS for Oracle databases that AWS manages, AWS DMS supports all Oracle database editions for versions 11g (versions 11.2.0.4 and later) and up to 12.2, 18c, and 19c. We recommend that you use the latest version of AWS DMS for the most comprehensive version and feature support.

## Architecture
<a name="migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-sct-architecture"></a>

**Source technology stack**

One of the following:
+ An on-premises Oracle database
+ An Oracle database on an EC2 instance
+ An Amazon RDS for Oracle DB instance

**Target technology stack**
+ Amazon Redshift

**Target architecture**

*From an Oracle database running in the AWS Cloud to Amazon Redshift:*

![\[Migrating an Oracle database in the AWS Cloud to an Amazon Redshift data warehouse.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/22807be0-c7e0-49c6-8923-7d23bf83a50d/images/7140e819-81d6-45c4-805b-8e10828076a7.png)


*From an Oracle database running in an on-premises data center to Amazon Redshift:*

![\[Migrating an on-premises Oracle database to an Amazon Redshift data warehouse.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/22807be0-c7e0-49c6-8923-7d23bf83a50d/images/d6654b48-0e1b-4b01-a261-5a640be01fd7.png)


## Tools
<a name="migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-sct-tools"></a>
+ [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) - AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS DMS can migrate your data to and from most widely used commercial and open-source databases. 
+ [AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) - AWS Schema Conversion Tool (AWS SCT) can be used to convert your existing database schema from one database engine to another. It supports various database engines, including Oracle, SQL Server, and PostgreSQL, as sources.

## Epics
<a name="migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-sct-epics"></a>

### Prepare for the migration
<a name="prepare-for-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the database versions. | Validate the source and target database versions and make sure they are supported by AWS DMS. For information about supported Oracle Database versions, see [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html). For information about using Amazon Redshift as a target, see [Using an Amazon Redshift database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html). | DBA | 
| Create a VPC and security group. | In your AWS account, create a virtual private cloud (VPC), if it doesn’t exist. Create a security group for outbound traffic to source and target databases. For more information, see the [Amazon Virtual Private Cloud (Amazon VPC) documentation](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). | Systems administrator | 
| Install AWS SCT. | Download and install the latest version of AWS SCT and its corresponding drivers. For more information, see [Installing, verifying, and updating the AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html). | DBA | 
| Create a user for the AWS DMS task. | Create an AWS DMS user in the source database and grant it READ privileges. This user will be used by both AWS SCT and AWS DMS. | DBA | 
| Test the DB connectivity. | Test the connectivity to the Oracle DB instance. | DBA | 
| Create a new project in AWS SCT. | Open the AWS SCT tool and create a new project. | DBA | 
| Analyze the Oracle schema to be migrated. | Use AWS SCT to analyze the schema to be migrated, and generate a database migration assessment report. For more information, see [Creating a database migration assessment report](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_AssessmentReport.Create.html) in the AWS SCT documentation. | DBA | 
| Review the assessment report. | Review the report for migration feasibility. Some DB objects might require manual conversion. For more information about the report, see [Viewing the assessment report](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_AssessmentReport.View.html) in the AWS SCT documentation. | DBA | 

### Prepare the target database
<a name="prepare-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon Redshift cluster. | Create an Amazon Redshift cluster within the VPC that you created previously. For more information, see [Amazon Redshift clusters](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html) in the Amazon Redshift documentation. | DBA | 
| Create database users. | Extract the list of users, roles, and grants from the Oracle source database. Create the users in the target Amazon Redshift database, and apply the extracted roles. (A query sketch for the extraction step follows this table.) | DBA | 
| Evaluate database parameters. | Review the database options, parameters, network files, and database links from the Oracle source database, and evaluate their applicability to the target. | DBA | 
| Apply any relevant settings to the target.  | For more information about this step, see [Configuration reference](https://docs.aws.amazon.com/redshift/latest/dg/cm_chap_ConfigurationRef.html) in the Amazon Redshift documentation. | DBA | 
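
For the *Create database users* task, one way to pull the user and role lists is to query the Oracle data dictionary. The following is a minimal sketch, assuming SQL\*Plus access with an account that can read the `DBA_` views; the connection string is a placeholder, and you should extend the exclusion list and also capture system and object privileges as needed.

```bash
# List non-default users and their granted roles from the Oracle data dictionary.
# You are prompted for the password; extend the exclusion list to cover all
# default accounts in your environment.
sqlplus -s dms_user@ORCL <<'EOF'
SET PAGESIZE 0 FEEDBACK OFF
SELECT username FROM dba_users
WHERE username NOT IN ('SYS', 'SYSTEM', 'DBSNMP', 'OUTLN');

SELECT grantee || ' -> ' || granted_role
FROM dba_role_privs
ORDER BY grantee;
EOF
```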

### Create objects in the target database
<a name="create-objects-in-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS DMS user in the target database. | Create an AWS DMS user in the target database and grant it read and write privileges. Validate the connectivity from AWS SCT. | DBA | 
| Convert the schema, review the SQL report, and save any errors or warnings. | For more information, see [Converting database schemas using the AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Converting.html) in the AWS SCT documentation. | DBA | 
| Apply the schema changes to the target database or save them as a .sql file. | For instructions, see [Saving and applying your converted schema in the AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Converting.DW.html#CHAP_Converting.DW.SaveAndApply) in the AWS SCT documentation. | DBA | 
| Validate the objects in the target database. | Validate the objects that were created in the previous step in the target database. Rewrite or redesign any objects that weren’t successfully converted. | DBA | 
| Disable foreign keys and triggers. | Disable any foreign keys and triggers. They can cause data-loading issues during the AWS DMS full load. | DBA | 

### Migrate data using AWS DMS
<a name="migrate-data-using-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS DMS replication instance. | Sign in to the AWS Management Console, and open the AWS DMS console. In the navigation pane, choose **Replication instances**, **Create replication instance**. For detailed instructions, see [step 1](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html#CHAP_GettingStarted.ReplicationInstance) in *Getting started with AWS DMS* in the AWS DMS documentation. | DBA | 
| Create source and target endpoints. | Create the source and target endpoints, and test the connection from the replication instance to both of them. For detailed instructions, see [step 2](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html#CHAP_GettingStarted.Endpoints) in *Getting started with AWS DMS* in the AWS DMS documentation. | DBA | 
| Create a replication task. | Create a replication task and select the appropriate migration method. For detailed instructions, see [step 3](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html#CHAP_GettingStarted.Tasks) in *Getting started with AWS DMS* in the AWS DMS documentation. | DBA | 
| Start the data replication. | Start the replication task and monitor the logs for any errors. For a scripted alternative to the tasks in this table, see the AWS CLI sketch that follows. | DBA | 
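
These console steps can also be scripted. The following AWS CLI sketch creates the replication instance and both endpoints; it is a minimal outline rather than a production configuration, and the identifiers, host names, subnet group, instance class, and credentials are placeholders that you must replace with your own values.

```bash
# Create a replication instance (placeholder instance class and subnet group).
aws dms create-replication-instance \
  --replication-instance-identifier oracle-to-redshift-ri \
  --replication-instance-class dms.c5.large \
  --allocated-storage 100 \
  --replication-subnet-group-identifier my-dms-subnet-group

# Create the Oracle source endpoint (placeholder connection details).
aws dms create-endpoint \
  --endpoint-identifier oracle-source \
  --endpoint-type source \
  --engine-name oracle \
  --server-name oracle.example.com \
  --port 1521 \
  --database-name ORCL \
  --username dms_user \
  --password '<password>'

# Create the Amazon Redshift target endpoint (placeholder cluster endpoint).
aws dms create-endpoint \
  --endpoint-identifier redshift-target \
  --endpoint-type target \
  --engine-name redshift \
  --server-name my-cluster.abc123.us-east-1.redshift.amazonaws.com \
  --port 5439 \
  --database-name dev \
  --username dms_user \
  --password '<password>'

# Verify connectivity from the replication instance to each endpoint.
aws dms test-connection \
  --replication-instance-arn <replication-instance-arn> \
  --endpoint-arn <source-endpoint-arn>
```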

### Migrate your application
<a name="migrate-your-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create application servers. | Create the new application servers on AWS. | Application owner | 
| Migrate the application code. | Migrate the application code to the new servers. | Application owner | 
| Configure the application server. | Configure the application server for the target database and drivers. | Application owner | 
| Optimize the application code. | Optimize the application code for the target engine. | Application owner | 

### Cut over to the target database
<a name="cut-over-to-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate users. | In the target Amazon Redshift database, validate users and grant them roles and privileges. | DBA | 
| Validate that the application is locked. | Make sure that the application is locked, to prevent further changes. | Application owner | 
| Validate the data. | Validate the data in the target Amazon Redshift database. | DBA | 
| Enable foreign keys and triggers. | Enable foreign keys and triggers in the target Amazon Redshift database. | DBA | 
| Connect to the new database. | Configure the application to connect to the new Amazon Redshift database. | Application owner | 
| Perform final checks. | Perform a final, comprehensive system check before going live. | DBA, Application owner | 
| Go live. | Go live with the target Amazon Redshift database. | DBA | 

### Close the migration project
<a name="close-the-migration-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down temporary AWS resources. | Shut down temporary AWS resources such as the AWS DMS replication instance and the EC2 instance used for AWS SCT.  | DBA, Systems administrator | 
| Review documents.  | Review and validate the migration project documents.     | DBA, Systems administrator | 
| Gather metrics. | Collect information about the migration project, such as the time to migrate, the percentage of manual versus tool tasks, and total cost savings.  | DBA, Systems administrator | 
| Close out the project. | Close out the project and provide feedback. | DBA, Systems administrator | 

## Related resources
<a name="migrate-an-oracle-database-to-amazon-redshift-using-aws-dms-and-aws-sct-resources"></a>

**References**
+ [AWS DMS user guide](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
+ [AWS SCT user guide](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) 
+ [Amazon Redshift getting started guide](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html)

**Tutorials and videos**
+ [Dive Deep into AWS SCT and AWS DMS](https://www.youtube.com/watch?v=kJs9U4ys5FE) (presentation from AWS re:Invent 2019)
+ [Getting Started with AWS Database Migration Service](https://aws.amazon.com/dms/getting-started/)

# Migrate an Oracle database to Aurora PostgreSQL using AWS DMS and AWS SCT
<a name="migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct"></a>

*Senthil Ramasamy, Amazon Web Services*

## Summary
<a name="migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct-summary"></a>

This pattern describes how to migrate an Oracle database to Amazon Aurora PostgreSQL-Compatible Edition by using AWS Data Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT). 

The pattern covers source Oracle databases that are on premises, Oracle databases that are installed on Amazon Elastic Compute Cloud (Amazon EC2) instances, and Amazon Relational Database Service (Amazon RDS) for Oracle databases. The pattern converts these databases to Aurora PostgreSQL-Compatible.

## Prerequisites and limitations
<a name="migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An Oracle database in an on-premises data center or in the AWS Cloud.
+ SQL clients installed either on a local machine or on an EC2 instance.
+ Java Database Connectivity (JDBC) drivers for AWS SCT connectors, installed on either a local machine or an EC2 instance where AWS SCT is installed. 

**Limitations**
+ Database size limit: 128 TB 
+ If the source database supports a commercial off-the-shelf (COTS) application or is vendor-specific, you might not be able to convert it to another database engine. Before using this pattern, confirm that the application supports Aurora PostgreSQL-Compatible.  

**Product versions**
+ For self-managed Oracle databases, AWS DMS supports all Oracle database editions for versions 10.2 and later (for versions 10.x), 11g, and up to 12.2, 18c, and 19c. For the latest list of supported Oracle database versions (both self-managed and Amazon RDS for Oracle), see [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) and [Using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html). 
+ We recommend that you use the latest version of AWS DMS for the most comprehensive version and feature support. For information about Oracle database versions supported by AWS SCT, see the [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html). 
+ Aurora supports the PostgreSQL versions listed in [Amazon Aurora PostgreSQL releases and engine versions](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.20180305.html).

## Architecture
<a name="migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct-architecture"></a>

**Source technology stack**

One of the following:
+ An on-premises Oracle database
+ An Oracle database on an EC2 instance  
+ An Amazon RDS for Oracle DB instance

**Target technology stack**
+ Aurora PostgreSQL-Compatible 

**Target architecture**

![\[Target architecture for migrating Oracle databases to Aurora PostgreSQL-Compatible.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6de157c4-dcc9-4186-ae32-17efbbbee709/images/68beb634-926e-4908-97b1-edcd23e06a2b.png)


**Data migration architecture**
+ From an Oracle database running in the AWS Cloud   
![\[Data migration architecture for an Oracle database on AWS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6de157c4-dcc9-4186-ae32-17efbbbee709/images/7fc32019-3db1-485b-93e5-6d5539be048c.png)

+ From an Oracle database running in an on-premises data center  
![\[Data migration architecture for an Oracle database in an on-premises data center.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6de157c4-dcc9-4186-ae32-17efbbbee709/images/c70d8774-aef7-4414-9766-ce8f25757c4b.png)

## Tools
<a name="migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct-tools"></a>
+ [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) helps you migrate data stores into the AWS Cloud or between combinations of cloud and on-premises setups.
+ [AWS Schema Conversion Tool (AWS SCT)](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) supports heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format compatible with the target database.

## Epics
<a name="migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct-epics"></a>

### Prepare for the migration
<a name="prepare-for-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the source database. | To prepare the source database, see [Using Oracle Database as a source for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.Oracle.html) in the AWS SCT documentation. | DBA | 
| Create an EC2 instance for AWS SCT. | Create and configure an EC2 instance for AWS SCT, if required. | DBA | 
| Download AWS SCT. | Download the latest version of AWS SCT and associated drivers. For more information, see [Installing, verifying, and updating AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Installing.html) in the AWS SCT documentation. | DBA | 
| Add users and permissions. | Add and validate the prerequisite users and permissions in the source database. | DBA | 
| Create an AWS SCT project. | Create an AWS SCT project for the workload, and connect to the source database. For instructions, see [Creating an AWS SCT project](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html#CHAP_UserInterface.Project) and [Adding database servers](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html#CHAP_UserInterface.AddServers) in the AWS SCT documentation. | DBA | 
| Evaluate feasibility. | Generate an assessment report, which summarizes action items for schemas that can’t be converted automatically and provides estimates for manual conversion efforts. For more information, see [Creating and reviewing the database migration assessment report](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_UserInterface.html#CHAP_UserInterface.AssessmentReport) in the AWS SCT documentation. | DBA | 

### Prepare the target database
<a name="prepare-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a target Amazon RDS DB instance. | Create a target Amazon RDS DB instance, using Amazon Aurora as the database engine. For instructions, see [Creating an Amazon RDS DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html) in the Amazon RDS documentation. | DBA | 
| Extract users, roles, and permissions. | Extract the list of users, roles, and permissions from the source database. | DBA | 
| Map users. | Map the existing database users to the new database users. | App owner | 
| Create users. | Create users in the target database. | DBA, App owner | 
| Apply roles. | Apply roles from the previous step to the target database. | DBA | 
| Check options, parameters, network files, and database links. | Review the source database for options, parameters, network files, and database links, and then evaluate their applicability to the target database. | DBA | 
| Apply settings. | Apply any relevant settings to the target database. | DBA | 

### Transfer objects
<a name="transfer-objects"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS SCT connectivity. | Configure AWS SCT connectivity to the target database. | DBA | 
| Convert the schema using AWS SCT. | AWS SCT automatically converts the source database schema and most of the custom code to a format that is compatible with the target database. Any code that the tool cannot convert automatically is clearly marked so that you can convert it manually. | DBA | 
| Review the report. | Review the generated SQL report and save any errors and warnings. | DBA | 
| Apply automated schema changes. | Apply automated schema changes to the target database or save them as a .sql file. | DBA | 
| Validate objects. | Validate that AWS SCT created the objects on the target.  | DBA | 
| Handle items that weren't converted. | Manually rewrite, reject, or redesign any items that failed to convert automatically. | DBA, App owner | 
| Apply role and user permissions. | Apply the generated role and user permissions and review any exceptions. | DBA | 

### Migrate the data
<a name="migrate-the-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Determine the method. | Determine the method for migrating data. | DBA | 
| Create a replication instance. | Create a replication instance from the AWS DMS console. For more information, see [Working with an AWS DMS replication instance](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html) in the AWS DMS documentation. | DBA | 
| Create the source and target endpoints. | To create endpoints, follow the instructions in [Creating source and target endpoints in the AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.Creating.html) documentation. | DBA | 
| Create a replication task. | To create a task, see [Working with AWS DMS tasks](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.html) in the AWS DMS documentation. | DBA | 
| Start the replication task and monitor the logs. | For more information about this step, see [Monitoring AWS DMS tasks](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html) in the AWS DMS documentation. | DBA | 
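
As an illustration of the endpoint and task steps above, the following sketch defines a simple selection rule and creates a full-load-and-CDC task with the AWS CLI. The schema name and the ARNs are placeholders.

```bash
# Example table mapping: include every table in the HR schema (placeholder).
cat > table-mappings.json <<'EOF'
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-hr-schema",
      "object-locator": { "schema-name": "HR", "table-name": "%" },
      "rule-action": "include"
    }
  ]
}
EOF

# Create a task that performs a full load and then replicates ongoing changes.
aws dms create-replication-task \
  --replication-task-identifier oracle-to-aurora-task \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json
```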

### Migrate the application
<a name="migrate-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Analyze and convert SQL items in the application code. | Use AWS SCT to analyze and convert the SQL items in the application code. When you convert your database schema from one engine to another, you also need to update the SQL code in your applications to interact with the new database engine instead of the old one. You can view, analyze, edit, and save the converted SQL code. | App owner | 
| Create application servers. | Create the new application servers on AWS. | App owner | 
| Migrate the application code. | Migrate the application code to the new servers. | App owner | 
| Configure the application servers. | Configure the application servers for the target database and drivers. | App owner | 
| Fix code. | Fix any code that’s specific to the source database engine in your application. | App owner | 
| Optimize code. | Optimize your application code for the target database engine. | App owner | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Cut over to the target database. | Perform the cutover to the new database. | DBA | 
| Lock the application. | Lock the application from any further changes. | App owner | 
| Validate changes. | Validate that all changes were propagated to the target database. | DBA | 
| Redirect to the target database. | Point the new application servers to the target database. | App owner | 
| Check everything. | Perform a final, comprehensive system check. | App owner | 
| Go live. | Complete final cutover tasks. | App owner | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down temporary resources. | Shut down the temporary AWS resources such as the AWS DMS replication instance and the EC2 instance used for AWS SCT. | DBA, App owner | 
| Update feedback. | Update feedback on the AWS DMS process for internal teams. | DBA, App owner | 
| Revise process and templates. | Revise the AWS DMS process and improve the template if necessary. | DBA, App owner | 
| Validate documents. | Review and validate the project documents. | DBA, App owner | 
| Gather metrics. | Gather metrics such as the time to migrate, the percentage of manual versus tool-based tasks, and cost savings. | DBA, App owner | 
| Close the project. | Close the migration project and provide feedback to stakeholders. | DBA, App owner | 

## Related resources
<a name="migrate-an-oracle-database-to-aurora-postgresql-using-aws-dms-and-aws-sct-resources"></a>

**References**
+ [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ [Using a PostgreSQL Database as a Target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html)
+ [Oracle Database 11g/12c to Amazon Aurora with PostgreSQL Compatibility (9.6.x) Migration Playbook](https://d1.awsstatic.com/whitepapers/Migration/oracle-database-amazon-aurora-postgresql-migration-playbook.pdf) 
+ [Oracle Database 19c to Amazon Aurora with PostgreSQL Compatibility (12.4) Migration Playbook](https://d1.awsstatic.com/whitepapers/Migration/oracle-database-amazon-aurora-postgresql-migration-playbook-12.4.pdf)
+ [Migrating an Amazon RDS for Oracle database to Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/dms/latest/sbs/chap-oracle-postgresql.html)
+ [AWS Data Migration Service](https://aws.amazon.com/dms/)
+ [AWS Schema Conversion Tool](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) 
+ [Migrate from Oracle to Amazon Aurora](https://aws.amazon.com/getting-started/projects/migrate-oracle-to-amazon-aurora/)
+ [Amazon RDS pricing](https://aws.amazon.com/rds/pricing/)

**Tutorials and videos**
+ [Database Migration Step-by-Step Walkthroughs](https://docs.aws.amazon.com/dms/latest/sbs/DMS-SBS-Welcome.html)
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/)
+ [Getting Started with Amazon RDS](https://aws.amazon.com/rds/getting-started/)
+ [AWS Data Migration Service](https://www.youtube.com/watch?v=zb4GcjEdl8U) (video)
+ [Migrating an Oracle database to PostgreSQL](https://www.youtube.com/watch?v=ibtNkChGFkw) (video)


# Migrate data from an on-premises Oracle database to Aurora PostgreSQL
<a name="migrate-data-from-an-on-premises-oracle-database-to-aurora-postgresql"></a>

*Michelle Deng and Shunan Xiang, Amazon Web Services*

## Summary
<a name="migrate-data-from-an-on-premises-oracle-database-to-aurora-postgresql-summary"></a>

This pattern provides guidance for migrating data from an on-premises Oracle database to Amazon Aurora PostgreSQL-Compatible Edition. It targets an online data migration strategy with minimal downtime for multi-terabyte Oracle databases that contain large tables with high data manipulation language (DML) activity. An Oracle Active Data Guard standby database is used as the source to offload the data migration from the primary database. Replication from the Oracle primary database to the standby database can be suspended during the full load to avoid ORA-01555 errors. 
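
On a physical standby that's managed with SQL\*Plus (rather than the Data Guard broker), suspending and resuming replication typically means stopping and restarting managed recovery. The following is a minimal sketch under that assumption; if you manage the configuration with the broker, use the equivalent DGMGRL commands instead.

```bash
# On the standby database: stop redo apply so that the standby remains at a
# fixed SCN while the AWS DMS full load runs.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
EOF

# After the full load completes: resume redo apply so that the standby
# catches up with the primary database.
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF
```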

Table columns in primary keys (PKs) or foreign keys (FKs), with data type NUMBER, are commonly used to store integers in Oracle. We recommend that you convert these to INT or BIGINT in PostgreSQL for better performance. You can use the AWS Schema Conversion Tool (AWS SCT) to change the default data type mapping for PK and FK columns. (For more information, see the AWS blog post [Convert the NUMBER data type from Oracle to PostgreSQL](https://aws.amazon.com/blogs/database/convert-the-number-data-type-from-oracle-to-postgresql-part-2/).) The data migration in this pattern uses AWS Database Migration Service (AWS DMS) for both full load and change data capture (CDC).

You can also use this pattern to migrate an on-premises Oracle database to Amazon Relational Database Service (Amazon RDS) for PostgreSQL, or an Oracle database that's hosted on Amazon Elastic Compute Cloud (Amazon EC2) to either Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible.

## Prerequisites and limitations
<a name="migrate-data-from-an-on-premises-oracle-database-to-aurora-postgresql-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An Oracle source database in an on-premises data center with Active Data Guard standby configured 
+ AWS Direct Connect configured between the on-premises data center and the AWS Cloud
+ Familiarity with [using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html)
+ Familiarity with [using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html)

**Limitations**
+ Amazon Aurora database clusters can be created with up to 128 TiB of storage. Amazon RDS for PostgreSQL database instances can be created with up to 64 TiB of storage. For the latest storage information, see [Amazon Aurora storage and reliability](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.StorageReliability.html) and [Amazon RDS DB instance storage](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html) in the AWS documentation.

**Product versions**
+ AWS DMS supports all Oracle database editions for versions 10.2 and later (for versions 10.x), 11g and up to 12.2, 18c, and 19c. For the latest list of supported versions, see [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) in the AWS documentation. 

## Architecture
<a name="migrate-data-from-an-on-premises-oracle-database-to-aurora-postgresql-architecture"></a>

**Source technology stack**
+ On-premises Oracle databases with Oracle Active Data Guard standby configured 

**Target technology stack**
+ Aurora PostgreSQL-Compatible 

**Data migration architecture**

![\[Migrating an Oracle database to Aurora PostgreSQL-Compatible\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/49f9b03e-6d33-4ac0-94ad-d3e6d02e6d63/images/0038a36b-fb7d-4f2d-8376-8d38290b0736.png)


## Tools
<a name="migrate-data-from-an-on-premises-oracle-database-to-aurora-postgresql-tools"></a>
+ **AWS DMS** - [AWS Database Migration Service](https://docs.aws.amazon.com/dms/index.html) (AWS DMS) supports several source and target databases. See [Using an Oracle Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) in the AWS DMS documentation for a list of supported Oracle database versions and editions. If AWS DMS doesn't support the source database, you must select another method for migrating the data in Phase 6 (in the *Epics* section). **Important note:** Because this is a heterogeneous migration, you must first check whether the database supports a commercial off-the-shelf (COTS) application. If the application is COTS, consult the vendor to confirm that Aurora PostgreSQL-Compatible is supported before proceeding. For more information, see [AWS DMS Step-by-Step Migration Walkthroughs](https://docs.aws.amazon.com/dms/latest/sbs/DMS-SBS-Welcome.html) in the AWS documentation.
+ **AWS SCT** - The [AWS Schema Conversion Tool](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) (AWS SCT) facilitates heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that's compatible with the target database. The custom code that the tool converts includes views, stored procedures, and functions. Any code that the tool cannot convert automatically is clearly marked so that you can convert it yourself. 

## Epics
<a name="migrate-data-from-an-on-premises-oracle-database-to-aurora-postgresql-epics"></a>

### Plan the migration
<a name="plan-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the source and target database versions. |  | DBA | 
| Install AWS SCT and drivers. |  | DBA | 
| Add and validate the AWS SCT prerequisite users and grants-source database. |  | DBA | 
| Create an AWS SCT project for the workload, and connect to the source database. |  | DBA | 
| Generate an assessment report and evaluate feasibility. |  | DBA, App owner | 

### Prepare the target database
<a name="prepare-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Aurora PostgreSQL-Compatible target database. |  | DBA | 
| Extract users, roles, and grants list from the source database. |  | DBA | 
| Map the existing database users to the new database users. |  | App owner | 
| Create users in the target database. |  | DBA | 
| Apply roles from the previous step to the target Aurora PostgreSQL-Compatible database. |  | DBA | 
| Review database options, parameters, network files, and database links from the source database, and evaluate their applicability to the target database. |  | DBA, App owner | 
| Apply any relevant settings to the target database. |  | DBA | 

### Prepare for database object code conversion
<a name="prepare-for-database-object-code-conversion"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS SCT connectivity to the target database. |  | DBA | 
| Convert the schema in AWS SCT, and save the converted code as a .sql file. |  | DBA, App owner | 
| Manually convert any database objects that failed to convert automatically. |  | DBA, App owner | 
| Optimize the database code conversion. |  | DBA, App owner | 
| Separate the .sql file into multiple .sql files based on the object type. |  | DBA, App owner | 
| Validate the SQL scripts in the target database. |  | DBA, App owner | 

### Prepare for data migration
<a name="prepare-for-data-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS DMS replication instance. |  | DBA | 
| Create the source and target endpoints.  | If the data type of the PKs and FKs is converted from NUMBER in Oracle to BIGINT in PostgreSQL, consider specifying the connection attribute `numberDataTypeScale=-2` when you create the source endpoint. | DBA | 
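
A minimal sketch of the source endpoint creation with that connection attribute follows; the identifier, host name, credentials, and database name are placeholders.

```bash
# Oracle source endpoint that sets the numberDataTypeScale extra connection
# attribute, as recommended when NUMBER PK/FK columns are converted to BIGINT.
aws dms create-endpoint \
  --endpoint-identifier oracle-standby-source \
  --endpoint-type source \
  --engine-name oracle \
  --server-name standby.example.com \
  --port 1521 \
  --database-name ORCL \
  --username dms_user \
  --password '<password>' \
  --extra-connection-attributes "numberDataTypeScale=-2"
```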

### Migrate data – full load
<a name="migrate-data-ndash-full-load"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the schema and tables in the target database. |  | DBA | 
|  Create AWS DMS full-load tasks by either grouping tables or splitting a big table based on the table size. |  | DBA | 
| Stop the applications on the source Oracle databases for a short period. |  | App owner | 
| Verify that the Oracle standby database is synchronous with the primary database, and stop the replication from the primary database to the standby database. |  | DBA, App owner | 
| Start applications on the source Oracle database. |  | App owner | 
| Start the AWS DMS full-load tasks in parallel from the Oracle standby database to the Aurora PostgreSQL-Compatible database. |  | DBA | 
| Create PKs and secondary indexes after the full load is complete. |  | DBA | 
| Validate the data. |  | DBA | 
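
A common quick check for the validation task is to compare row counts between the source and the target, table by table. This is a minimal sketch that assumes the SQL\*Plus and psql clients and an example `hr.employees` table; for a more thorough comparison, consider AWS DMS data validation.

```bash
# Row count on the Oracle standby (source); the connection string is a placeholder.
sqlplus -s dms_user@STANDBY <<'EOF'
SELECT COUNT(*) FROM hr.employees;
EOF

# Row count on the Aurora PostgreSQL-Compatible target (placeholder endpoint).
psql -h <aurora-cluster-endpoint> -U dms_user -d hr \
  -c "SELECT COUNT(*) FROM hr.employees;"
```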

### Migrate data – CDC
<a name="migrate-data-ndash-cdc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create AWS DMS ongoing replication tasks by specifying a custom CDC start time or system change number (SCN) when the Oracle standby was synchronized with the primary database, and before the applications were restarted in the previous task. | See the sketch following this table for capturing the SCN and passing it as the CDC start position. | DBA | 
| Start AWS DMS tasks in parallel to replicate ongoing changes from the Oracle standby database to the Aurora PostgreSQL-Compatible database. |  | DBA | 
| Re-establish the replication from the Oracle primary database to the standby database. |  | DBA | 
| Monitor the logs and stop the applications on the Oracle database when the target Aurora PostgreSQL-Compatible database is almost synchronous with the source Oracle database. |  | DBA, App owner | 
| Stop the AWS DMS tasks when the target is fully synchronized with the source Oracle database. |  | DBA | 
| Create FKs and validate the data in the target database. |  | DBA | 
| Create functions, views, triggers, sequences, and other object types in the target database. |  | DBA | 
| Apply role grants in the target database. |  | DBA | 
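
To anchor CDC at the point where the standby was synchronized, you can capture the SCN on the primary before restarting the applications and then pass it when you create the CDC task. A minimal sketch follows; the ARNs are placeholders, and `table-mappings.json` is the mapping file used for the full-load tasks.

```bash
# On the Oracle primary: capture the current SCN before restarting applications.
sqlplus -s "/ as sysdba" <<'EOF'
SELECT current_scn FROM v$database;
EOF

# Create a CDC-only task that starts replication from that SCN.
aws dms create-replication-task \
  --replication-task-identifier oracle-to-aurora-cdc \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type cdc \
  --cdc-start-position "<scn>" \
  --table-mappings file://table-mappings.json
```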

### Migrate the application
<a name="migrate-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Use AWS SCT to analyze and convert the SQL statements inside the application code. |  | App owner | 
| Create new application servers on AWS. |  | App owner | 
| Migrate the application code to the new servers. |  | App owner | 
| Configure the application server for the target database and drivers. |  | App owner | 
| Fix any code that's specific to the source database engine in the application. |  | App owner | 
| Optimize the application code for the target database. |  | App owner | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Point the new application server to the target database. |  | DBA, App owner | 
| Perform sanity checks. |  | DBA, App owner | 
| Go live. |  | DBA, App owner | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down temporary AWS resources. |  | DBA, Systems administrator | 
| Review and validate the project documents. |  | DBA, App owner | 
| Gather metrics for time to migrate, percentage of manual versus tool use, cost savings, and similar data. |  | DBA, App owner | 
| Close out the project and provide feedback. |  | DBA, App owner | 

## Related resources
<a name="migrate-data-from-an-on-premises-oracle-database-to-aurora-postgresql-resources"></a>

**References**
+ [Oracle Database to Aurora PostgreSQL-Compatible: Migration Playbook](https://d1.awsstatic.com/whitepapers/Migration/oracle-database-amazon-aurora-postgresql-migration-playbook.pdf) 
+ [Migrating an Amazon RDS for Oracle Database to Amazon Aurora MySQL](https://docs.aws.amazon.com/dms/latest/sbs/chap-rdsoracle2aurora.html)
+ [AWS DMS website](https://aws.amazon.com/dms/)
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)
+ [AWS SCT website](https://aws.amazon.com/dms/schema-conversion-tool/)
+ [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)
+ [Migrate from Oracle to Amazon Aurora](https://aws.amazon.com/getting-started/projects/migrate-oracle-to-amazon-aurora/)

**Tutorials**
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/) 
+ [Getting Started with Amazon RDS](https://aws.amazon.com/rds/getting-started/)
+ [AWS Database Migration Service Step-by-Step Walkthroughs](https://docs.aws.amazon.com/dms/latest/sbs/dms-sbs-welcome.html)

# Migrate from SAP ASE to Amazon RDS for SQL Server using AWS DMS
<a name="migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms"></a>

*Amit Kumar, Amazon Web Services*

## Summary
<a name="migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms-summary"></a>

This pattern provides guidance for migrating an SAP Adaptive Server Enterprise (ASE) database to an Amazon Relational Database Service (Amazon RDS) DB instance that's running Microsoft SQL Server. The source database can be located in an on-premises data center or on an Amazon Elastic Compute Cloud (Amazon EC2) instance. The pattern uses AWS Database Migration Service (AWS DMS) to migrate data and (optionally) computer-aided software engineering (CASE) tools to convert the database schema. 

## Prerequisites and limitations
<a name="migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An SAP ASE database in an on-premises data center or on an EC2 instance
+ A target Amazon RDS for SQL Server database that’s up and running

**Limitations**
+ Database size limit: 64 TB

**Product versions**
+ SAP ASE version 15.7 or 16.x only. For the latest information, see [Using an SAP ASE database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html).
+ For Amazon RDS target databases, AWS DMS supports [Microsoft SQL Server versions on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html#SQLServer.Concepts.General.VersionSupport) for the Enterprise, Standard, Web, and Express editions. For the latest information about supported versions, see the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.SQLServer.html). We recommend that you use the latest version of AWS DMS for the most comprehensive version and feature support.  

## Architecture
<a name="migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms-architecture"></a>

**Source technology stack**
+ An SAP ASE database that's on premises or on an Amazon EC2 instance

**Target technology stack**
+ An Amazon RDS for SQL Server DB instance

**Source and target architecture**

*From an SAP ASE database on Amazon EC2 to an Amazon RDS for SQL Server DB instance:*

![\[Target architecture for SAP ASE on Amazon EC2 to Amazon RDS for SQL Server\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5ca697a2-9ca3-4231-b457-c1dc59ada5f1/images/957bdcf0-ab58-4b6d-a71a-d0ecbc31822c.png)


*From an on-premises SAP ASE database to an Amazon RDS for SQL Server DB instance:*

![\[Target architecture for on-premises SAP ASE to Amazon RDS for SQL Server\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/5ca697a2-9ca3-4231-b457-c1dc59ada5f1/images/65aab2f5-0e63-4c34-97e2-cd4ac23751a4.png)


## Tools
<a name="migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms-tools"></a>
+ [AWS Database Migration Service](https://docs.aws.amazon.com/dms/) (AWS DMS) is a web service that you can use to migrate data from a database that's on premises, on an Amazon RDS DB instance, or on an EC2 instance to a database on an AWS service, such as Amazon RDS for SQL Server or a database on an EC2 instance. You can also migrate a database from an AWS service to an on-premises database. You can migrate data between heterogeneous or homogeneous database engines.
+ For schema conversions, you can optionally use [erwin Data Modeler](https://erwin.com/products/erwin-data-modeler/) or [SAP PowerDesigner](https://www.sap.com/products/technology-platform/powerdesigner-data-modeling-tools.html).

## Epics
<a name="migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms-epics"></a>

### Plan the migration
<a name="plan-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the source and target database versions. |  | DBA | 
| Identify the storage requirements (storage type and capacity). |  | DBA, SysAdmin | 
| Choose the proper instance type based on capacity, storage features, and network features. |  | DBA, SysAdmin | 
| Identify the network access security requirements for the source and target databases. |  | DBA, SysAdmin | 
| Identify the application migration strategy. |  | DBA, SysAdmin, App owner | 

### Configure the infrastructure
<a name="configure-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual private cloud (VPC) and subnets. |  | SysAdmin | 
| Create security groups and network access control lists (ACLs). |  | SysAdmin | 
| Configure and start an Amazon RDS DB instance. |  | SysAdmin | 

### Migrate data - option 1
<a name="migrate-data---option-1"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Migrate the database schema manually or use a CASE tool such as erwin Data Modeler or SAP PowerDesigner. |  | DBA | 

### Migrate data - option 2
<a name="migrate-data---option-2"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Migrate data using AWS DMS. |  | DBA | 

### Migrate the application
<a name="migrate-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the application migration strategy. |  | DBA, SysAdmin, App owner | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Switch the application clients over to the new infrastructure. |  | DBA, SysAdmin, App owner | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down the temporary AWS resources. |  | DBA, SysAdmin | 
| Review and validate the project documents. |  | DBA, SysAdmin, App owner | 
| Gather metrics such as time to migrate, percentage of manual versus automated tasks, and cost savings. |  | DBA, SysAdmin, App owner | 
| Close out the project and provide feedback. |  | DBA, SysAdmin, App owner | 

## Related resources
<a name="migrate-from-sap-ase-to-amazon-rds-for-sql-server-using-aws-dms-resources"></a>

**References**
+ [AWS DMS website](https://aws.amazon.com/dms/)
+ [Amazon RDS Pricing](https://aws.amazon.com/rds/pricing/)
+ [Using a SAP ASE Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html)
+ [Limitations for RDS Custom for SQL Server](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/custom-reqs-limits-MS.html)

**Tutorials and videos**
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/)
+ [Getting Started with Amazon RDS](https://aws.amazon.com/rds/getting-started/)
+ [AWS DMS (video)](https://www.youtube.com/watch?v=zb4GcjEdl8U) 
+ [Amazon RDS (video)](https://www.youtube.com/watch?v=igRfulrrYCo) 

# Migrate an on-premises Microsoft SQL Server database to Amazon Redshift using AWS DMS
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-dms"></a>

*Marcelo Fernandes, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-dms-summary"></a>

This pattern provides guidance for migrating an on-premises Microsoft SQL Server database to Amazon Redshift by using AWS Database Migration Service (AWS DMS). 

## Prerequisites and limitations
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-dms-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source Microsoft SQL Server database in an on-premises data center
+ Completed prerequisites for using an Amazon Redshift database as a target for AWS DMS, as discussed in the [AWS DMS documentation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html#CHAP_Target.Redshift.Prerequisites)

**Product versions**
+ SQL Server 2005-2019, Enterprise, Standard, Workgroup, Developer, and Web editions. For the latest list of supported versions, see [Using a Microsoft SQL Server Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html) in the AWS documentation. 

## Architecture
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-dms-architecture"></a>

**Source technology stack**
+ An on-premises Microsoft SQL Server database 

**Target technology stack**
+ Amazon Redshift

**Data migration architecture**


![\[Architecture for migrating an on-premises SQL Server database to Amazon Redshift using AWS DMS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/65b2be1b-740e-4d4d-99a8-f77c4ea6553d/images/3a094bf2-be31-4d83-8dd2-9dc078321055.png)


## Tools
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-dms-tools"></a>
+ [AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) is a data migration service that supports several types of source and target databases. For information about the Microsoft SQL Server database versions and editions that are supported for use with AWS DMS, see [Using a Microsoft SQL Server Database as a Source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html) in the AWS DMS documentation. If AWS DMS doesn't support your source database, you must select an alternative method for data migration.

## Epics
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-dms-epics"></a>

### Plan the migration
<a name="plan-the-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the source and target database versions and engines. |  | DBA | 
| Identify the hardware requirements for the target server instance. |  | DBA, Systems administrator | 
| Identify the storage requirements (storage type and capacity). |  | DBA, Systems administrator | 
| Choose the proper instance type based on capacity, storage features, and network features. |  | DBA, Systems administrator | 
| Identify the network access security requirements for the source and target databases. |  | DBA, Systems administrator | 
| Identify the application migration strategy. |  | DBA, App owner, Systems administrator | 

### Configure the infrastructure
<a name="configure-the-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual private cloud (VPC). | For more information, see [Working with a DB instance in a VPC](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html) in the AWS documentation. | Systems administrator | 
| Create security groups. |  | Systems administrator | 
| Configure and start an Amazon Redshift cluster. | For more information, see [Create a sample Amazon Redshift cluster](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-launch-sample-cluster.html) in the Amazon Redshift documentation. | DBA, Systems administrator | 

### Migrate data
<a name="migrate-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Migrate the data from the Microsoft SQL Server database by using AWS DMS. |  | DBA | 
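
The AWS DMS building blocks for this task follow the same pattern used elsewhere in this guide: a replication instance, a source endpoint, a target endpoint, and a replication task. A minimal sketch of the two endpoints follows, with placeholder connection details.

```bash
# Microsoft SQL Server source endpoint (placeholder connection details).
aws dms create-endpoint \
  --endpoint-identifier sqlserver-source \
  --endpoint-type source \
  --engine-name sqlserver \
  --server-name sqlserver.example.com \
  --port 1433 \
  --database-name SalesDB \
  --username dms_user \
  --password '<password>'

# Amazon Redshift target endpoint (placeholder cluster endpoint).
aws dms create-endpoint \
  --endpoint-identifier redshift-target \
  --endpoint-type target \
  --engine-name redshift \
  --server-name my-cluster.abc123.us-east-1.redshift.amazonaws.com \
  --port 5439 \
  --database-name dev \
  --username dms_user \
  --password '<password>'
```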

### Migrate the application
<a name="migrate-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the application migration strategy. |  | DBA, App owner, Systems administrator | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Switch the application clients over to the new infrastructure. |  | DBA, App owner, Systems administrator | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down the temporary resources. |  | DBA, Systems administrator | 
| Review and validate the project documents. |  | DBA, App owner, Systems administrator | 
| Gather metrics such as time to migrate, percentage of manual versus automated tasks, and cost savings. |  | DBA, App owner, Systems administrator | 
| Close out the project and provide feedback. |  | DBA, App owner, Systems administrator | 

## Related resources
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-dms-resources"></a>

**References**
+ [AWS DMS documentation](https://docs.aws.amazon.com/dms/index.html)
+ [Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/)
+ [Amazon Redshift Pricing](https://aws.amazon.com/redshift/pricing/)

**Tutorials and videos**
+ [Getting Started with AWS DMS](https://aws.amazon.com/dms/getting-started/)
+ [Getting Started with Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html)
+ [Using an Amazon Redshift database as a target for AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Redshift.html)
+ [AWS DMS (video)](https://www.youtube.com/watch?v=zb4GcjEdl8U) 

# Migrate an on-premises Microsoft SQL Server database to Amazon Redshift using AWS SCT data extraction agents
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents"></a>

*Neha Thakur, Amazon Web Services*

## Summary
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents-summary"></a>

This pattern outlines steps for migrating an on-premises Microsoft SQL Server source database to an Amazon Redshift target database by using AWS Schema Conversion Tool (AWS SCT) data extraction agents. An agent is an external program that is integrated with AWS SCT but performs data transformation elsewhere and interacts with other AWS services on your behalf.   

## Prerequisites and limitations
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents-prereqs"></a>

**Prerequisites**
+ A Microsoft SQL Server source database used for the data warehouse workload in an on-premises data center
+ An active AWS account

**Product versions**
+ Microsoft SQL Server version 2008 or later. For the latest list of supported versions, see [AWS SCT documentation](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html). 

## Architecture
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents-architecture"></a>

**Source technology stack**
+ An on-premises Microsoft SQL Server database

**Target technology stack**
+ Amazon Redshift

**Data migration architecture**

![\[Migrating a SQL Server database to Amazon Redshift by using AWS SCT data extraction agents.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/6975f67a-0705-47b4-a1b8-90aaa2597a04/images/dbff958b-7601-442e-9e23-4d07edd0ccfd.png)


## Tools
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents-tools"></a>
+ [AWS Schema Conversion Tool](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html) (AWS SCT) handles heterogeneous database migrations by automatically converting the source database schema and a majority of the custom code to a format that's compatible with the target database. When the source and target databases are very different, you can use an AWS SCT agent to perform additional data transformation. For more information, see [Migrating Data from an On-Premises Data Warehouse to Amazon Redshift](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.dw.html) in the AWS documentation.

## Best practices
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents-best-practices"></a>
+ [Best practices for AWS SCT](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_BestPractices.html)
+ [Best practices for Amazon Redshift ](https://docs.aws.amazon.com/redshift/latest/dg/best-practices.html)

## Epics
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents-epics"></a>

### Prepare for migration
<a name="prepare-for-migration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the source and target database versions and engines. |  | DBA | 
| Identify hardware requirements for the target server instance. |  | DBA, SysAdmin | 
| Identify storage requirements (storage type and capacity). |  | DBA, SysAdmin | 
| Choose the proper instance type (capacity, storage features, network features). |  | DBA, SysAdmin | 
| Identify network access security requirements for the source and target databases. |  | DBA, SysAdmin | 
| Choose an application migration strategy. |  | DBA, SysAdmin, App owner | 

### Configure infrastructure
<a name="configure-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a virtual private cloud (VPC) and subnets. |  | SysAdmin | 
| Create security groups. |  | SysAdmin | 
| Configure and start the Amazon Redshift cluster. |  | SysAdmin | 

### Migrate data
<a name="migrate-data"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Migrate the data using the AWS SCT data extraction agents. |  | DBA | 

### Migrate applications
<a name="migrate-applications"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Follow the chosen application migration strategy. |  | DBA, SysAdmin, App owner | 

### Cut over to the target database
<a name="cut-over-to-the-target-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Switch application clients over to the new infrastructure. |  | DBA, SysAdmin, App owner | 

### Close the project
<a name="close-the-project"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Shut down temporary AWS resources. |  | DBA, SysAdmin | 
| Review and validate the project documents. |  | DBA, SysAdmin, App owner | 
| Gather metrics such as time to migrate, percentage of manual versus automated tasks, and cost savings. |  | DBA, SysAdmin, App owner | 
| Close the project and provide any feedback. |  | DBA, SysAdmin, App owner | 

## Related resources
<a name="migrate-an-on-premises-microsoft-sql-server-database-to-amazon-redshift-using-aws-sct-data-extraction-agents-resources"></a>

**References**
+ [AWS SCT User Guide](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Welcome.html)
+ [Using Data Extraction Agents](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.html)
+ [Amazon Redshift Pricing](https://aws.amazon.com/redshift/pricing/)

**Tutorials and videos**
+ [Getting Started with the AWS Schema Conversion Tool](https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_GettingStarted.html)
+ [Getting Started with Amazon Redshift](http://docs.aws.amazon.com/redshift/latest/gsg/getting-started.html)

# Migrate legacy applications from Oracle Pro\*C to ECPG
<a name="migrate-legacy-applications-from-oracle-pro-c-to-ecpg"></a>

*Sai Parthasaradhi and Mahesh Balumuri, Amazon Web Services*

## Summary
<a name="migrate-legacy-applications-from-oracle-pro-c-to-ecpg-summary"></a>

Most legacy applications that have embedded SQL code use the Oracle Pro\*C precompiler to access the database. When you migrate these Oracle databases to Amazon Relational Database Service (Amazon RDS) for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition, you have to convert your application code to a format that’s compatible with the precompiler in PostgreSQL, which is called ECPG. This pattern describes how to convert Oracle Pro\*C code to its equivalent in PostgreSQL ECPG. 

For more information about Pro\*C, see the [Oracle documentation](https://docs.oracle.com/cd/E11882_01/appdev.112/e10825/pc_01int.htm#i2415). For a brief introduction to ECPG, see the [Additional information](#migrate-legacy-applications-from-oracle-pro-c-to-ecpg-additional) section.

## Prerequisites and limitations
<a name="migrate-legacy-applications-from-oracle-pro-c-to-ecpg-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible database
+ An Oracle database running on premises

## Tools
<a name="migrate-legacy-applications-from-oracle-pro-c-to-ecpg-tools"></a>
+ The PostgreSQL packages listed in the next section.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – The AWS Command Line Interface (AWS CLI) is an open-source tool for interacting with AWS services through commands in your command-line shell. With minimal configuration, you can run AWS CLI commands that provide functionality equivalent to the browser-based AWS Management Console.

## Epics
<a name="migrate-legacy-applications-from-oracle-pro-c-to-ecpg-epics"></a>

### Set up the build environment on CentOS or RHEL
<a name="set-the-build-environment-on-centos-or-rhel"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install PostgreSQL packages. | Install the required PostgreSQL packages by using the following commands.<pre>yum update -y<br />yum install -y yum-utils<br />rpm -ivh https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm<br />dnf -qy module disable postgresql</pre> | App developer, DevOps engineer | 
| Install the header files and libraries. | Install the `postgresql12-devel` package, which contains header files and libraries, by using the following commands. Install the package in both the development and the runtime environments to avoid errors in the runtime environment.<pre>dnf -y install postgresql12-devel<br />yum install ncompress zip ghostscript jq unzip wget git -y</pre>For the development environment only, also run the following commands.<pre>yum install zlib-devel make -y<br />ln -s /usr/pgsql-12/bin/ecpg /usr/bin/</pre> | App developer, DevOps engineer | 
| Configure the environment path variable. | Set the environment path for PostgreSQL client libraries.<pre>export PATH=$PATH:/usr/pgsql-12/bin</pre> | App developer, DevOps engineer | 
| Install additional software as necessary. | If required, install **pgLoader** as a replacement for **SQL\*Loader** in Oracle.<pre>wget -O /etc/yum.repos.d/pgloader-ccl.repo https://dl.packager.io/srv/opf/pgloader-ccl/master/installer/el/7.repo<br />yum install pgloader-ccl -y<br />ln -s /opt/pgloader-ccl/bin/pgloader /usr/bin/</pre>If you’re calling any Java applications from Pro\*C modules, install Java.<pre>yum install java -y</pre>Install **ant** to compile the Java code.<pre>yum install ant -y</pre> | App developer, DevOps engineer | 
| Install the AWS CLI. | Install the AWS CLI to run commands to interact with AWS services such as AWS Secrets Manager and Amazon Simple Storage Service (Amazon S3) from your applications.<pre>cd /tmp/<br />curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"<br />unzip awscliv2.zip<br />./aws/install -i /usr/local/aws-cli -b /usr/local/bin --update</pre> | App developer, DevOps engineer | 
| Identify the programs to be converted. | Identify the applications that you want to convert from Pro\*C to ECPG. | App developer, App owner | 

### Convert Pro\*C code to ECPG
<a name="convert-pro-c-code-to-ecpg"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove unwanted headers. | Remove the `include` headers that are not required in PostgreSQL, such as `oci.h`, `oratypes.h`, and `sqlda.h`. | App owner, App developer | 
| Update variable declarations. | Add `EXEC SQL` statements for all variable declarations that are used as host variables. Remove `EXEC SQL VAR` declarations such as the following from your application.<pre>EXEC SQL VAR query IS STRING(2048);</pre> | App developer, App owner | 
| Update ROWNUM functionality. | The Oracle `ROWNUM` pseudocolumn isn’t available in PostgreSQL. Replace it with the `ROW_NUMBER()` window function in SQL queries. Pro\*C code:<pre>SELECT SUBSTR(RTRIM(FILE_NAME,'.txt'),12) INTO :gcpclFileseq  <br />FROM   (SELECT FILE_NAME <br />FROM  DEMO_FILES_TABLE <br />WHERE FILE_NAME    LIKE '%POC%' <br />ORDER BY FILE_NAME DESC) FL2 <br />WHERE ROWNUM <=1 ORDER BY ROWNUM;</pre>ECPG code:<pre>SELECT SUBSTR(RTRIM(FILE_NAME,'.txt'),12) INTO :gcpclFileseq  <br />FROM   (SELECT FILE_NAME , ROW_NUMBER() OVER (ORDER BY FILE_NAME DESC) AS ROWNUM<br />FROM  demo_schema.DEMO_FILES_TABLE <br />WHERE FILE_NAME    LIKE '%POC%'<br />ORDER BY FILE_NAME DESC) FL2 <br />WHERE ROWNUM <=1 ORDER BY ROWNUM; </pre> | App developer, App owner | 
| Update function parameters to use alias variables. | In PostgreSQL, function parameters can’t be used as host variables. Copy each parameter into an alias host variable and use the alias instead. Pro\*C code:<pre>int processData(int referenceId){<br />  EXEC SQL char col_val[100];<br />  EXEC SQL select column_name INTO :col_val from table_name where col=:referenceId;<br />}</pre>ECPG code:<pre>int processData(int referenceIdParam){<br />  EXEC SQL int referenceId = referenceIdParam;<br />  EXEC SQL char col_val[100];<br />  EXEC SQL select column_name INTO :col_val from table_name where col=:referenceId;<br />}</pre> | App developer, App owner | 
| Update struct types. | Define `struct` types in `EXEC SQL BEGIN DECLARE SECTION` and `EXEC SQL END DECLARE SECTION` blocks with `typedef` if the `struct` type variables are used as host variables. If the `struct` types are defined in header (`.h`) files, include the files with `EXEC SQL include` statements. Pro\*C header file (`demo.h`):<pre>struct s_partition_ranges<br />{<br /> char   sc_table_group[31];<br /> char   sc_table_name[31];<br /> char   sc_range_value[10];<br />}; <br />struct s_partition_ranges_ind<br />{<br />  short    ss_table_group;<br />  short    ss_table_name;<br />  short    ss_range_value;<br />}; </pre>ECPG header file (`demo.h`):<pre>EXEC SQL BEGIN DECLARE SECTION;<br />typedef struct <br />{<br />  char   sc_table_group[31];<br />  char   sc_table_name[31];<br />  char   sc_range_value[10];<br />} s_partition_ranges; <br />typedef struct <br />{<br />  short    ss_table_group;<br />  short    ss_table_name;<br />  short    ss_range_value;<br />} s_partition_ranges_ind; <br />EXEC SQL END DECLARE SECTION;</pre>Pro\*C file (`demo.pc`):<pre>#include "demo.h"<br />struct s_partition_ranges gc_partition_data[MAX_PART_TABLE] ;<br />struct s_partition_ranges_ind gc_partition_data_ind[MAX_PART_TABLE] ;</pre>ECPG file (`demo.pc`):<pre>exec sql include "demo.h";<br />EXEC SQL BEGIN DECLARE SECTION;<br />s_partition_ranges gc_partition_data[MAX_PART_TABLE] ;<br />s_partition_ranges_ind gc_partition_data_ind[MAX_PART_TABLE] ;<br />EXEC SQL END DECLARE SECTION;</pre> | App developer, App owner | 
| Modify logic to fetch from cursors. | To fetch multiple rows from cursors by using array variables, change the code to use `FETCH FORWARD`. Pro\*C code:<pre>EXEC SQL char  aPoeFiles[MAX_FILES][FILENAME_LENGTH];<br />EXEC SQL FETCH filename_cursor into :aPoeFiles;</pre>ECPG code:<pre>EXEC SQL char  aPoeFiles[MAX_FILES][FILENAME_LENGTH];<br />EXEC SQL int fetchSize = MAX_FILES;<br />EXEC SQL FETCH FORWARD :fetchSize filename_cursor into :aPoeFiles;</pre> | App developer, App owner | 
| Modify package calls that don't have return values. | Call Oracle package procedures that don’t return values by using an indicator variable. If your application includes multiple functions that have the same name, or if functions of unknown type generate runtime errors, typecast the arguments to the expected data types. Pro\*C code:<pre>void ProcessData (char *data , int id)<br />{        <br />        EXEC SQL EXECUTE<br />               BEGIN<br />                  pkg_demo.process_data (:data, :id);                                                                                    <br />               END;<br />       END-EXEC;<br />}</pre>ECPG code:<pre>void ProcessData (char *dataParam, int idParam )<br />{<br />        EXEC SQL char *data = dataParam;<br />        EXEC SQL int id = idParam;<br />        EXEC SQL short rowInd = 0;<br />        EXEC SQL SELECT pkg_demo.process_data (<br />                       inp_data => :data::text,<br />                       inp_id => :id<br />               ) INTO :rowInd;<br />}</pre> | App developer, App owner | 
| Rewrite SQL\_CURSOR variables. | Rewrite the `SQL_CURSOR` variable and its implementation. Pro\*C code:<pre>/* SQL Cursor */<br />SQL_CURSOR      demo_cursor;<br />EXEC SQL ALLOCATE :demo_cursor;<br />EXEC SQL EXECUTE<br />  BEGIN<br />      pkg_demo.get_cursor(     <br />        demo_cur=>:demo_cursor<br />      );<br />  END;<br />END-EXEC;</pre>ECPG code:<pre>EXEC SQL DECLARE demo_cursor CURSOR FOR SELECT<br />         * from<br />    pkg_demo.open_filename_rc(<br />            demo_cur=>refcursor<br />          ) ;<br />EXEC SQL char open_filename_rcInd[100]; <br />/* The following function returns the cursor name, <br />   so use a char[] indicator variable. */<br />EXEC SQL SELECT pkg_demo.get_cursor (<br />        demo_cur=>'demo_cursor'<br />    ) INTO :open_filename_rcInd;</pre> | App developer, App owner | 
| Apply common migration patterns. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-legacy-applications-from-oracle-pro-c-to-ecpg.html) | App developer, App owner | 
| Enable debugging, if required.  | To run the ECPG program in debug mode, add the following command inside the main function block.<pre>ECPGdebug(1, stderr); </pre> | App developer, App owner | 

### Compile ECPG programs
<a name="compile-ecpg-programs"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an executable file for ECPG. | If you have an embedded SQL C source file named `prog1.pgc`, you can create an executable program by using the following sequence of commands.<pre>ecpg prog1.pgc<br />cc -I/usr/local/pgsql/include -c prog1.c<br />cc -o prog1 prog1.o -L/usr/local/pgsql/lib -lecpg</pre> | App developer, App owner | 
| Create a make file for compilation. | Create a make file to compile the ECPG program, as shown in the following sample file.<pre>CFLAGS ::= $(CFLAGS) -I/usr/pgsql-12/include -g -Wall<br />LDFLAGS ::= $(LDFLAGS) -L/usr/pgsql-12/lib -Wl,-rpath,/usr/pgsql-12/lib<br />LDLIBS ::= $(LDLIBS) -lecpg<br />PROGRAMS = test <br />.PHONY: all clean<br />%.c: %.pgc<br />      ecpg $<<br />all: $(PROGRAMS)<br />clean:<br />    rm -f $(PROGRAMS) $(PROGRAMS:%=%.c) $(PROGRAMS:%=%.o)</pre> | App developer, App owner | 

### Test the application
<a name="test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the code. | Test the converted application code to make sure that it functions correctly. | App developer, App owner, Test engineer | 

## Related resources
<a name="migrate-legacy-applications-from-oracle-pro-c-to-ecpg-resources"></a>
+ [ECPG - Embedded SQL in C](https://www.postgresql.org/docs/current/static/ecpg.html) (PostgreSQL documentation)
+ [Error Handling](https://www.postgresql.org/docs/12/ecpg-errors.html) (PostgreSQL documentation)
+ [Why Use the Oracle Pro\*C/C++ Precompiler](https://docs.oracle.com/cd/E11882_01/appdev.112/e10825/pc_01int.htm#i2415) (Oracle documentation)

## Additional information
<a name="migrate-legacy-applications-from-oracle-pro-c-to-ecpg-additional"></a>

PostgreSQL has an embedded SQL precompiler, ECPG, which is equivalent to the Oracle Pro\$1C precompiler. ECPG converts C programs that have embedded SQL statements to standard C code by replacing the SQL calls with special function calls. The output files can then be processed with any C compiler tool chain.
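
For orientation, the following is a minimal sketch of an embedded SQL C (`.pgc`) source file before it's run through ECPG. The database name `testdb` and the catalog query are illustrative assumptions, not part of this pattern.

```
/* minimal.pgc - a minimal embedded SQL C program (illustrative sketch) */
#include <stdio.h>

EXEC SQL INCLUDE sqlca;

int main(void)
{
    EXEC SQL BEGIN DECLARE SECTION;
    int row_count = 0;
    EXEC SQL END DECLARE SECTION;

    /* Connect to a database named testdb (an assumption for this sketch) */
    EXEC SQL CONNECT TO testdb;

    /* ECPG replaces this statement with calls into ecpglib */
    EXEC SQL SELECT count(*) INTO :row_count FROM pg_catalog.pg_class;

    printf("catalog entries: %d\n", row_count);

    EXEC SQL DISCONNECT ALL;
    return 0;
}
```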

**Input and output files**

ECPG converts each input file that you specify on the command line into a corresponding C output file. If an input file name has no extension, `.pgc` is assumed. The `.pgc` extension is replaced by `.c` to construct the output file name. However, you can override the default output file name by using the `-o` option.

If you use a dash (`-`) as the input file name, ECPG reads the program from standard input and writes to standard output, unless you override that by using the `-o` option.
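
For example, the following invocations are a sketch that reuses the `prog1.pgc` file name from the compilation epic.

```
ecpg prog1.pgc                  # writes prog1.c next to the input file
ecpg -o converted.c prog1.pgc   # overrides the default output file name
ecpg -o - prog1.pgc             # sends the generated C code to standard output
```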

**Header files**

When the C compiler compiles the preprocessed C code files, it looks for the ECPG header files in the PostgreSQL `include` directory. Therefore, you might have to use the `-I` option to point the compiler to the correct directory (for example, `-I/usr/local/pgsql/include`).

**Libraries**

Programs that use C code with embedded SQL have to be linked against the `libecpg` library. For example, you can use the linker options `-L/usr/local/pgsql/lib -lecpg`.
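
If your PostgreSQL installation doesn't live under `/usr/local/pgsql`, one option is to derive the directories from `pg_config`. This sketch assumes that `pg_config` is on your `PATH`.

```
cc -I"$(pg_config --includedir)" -c prog1.c
cc -o prog1 prog1.o -L"$(pg_config --libdir)" -lecpg
```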

Converted ECPG applications call functions in the `libpq` library through the embedded SQL library (`ecpglib`), and communicate with the PostgreSQL server by using the standard frontend/backend protocol.

# Migrate virtual generated columns from Oracle to PostgreSQL
<a name="migrate-virtual-generated-columns-from-oracle-to-postgresql"></a>

*Veeranjaneyulu Grandhi, Rajesh Madiwale, and Ramesh Pathuri, Amazon Web Services*

## Summary
<a name="migrate-virtual-generated-columns-from-oracle-to-postgresql-summary"></a>

In version 11 and earlier, PostgreSQL doesn’t provide a feature that is directly equivalent to an Oracle virtual column. Handling virtual generated columns while migrating from Oracle Database to PostgreSQL version 11 or earlier is difficult for two reasons: 
+ Virtual columns aren’t visible during migration.
+ Generated columns (the `GENERATED ALWAYS AS` clause) aren't supported before PostgreSQL version 12.

However, there are workarounds to emulate similar functionality. When you use AWS Database Migration Service (AWS DMS) to migrate data from Oracle Database to PostgreSQL version 11 and earlier, you can use trigger functions to populate the values in virtual generated columns. This pattern provides examples of Oracle Database and PostgreSQL code that you can use for this purpose. On AWS, you can use Amazon Relational Database Service (Amazon RDS) for PostgreSQL or Amazon Aurora PostgreSQL-Compatible Edition for your PostgreSQL database.

Starting with PostgreSQL version 12, generated columns are supported. A generated column is computed from other columns in the same table; PostgreSQL 12 supports stored generated columns, which are computed and stored when a row is written. [PostgreSQL generated columns](https://www.postgresql.org/docs/12/ddl-generated-columns.html) are similar to Oracle virtual columns.
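
For example, on PostgreSQL 12 or later, the Oracle table that's used later in this pattern could be declared with a stored generated column directly, and no trigger workaround would be needed. The following is a minimal sketch.

```
CREATE TABLE test.generated_column (
    code   integer,
    status varchar(12) DEFAULT 'PreOpen',
    -- PostgreSQL 12+ computes and stores the value automatically
    flag   char(1) GENERATED ALWAYS AS (
        CASE UPPER(status) WHEN 'OPEN' THEN 'N' ELSE 'Y' END
    ) STORED
);
```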

## Prerequisites and limitations
<a name="migrate-virtual-generated-columns-from-oracle-to-postgresql-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A source Oracle database 
+ Target PostgreSQL databases (on Amazon RDS for PostgreSQL or Aurora PostgreSQL-Compatible)
+ [PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) coding expertise

**Limitations**
+ Applies only to PostgreSQL versions before version 12. 
+ Applies to Oracle Database version 11g or later.
+ Virtual columns are not supported by data migration tools.
+ A virtual column expression can refer only to columns that are defined in the same table.
+ If a virtual generated column refers to a deterministic user-defined function, the column can't be used as a partitioning key column.
+ The output of the expression must be a scalar value. It can't return an Oracle-supplied data type, a user-defined type, `LOB`, or `LONG RAW`.
+ Indexes defined on virtual columns are equivalent to function-based indexes in PostgreSQL.
+ Table statistics must be gathered.

## Tools
<a name="migrate-virtual-generated-columns-from-oracle-to-postgresql-tools"></a>
+ [pgAdmin 4](https://www.pgadmin.org/) is an open source management tool for PostgreSQL. This tool provides a graphical interface that simplifies the creation, maintenance, and use of database objects.
+ [Oracle SQL Developer](https://www.oracle.com/database/sqldeveloper/) is a free, integrated development environment for working with SQL in Oracle databases in both traditional and cloud deployments. 

## Epics
<a name="migrate-virtual-generated-columns-from-oracle-to-postgresql-epics"></a>

### Create source and target database tables
<a name="create-source-and-target-database-tables"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a source Oracle Database table. | In Oracle Database, create a table with a virtual generated column by using the following statement.<pre>CREATE TABLE test.generated_column<br />( CODE NUMBER,<br />STATUS VARCHAR2(12) DEFAULT 'PreOpen',<br />FLAG CHAR(1) GENERATED ALWAYS AS (CASE UPPER(STATUS) WHEN 'OPEN' THEN 'N' ELSE 'Y' END) VIRTUAL VISIBLE<br />);</pre>In this source table, the data in the `STATUS` column is migrated through AWS DMS to the target database. The `FLAG` column, however, is populated by the `GENERATED ALWAYS AS ... VIRTUAL` clause, so this column isn’t visible to AWS DMS during migration. To reproduce the generated-column behavior, you must use a trigger and a function in the target database to populate the values in the `FLAG` column, as shown in the next epic. | DBA, App developer | 
| Create a target PostgreSQL table on AWS. | Create a PostgreSQL table on AWS by using the following statement.<pre>CREATE TABLE test.generated_column<br />(<br />    code integer not null,<br />    status character varying(12) not null ,<br />    flag character(1)<br />);</pre>In this table, the `status` column is a standard column. The `flag` column will be a generated column based on the data in the `status` column. | DBA, App developer | 

### Create a trigger function to handle the virtual column in PostgreSQL
<a name="create-a-trigger-function-to-handle-the-virtual-column-in-postgresql"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a PostgreSQL trigger. | In PostgreSQL, create a trigger. (Create the trigger function from the next task first, because PostgreSQL requires the function to exist when the trigger is created.)<pre>CREATE TRIGGER tgr_gen_column<br />AFTER INSERT OR UPDATE OF status ON test.generated_column<br />FOR EACH ROW <br />EXECUTE FUNCTION test.tgf_gen_column();</pre> | DBA, App developer | 
| Create a PostgreSQL trigger function. | In PostgreSQL, create a function for the trigger. This function populates the virtual column when a row is inserted or updated by the application or AWS DMS, and prevents direct writes to the column. (The `OLD` record is not assigned for `INSERT` operations, so the `INSERT` and `UPDATE` paths are handled separately.)<pre>CREATE OR REPLACE FUNCTION test.tgf_gen_column() RETURNS trigger AS $VIRTUAL_COL$<br />BEGIN<br />IF (TG_OP = 'INSERT') THEN<br />IF (NEW.flag IS NOT NULL) THEN<br />RAISE EXCEPTION 'ERROR: cannot insert into column "flag"' USING DETAIL = 'Column "flag" is a generated column.';<br />END IF;<br />UPDATE test.generated_column<br />SET flag = (CASE UPPER(status) WHEN 'OPEN' THEN 'N' ELSE 'Y' END)<br />WHERE code = NEW.code;<br />END IF;<br />IF (TG_OP = 'UPDATE') THEN<br />IF (NEW.flag::varchar != OLD.flag::varchar) THEN<br />RAISE EXCEPTION 'ERROR: cannot update column "flag"' USING DETAIL = 'Column "flag" is a generated column.';<br />END IF;<br />IF (OLD.flag IS NULL) OR (coalesce(OLD.status,'') != coalesce(NEW.status,'')) THEN<br />UPDATE test.generated_column<br />SET flag = (CASE UPPER(status) WHEN 'OPEN' THEN 'N' ELSE 'Y' END)<br />WHERE code = NEW.code;<br />END IF;<br />END IF;<br />RETURN NEW;<br />END<br />$VIRTUAL_COL$ LANGUAGE plpgsql;</pre> | DBA, App developer | 
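
Before you run AWS DMS, you can smoke-test the trigger manually. The following sketch follows from the `CASE` expression above: `OPEN` maps to `N`, and anything else maps to `Y`.

```
INSERT INTO test.generated_column (code, status) VALUES (1, 'Open'), (2, 'Closed');

-- The trigger has populated flag: expect 'N' for code 1 and 'Y' for code 2
SELECT code, status, flag FROM test.generated_column ORDER BY code;

-- Direct writes to flag are rejected by the trigger function
INSERT INTO test.generated_column (code, status, flag) VALUES (3, 'Open', 'N');
```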

### Test data migration by using AWS DMS
<a name="test-data-migration-by-using-aws-dms"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a replication instance. | To create a replication instance, follow the [instructions](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Creating.html) in the AWS DMS documentation. The replication instance should be in the same virtual private cloud (VPC) as your source and target databases. | DBA, App developer | 
| Create source and target endpoints. | To create the endpoints, follow the [instructions](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.Creating.html) in the AWS DMS documentation. | DBA, App developer | 
| Test the endpoint connections. | You can test the endpoint connections by specifying the VPC and replication instance and choosing **Run test**. | DBA, App developer | 
| Create and start a full load task. | For instructions, see [Creating a Task](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.Creating.html) and [Full-load task settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.FullLoad.html) in the AWS DMS documentation. | DBA, App developer | 
| Validate the data for the virtual column. | Compare the data in the virtual column in the source and target databases. You can validate the data manually or write a script for this step. | DBA, App developer | 

## Related resources
<a name="migrate-virtual-generated-columns-from-oracle-to-postgresql-resources"></a>
+ [Getting started with AWS Database Migration Service](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_GettingStarted.html) (AWS DMS documentation)
+ [Using an Oracle database as a source for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html) (AWS DMS documentation)
+ [Using a PostgreSQL database as a target for AWS DMS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.PostgreSQL.html) (AWS DMS documentation)
+ [Generated columns in PostgreSQL](https://www.postgresql.org/docs/12/ddl-generated-columns.html) (PostgreSQL documentation)
+ [Trigger functions](https://www.postgresql.org/docs/12/plpgsql-trigger.html) (PostgreSQL documentation)
+ [Virtual columns](https://docs.oracle.com/database/121/SQLRF/statements_7002.htm#SQLRF01402) in Oracle Database (Oracle documentation)

# Set up Oracle UTL\_FILE functionality on Aurora PostgreSQL-Compatible
<a name="set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible"></a>

*Rakesh Raghav and Anuradha Chintha, Amazon Web Services*

## Summary
<a name="set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible-summary"></a>

As part of your migration journey from Oracle to Amazon Aurora PostgreSQL-Compatible Edition on the Amazon Web Services (AWS) Cloud, you might encounter multiple challenges. For example, migrating code that relies on the Oracle `UTL_FILE` utility is always a challenge. In Oracle PL/SQL, the `UTL_FILE` package is used for file operations, such as read and write, in conjunction with the underlying operating system. The `UTL_FILE` utility works with files on both the database server and client machines. 

Amazon Aurora PostgreSQL-Compatible is a managed database offering. Because of this, it isn't possible to access files on the database server. This pattern walks you through the integration of Amazon Simple Storage Service (Amazon S3) and Amazon Aurora PostgreSQL-Compatible to achieve a subset of `UTL_FILE` functionality. Using this integration, you can create and consume files without using third-party extract, transform, and load (ETL) tools or services.

Optionally, you can set up Amazon CloudWatch monitoring and Amazon SNS notifications.

We recommend thoroughly testing this solution before implementing it in a production environment.

## Prerequisites and limitations
<a name="set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ AWS Database Migration Service (AWS DMS) expertise
+ Expertise in PL/pgSQL coding
+ An Amazon Aurora PostgreSQL-Compatible cluster
+ An S3 bucket

**Limitations**

This pattern isn't a drop-in replacement for the Oracle `UTL_FILE` utility. However, the steps and sample code can be enhanced further to achieve your database modernization goals.

**Product versions**
+ Amazon Aurora PostgreSQL-Compatible Edition 11.9

## Architecture
<a name="set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible-architecture"></a>

**Target technology stack**
+ Amazon Aurora PostgreSQL-Compatible
+ Amazon CloudWatch
+ Amazon Simple Notification Service (Amazon SNS)
+ Amazon S3

**Target architecture**

The following diagram shows a high-level representation of the solution.

![\[Data files are uploaded to an S3 bucket, processed using the aws_s3 extension, and sent to the Aurora instance.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3aeecd46-1f87-41f9-a9cd-f8181f92e83f/images/4a6c5f5c-58fb-4355-b243-d09a15c1cec6.png)


1. Files are uploaded from the application into the S3 bucket.

1. The `aws_s3` extension accesses the data, using PL/pgSQL, and uploads the data to Aurora PostgreSQL-Compatible.

## Tools
<a name="set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible-tools"></a>
+ [Amazon Aurora PostgreSQL-Compatible](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) – Amazon Aurora PostgreSQL-Compatible Edition is a fully managed, PostgreSQL-compatible, and ACID-compliant relational database engine. It combines the speed and reliability of high-end commercial databases with the cost-effectiveness of open-source databases.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With only one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) – Amazon CloudWatch monitors Amazon S3 resources and use.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. In this pattern, Amazon S3 provides a storage layer to receive and store files for consumption and transmission to and from the Aurora PostgreSQL-Compatible cluster.
+ [aws\_s3](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html#aws_s3.table_import_from_s3) – The `aws_s3` extension integrates Amazon S3 and Aurora PostgreSQL-Compatible.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) coordinates and manages the delivery or sending of messages between publishers and clients. In this pattern, Amazon SNS is used to send notifications.
+ [pgAdmin](https://www.pgadmin.org/docs/) – pgAdmin is an open-source management tool for PostgreSQL. pgAdmin 4 provides a graphical interface for creating, maintaining, and using database objects.

**Code**

To achieve the required functionality, the pattern creates multiple functions with naming similar to `UTL_FILE`. The *Additional information* section contains the code base for these functions.

In the code, replace `testaurorabucket` with the name of your test S3 bucket. Replace `us-east-1` with the AWS Region where your test S3 bucket is located.

## Epics
<a name="set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible-epics"></a>

### Integrate Amazon S3 and Aurora PostgreSQL-Compatible
<a name="integrate-amazon-s3-and-aurora-postgresql-compatible"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up IAM policies. | Create AWS Identity and Access Management (IAM) policies that grant access to the S3 bucket and objects in it. For the code, see the *Additional information* section. | AWS administrator, DBA | 
| Add Amazon S3 access roles to Aurora PostgreSQL. | Create two IAM roles: one role for read access and one role for write access to Amazon S3. Attach the two roles to the Aurora PostgreSQL-Compatible cluster: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible.html) For more information, see the Aurora PostgreSQL-Compatible documentation on [importing](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_PostgreSQL.S3Import.html) and [exporting](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html) data to Amazon S3. | AWS administrator, DBA | 

### Set up the extensions in Aurora PostgreSQL-Compatible
<a name="set-up-the-extensions-in-aurora-postgresql-compatible"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the aws\_commons extension. | The `aws_commons` extension is a dependency of the `aws_s3` extension. | DBA, Developer | 
| Create the aws\_s3 extension. | The `aws_s3` extension interacts with Amazon S3. You can create both extensions as shown in the example following this table. | DBA, Developer | 
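
Both extensions can be created from any SQL client that's connected to the cluster. The following is a minimal sketch; the verification query simply lists the installed extensions.

```
-- CASCADE also installs the aws_commons dependency
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;

-- Verify that both extensions are installed
SELECT extname, extversion FROM pg_extension
WHERE extname IN ('aws_commons', 'aws_s3');
```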

### Validate Amazon S3 and Aurora PostgreSQL-Compatible integration
<a name="validate-amazon-s3-and-aurora-postgresql-compatible-integration"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test importing files from Amazon S3 into Aurora PostgreSQL. | To test importing files into Aurora PostgreSQL-Compatible, create a sample CSV file and upload it into the S3 bucket. Create a table definition based on the CSV file, and load the file into the table by using the `aws_s3.table_import_from_s3` function. | DBA, Developer | 
| Test exporting files from Aurora PostgreSQL to Amazon S3. | To test exporting files from Aurora PostgreSQL-Compatible, create a test table, populate it with data, and then export the data by using the `aws_s3.query_export_to_s3` function. | DBA, Developer | 
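
If you haven't used these functions before, the round trip might look like the following sketch. It assumes the `testaurorabucket` bucket and `us-east-1` Region that are used elsewhere in this pattern, plus a hypothetical two-column `sample.csv` object.

```
-- Import: load s3://testaurorabucket/sample.csv into a staging table
CREATE TABLE test_import (id integer, name text);

SELECT aws_s3.table_import_from_s3(
    'test_import', '', '(format csv)',
    aws_commons.create_s3_uri('testaurorabucket', 'sample.csv', 'us-east-1')
);

-- Export: write the same rows back to Amazon S3 as CSV
SELECT aws_s3.query_export_to_s3(
    'SELECT * FROM test_import',
    aws_commons.create_s3_uri('testaurorabucket', 'export/sample_out.csv', 'us-east-1'),
    options := 'format csv'
);
```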

### To mimic the UTL\_FILE utility, create wrapper functions
<a name="to-mimic-the-utl_file-utility-create-wrapper-functions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the utl\_file\_utility schema. | The schema keeps the wrapper functions together. To create the schema, run the following command.<pre>CREATE SCHEMA utl_file_utility;</pre> | DBA, Developer | 
| Create the file\_type type. | To create the `file_type` type, use the following code.<pre>CREATE TYPE utl_file_utility.file_type AS (<br />    p_path character varying(30),<br />    p_file_name character varying<br />);</pre> | DBA, Developer | 
| Create the init function. | The `init` function initializes common variables such as `bucket` or `region`. For the code, see the *Additional information* section. | DBA, Developer | 
| Create the wrapper functions. | Create the wrapper functions `fopen`, `put_line`, and `fclose`. For the code, see the *Additional information* section. | DBA, Developer | 

### Test the wrapper functions
<a name="test-the-wrapper-functions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the wrapper functions in write mode. | To test the wrapper functions in write mode, use the code provided in the *Additional information* section. | DBA, Developer | 
| Test the wrapper functions in append mode. | To test the wrapper functions in append mode, use the code provided in the *Additional information* section. | DBA, Developer | 

## Related resources
<a name="set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible-resources"></a>
+ [Amazon S3 integration](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_PostgreSQL.S3Import.html)
+ [Amazon S3](https://aws.amazon.com/s3/)
+ [Aurora](https://aws.amazon.com/rds/aurora/?nc2=h_ql_prod_db_aa&aurora-whats-new.sort-by=item.additionalFields.postDateTime&aurora-whats-new.sort-order=desc)
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/)
+ [Amazon SNS](https://aws.amazon.com/sns/?nc2=h_ql_prod_ap_sns&whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc)

## Additional information
<a name="set-up-oracle-utl_file-functionality-on-aurora-postgresql-compatible-additional"></a>

**Set up IAM policies**

Create the following policies.


| Policy name | JSON | 
| --- | --- | 
| S3IntRead | <pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Sid": "S3integrationtest",<br />            "Effect": "Allow",<br />            "Action": [<br />                "s3:GetObject",<br />                "s3:ListBucket"<br />            ],<br />            "Resource": [<br />         "arn:aws:s3:::testaurorabucket/*",<br />         "arn:aws:s3:::testaurorabucket"<br />            ]<br />        }<br />    ]<br />}</pre> | 
| S3IntWrite | <pre>{<br />    "Version": "2012-10-17",<br />    "Statement": [<br />        {<br />            "Sid": "S3integrationtest",<br />            "Effect": "Allow",<br />            "Action": [<br />                "s3:PutObject",<br />                "s3:ListBucket"<br />            ],<br />            "Resource": [<br />         "arn:aws:s3:::testaurorabucket/*",<br />         "arn:aws:s3:::testaurorabucket"<br />            ]<br />        }<br />    ]<br />}</pre> | 

**Create the init function**

To initialize common variables, such as `bucket` or `region`, create the `init` function by using the following code.

```
CREATE OR REPLACE FUNCTION utl_file_utility.init(
    )
    RETURNS void
    LANGUAGE 'plpgsql'

    COST 100
    VOLATILE 
AS $BODY$
BEGIN
      perform set_config
      ( format( '%s.%s','UTL_FILE_UTILITY', 'region' )
      , 'us-east-1'::text
      , false );

      perform set_config
      ( format( '%s.%s','UTL_FILE_UTILITY', 's3bucket' )
      , 'testaurorabucket'::text
      , false );
END;
$BODY$;
```

**Create the wrapper functions**

Create the `fopen`, `put_line`, and `fclose` wrapper functions.

*fopen*

```
CREATE OR REPLACE FUNCTION utl_file_utility.fopen(
    p_file_name character varying,
    p_path character varying,
    p_mode character DEFAULT 'W'::bpchar,
    OUT p_file_type utl_file_utility.file_type)
    RETURNS utl_file_utility.file_type
    LANGUAGE 'plpgsql'

    COST 100
    VOLATILE 
AS $BODY$
declare
    v_sql character varying;
    v_cnt_stat integer;
    v_cnt integer;
    v_tabname character varying;
    v_filewithpath character varying;
    v_region character varying;
    v_bucket character varying;

BEGIN
    /*initialize common variable */
    PERFORM utl_file_utility.init();
    v_region := current_setting( format( '%s.%s', 'UTL_FILE_UTILITY', 'region' ) );
    v_bucket :=  current_setting( format( '%s.%s', 'UTL_FILE_UTILITY', 's3bucket' ) );
    
    /* set tabname*/
    v_tabname := substring(p_file_name,1,case when strpos(p_file_name,'.') = 0 then length(p_file_name) else strpos(p_file_name,'.') - 1 end );
    v_filewithpath := case when NULLif(p_path,'') is null then p_file_name else concat_ws('/',p_path,p_file_name) end ;
    raise notice 'v_bucket %, v_filewithpath % , v_region %', v_bucket,v_filewithpath, v_region;
    
    /* APPEND MODE HANDLING; RETURN EXISTING FILE DETAILS IF PRESENT ELSE CREATE AN EMPTY FILE */
    IF p_mode = 'A' THEN
        v_sql := concat_ws('','create temp table if not exists ', v_tabname,' (col1 text)');
        execute v_sql;

        begin
        PERFORM aws_s3.table_import_from_s3 
            ( v_tabname, 
            '',  
            'DELIMITER AS ''#''', 
            aws_commons.create_s3_uri 
            (     v_bucket, 
                v_filewithpath ,
                v_region)
            );
        exception
            when others then
             raise notice 'File load issue ,%',sqlerrm;
             raise;
        end;
        execute concat_ws('','select count(*) from ',v_tabname) into v_cnt;

        IF v_cnt > 0 
        then
            p_file_type.p_path := p_path;
            p_file_type.p_file_name := p_file_name;
        else         
            PERFORM aws_s3.query_export_to_s3('select ''''', 
                            aws_commons.create_s3_uri(v_bucket, v_filewithpath, v_region)            
                              );

            p_file_type.p_path := p_path;
            p_file_type.p_file_name := p_file_name;        
        end if;
        v_sql := concat_ws('','drop table ', v_tabname);        
        execute v_sql;            
    ELSEIF p_mode = 'W' THEN
            PERFORM aws_s3.query_export_to_s3('select ''''', 
                            aws_commons.create_s3_uri(v_bucket, v_filewithpath, v_region)            
                              );
            p_file_type.p_path := p_path;
            p_file_type.p_file_name := p_file_name;
    END IF;    
    
EXCEPTION
        when others then
            p_file_type.p_path := p_path;
            p_file_type.p_file_name := p_file_name;
            raise notice 'fopenerror,%',sqlerrm;
            raise;
END;
$BODY$;
```

*put\_line*

```
CREATE OR REPLACE FUNCTION utl_file_utility.put_line(
    p_file_name character varying,
    p_path character varying,
    p_line text,
    p_flag character DEFAULT 'W'::bpchar)
    RETURNS boolean
    LANGUAGE 'plpgsql'

    COST 100
    VOLATILE 
AS $BODY$
/**************************************************************************
* Write line, p_line in windows format to file, p_fp - with carriage return
* added before new line.
**************************************************************************/
declare
    v_sql varchar;
    v_ins_sql varchar;
    v_cnt INTEGER;
    v_filewithpath character varying;
    v_tabname  character varying;
    v_bucket character varying;
    v_region character varying;    

BEGIN
 PERFORM utl_file_utility.init();

/* check if temp table already exist */

 v_tabname := substring(p_file_name,1,case when strpos(p_file_name,'.') = 0 then length(p_file_name) else strpos(p_file_name,'.') - 1 end );

 v_sql := concat_ws('','select count(1) FROM pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace where n.nspname like ''pg_temp_%''' 
                         ,' AND pg_catalog.pg_table_is_visible(c.oid) AND Upper(relname) = Upper( '''
                         ,  v_tabname ,''' ) ');
  
 execute v_sql into v_cnt;
  
  IF v_cnt = 0 THEN
         v_sql := concat_ws('','create temp table ',v_tabname,' (col text)');
        execute v_sql;
        /* CHECK IF APPEND MODE */
        IF upper(p_flag) = 'A' THEN
            PERFORM utl_file_utility.init();                        
            v_region := current_setting( format( '%s.%s', 'UTL_FILE_UTILITY', 'region' ) );
            v_bucket :=  current_setting( format( '%s.%s', 'UTL_FILE_UTILITY', 's3bucket' ) );
            
            /* set tabname*/            
            v_filewithpath := case when NULLif(p_path,'') is null then p_file_name else concat_ws('/',p_path,p_file_name) end ;            
            
            begin
               PERFORM aws_s3.table_import_from_s3 
                     ( v_tabname, 
                          '',  
                       'DELIMITER AS ''#''', 
                        aws_commons.create_s3_uri 
                           ( v_bucket, 
                               v_filewithpath, 
                               v_region    )
                    );
            exception
                when others then
                    raise notice  'Error Message : %',sqlerrm;
                    raise;
            end;    
        END IF;    
    END IF;
    /* INSERT INTO TEMP TABLE */              
    v_ins_sql := concat_ws('','insert into ',v_tabname,' values(''',p_line,''')');
    execute v_ins_sql;
    RETURN TRUE;
    exception
            when others then
                raise notice  'Error Message : %',sqlerrm;
                raise;
END;
$BODY$;
```

*fclose*

```
CREATE OR REPLACE FUNCTION utl_file_utility.fclose(
    p_file_name character varying,
    p_path character varying)
    RETURNS boolean
    LANGUAGE 'plpgsql'

    COST 100
    VOLATILE 
AS $BODY$
DECLARE
    v_filewithpath character varying;
    v_bucket character varying;
    v_region character varying;
    v_tabname character varying;
    v_sql character varying;
BEGIN
      PERFORM utl_file_utility.init();
  
    v_region := current_setting( format( '%s.%s', 'UTL_FILE_UTILITY', 'region' ) );
    v_bucket :=  current_setting( format( '%s.%s', 'UTL_FILE_UTILITY', 's3bucket' ) );

    v_tabname := substring(p_file_name,1,case when strpos(p_file_name,'.') = 0 then length(p_file_name) else strpos(p_file_name,'.') - 1 end );
    v_filewithpath := case when NULLif(p_path,'') is null then p_file_name else concat_ws('/',p_path,p_file_name) end ;

    raise notice 'v_bucket %, v_filewithpath % , v_region %', v_bucket,v_filewithpath, v_region ;
    
    /* exporting to s3 */
    perform aws_s3.query_export_to_s3
        (concat_ws('','select * from ',v_tabname,'  order by ctid asc'), 
            aws_commons.create_s3_uri(v_bucket, v_filewithpath, v_region)
        );
    v_sql := concat_ws('','drop table ', v_tabname);
    execute v_sql;    
    RETURN TRUE;
EXCEPTION 
       when others then
     raise notice 'error fclose %',sqlerrm;
     RAISE;
END;
$BODY$;
```

**Test your setup and wrapper functions**

Use the following anonymous code blocks to test your setup.

*Test the write mode*

The following code writes a file named `s3inttest` in the S3 bucket.

```
do $$
declare
l_file_name varchar := 's3inttest' ;
l_path varchar := 'integration_test' ;
l_mode char(1) := 'W';
l_fs utl_file_utility.file_type ;
l_status boolean;

begin
select * from
utl_file_utility.fopen( l_file_name, l_path , l_mode ) into l_fs ;
raise notice 'fopen : l_fs : %', l_fs;

select * from
utl_file_utility.put_line( l_file_name, l_path ,'this is test file:in s3bucket: for test purpose', l_mode ) into l_status ;
raise notice 'put_line : l_status %', l_status;

select * from utl_file_utility.fclose( l_file_name , l_path ) into l_status ;
raise notice 'fclose : l_status %', l_status;

end;
$$
```

*Test the append mode*

The following code appends lines onto the `s3inttest` file that was created in the previous test.

```
do $$
declare
l_file_name varchar := 's3inttest' ;
l_path varchar := 'integration_test' ;
l_mode char(1) := 'A';
l_fs utl_file_utility.file_type ;
l_status boolean;

begin
select * from
utl_file_utility.fopen( l_file_name, l_path , l_mode ) into l_fs ;
raise notice 'fopen : l_fs : %', l_fs;


select * from
utl_file_utility.put_line( l_file_name, l_path ,'this is test file:in s3bucket: for test purpose : append 1', l_mode ) into l_status ;
raise notice 'put_line : l_status %', l_status;

select * from
utl_file_utility.put_line( l_file_name, l_path ,'this is test file:in s3bucket : for test purpose : append 2', l_mode ) into l_status ;
raise notice 'put_line : l_status %', l_status;

select * from utl_file_utility.fclose( l_file_name , l_path ) into l_status ;
raise notice 'fclose : l_status %', l_status;

end;
$$
```

**Amazon SNS notifications**

Optionally, you can set up Amazon CloudWatch monitoring and Amazon SNS notifications on the S3 bucket. For more information, see [Monitoring Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/monitoring-overview.html) and [Setting up Amazon SNS Notifications](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html).

# Validate database objects after migrating from Oracle to Amazon Aurora PostgreSQL
<a name="validate-database-objects-after-migrating-from-oracle-to-amazon-aurora-postgresql"></a>

*Venkatramana Chintha and Eduardo Valentim, Amazon Web Services*

## Summary
<a name="validate-database-objects-after-migrating-from-oracle-to-amazon-aurora-postgresql-summary"></a>

This pattern describes a step-by-step approach to validate objects after migrating an Oracle database to Amazon Aurora PostgreSQL-Compatible Edition.

This pattern outlines usage scenarios and steps for database object validation; for more detailed information, see [Validating database objects after migration using AWS SCT and AWS DMS](https://aws.amazon.com/blogs/database/validating-database-objects-after-migration-using-aws-sct-and-aws-dms/) on the [AWS Database blog](https://aws.amazon.com/blogs/).

## Prerequisites and limitations
<a name="validate-database-objects-after-migrating-from-oracle-to-amazon-aurora-postgresql-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An on-premises Oracle database that was migrated to an Aurora PostgreSQL-Compatible database. 
+ Sign-in credentials that have the [AmazonRDSDataFullAccess](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/query-editor.html) policy applied, for the Aurora PostgreSQL-Compatible database. 
+ This pattern uses the [query editor for Aurora Serverless DB clusters](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/query-editor.html), which is available in the Amazon Relational Database Service (Amazon RDS) console. However, you can use this pattern with any other query editor. 

**Limitations**
+ Oracle SYNONYM objects are not available in PostgreSQL, but they can be partially emulated and validated through views or `SET search_path` queries (see the sketch after this list).
+ The Amazon RDS query editor is available only in [certain AWS Regions and for certain MySQL and PostgreSQL versions](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/query-editor.html).
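
For example, a synonym that pointed at a table in another schema is often emulated with a view or a search path setting. The following is a sketch only; `app_schema` and `emp` are hypothetical names.

```
-- Option 1: a view in the caller's schema stands in for the synonym
CREATE VIEW public.emp AS SELECT * FROM app_schema.emp;

-- Option 2: resolve unqualified names through the owning schema
SET search_path = app_schema, public;
```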

## Architecture
<a name="validate-database-objects-after-migrating-from-oracle-to-amazon-aurora-postgresql-architecture"></a>

 

![\[Database migration workflow showing on-premises Oracle to AWS Aurora PostgreSQL via client program and validation scripts.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7c028960-6dea-46ad-894d-e42cefd50c03/images/be5f8ae3-f5af-4c5e-9440-09ab410beaa1.png)


 

## Tools
<a name="validate-database-objects-after-migrating-from-oracle-to-amazon-aurora-postgresql-tools"></a>

+ [Amazon Aurora PostgreSQL-Compatible Edition](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.AuroraPostgreSQL.html) – Aurora PostgreSQL-Compatible is a fully managed, PostgreSQL-compatible, and ACID-compliant relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.
+ [Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html) – Amazon Relational Database Service (Amazon RDS) makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.
+ [Query editor for Aurora Serverless](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/query-editor.html) – The query editor helps you run SQL queries in the Amazon RDS console. You can run any valid SQL statement on the Aurora Serverless DB cluster, including data manipulation and data definition statements.

To validate the objects, use the full scripts in the "Object validation scripts" file in the "Attachments" section. Use the following table for reference.


| Oracle object | Script to use | 
| --- | --- | 
| Packages | Query 1 | 
| Tables | Query 3 | 
| Views | Query 5 | 
| Sequences | Query 7 | 
| Triggers | Query 9 | 
| Primary keys | Query 11 | 
| Indexes | Query 13 | 
| Check constraints | Query 15 | 
| Foreign keys | Query 17 | 


| PostgreSQL object | Script to use | 
| --- | --- | 
| Packages | Query 2 | 
| Tables | Query 4 | 
| Views | Query 6 | 
| Sequences | Query 8 | 
| Triggers | Query 10 | 
| Primary keys | Query 12 | 
| Indexes | Query 14 | 
| Check constraints | Query 16 | 
| Foreign keys | Query 18 | 
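
The numbered queries in the attachment are the authoritative scripts. For orientation, they are object-count queries of roughly the following shape; `your_schema` is a placeholder, as in the attachment.

```
-- Oracle: count objects per type for the migrated schema
SELECT object_type, COUNT(*) FROM dba_objects
WHERE owner = UPPER('your_schema') GROUP BY object_type;

-- PostgreSQL: for example, count ordinary tables in the target schema
SELECT COUNT(*) FROM information_schema.tables
WHERE table_schema = 'your_schema' AND table_type = 'BASE TABLE';
```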

## Epics
<a name="validate-database-objects-after-migrating-from-oracle-to-amazon-aurora-postgresql-epics"></a>

### Validate objects in the source Oracle database
<a name="validate-objects-in-the-source-oracle-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the "packages" validation query in the source Oracle database.  | Download and open the "Object validation scripts" file from the "Attachments" section. Connect to the source Oracle database through your client program. Run the "Query 1" validations script from the "Object validation scripts" file. Important: Enter your Oracle user name instead of "your\$1schema" in the queries. Make sure you record your query results. | Developer, DBA | 
| Run the "tables" validation query.  | Run the "Query 3" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "views" validation query.  | Run the "Query 5" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "sequences" count validation.  | Run the "Query 7" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "triggers" validation query.  | Run the "Query 9" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "primary keys" validation query.  | Run the "Query 11" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "indexes" validation query.  | Run the "Query 13" validation script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "check constraints" validation query.  | Run the "Query 15" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "foreign keys" validation query.  | Run the "Query 17" validation script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 

### Validate objects in the target Aurora PostgreSQL-Compatible database
<a name="validate-objects-in-the-target-aurora-postgresql-compatible-database"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Connect to the target Aurora PostgreSQL-Compatible database by using the query editor. | Sign in to the AWS Management Console and open the Amazon RDS console. In the upper-right corner, choose the AWS Region in which you created the Aurora PostgreSQL-Compatible database. In the navigation pane, choose "Databases," and choose the target Aurora PostgreSQL-Compatible database. In "Actions," choose "Query." Important: If you haven't connected to the database before, the "Connect to database" page opens. You then need to enter your database information, such as user name and password. | Developer, DBA | 
| Run the "packages" validation query. | Run the "Query 2" script from the "Object validation scripts" file in the "Attachments" section. Make sure you record your query results. | Developer, DBA | 
| Run the "tables" validation query.  | Return to the query editor for the Aurora PostgreSQL-Compatible database, and run the "Query 4" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "views" validation query.  | Return to the query editor for the Aurora PostgreSQL-Compatible database, and run the "Query 6" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "sequences" count validation.  | Return to the query editor for the Aurora PostgreSQL-Compatible database, and run the "Query 8" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "triggers" validation query.  | Return to the query editor for the Aurora PostgreSQL-Compatible database, and run the "Query 10" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "primary keys" validation query.  | Return to the query editor for the Aurora PostgreSQL-Compatible database, and run the "Query 12" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "indexes" validation query.  | Return to the query editor for the Aurora PostgreSQL-Compatible database, and run the "Query 14" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "check constraints" validation query.  | Run the "Query 16" script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 
| Run the "foreign keys" validation query.  | Run the "Query 18" validation script from the "Object validation scripts" file. Make sure you record your query results. | Developer, DBA | 

### Compare source and target database validation records
<a name="compare-source-and-target-database-validation-records"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Compare and validate both query results.  | Compare the query results of the Oracle and Aurora PostgreSQL-Compatible databases to validate all objects. If they all match, then all objects have been successfully validated. | Developer, DBA | 

## Related resources
<a name="validate-database-objects-after-migrating-from-oracle-to-amazon-aurora-postgresql-resources"></a>
+ [Validating database objects after a migration using AWS SCT and AWS DMS](https://aws.amazon.com/blogs/database/validating-database-objects-after-migration-using-aws-sct-and-aws-dms/)
+ [Amazon Aurora Features: PostgreSQL-Compatible Edition](https://aws.amazon.com/rds/aurora/postgresql-features/)

## Attachments
<a name="attachments-7c028960-6dea-46ad-894d-e42cefd50c03"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/7c028960-6dea-46ad-894d-e42cefd50c03/attachments/attachment.zip)