

# Importing data into Oracle on Amazon RDS
<a name="Oracle.Procedural.Importing"></a>

How you import data into an Amazon RDS for Oracle DB instance depends on the following: 
+ The amount of data you have
+ The number of database objects in your database
+ The variety of database objects in your database

For example, you can use the following tools, depending on your requirements:
+ Oracle SQL Developer – Import a simple, 20 MB database.
+ Oracle Data Pump – Import complex databases, or databases that range from several hundred megabytes to several terabytes in size. For example, you can transport tablespaces from an on-premises database to your RDS for Oracle DB instance. You can use Amazon S3 or Amazon EFS to transfer the data files and metadata. For more information, see [Migrating using Oracle transportable tablespaces](oracle-migrating-tts.md), [Amazon EFS integration](oracle-efs-integration.md), and [Amazon S3 integration](oracle-s3-integration.md).
+ AWS Database Migration Service (AWS DMS) – Migrate databases without downtime. For more information about AWS DMS, see [What is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) and the blog post [Migrating Oracle databases with near-zero downtime using AWS DMS](https://aws.amazon.com/blogs/database/migrating-oracle-databases-with-near-zero-downtime-using-aws-dms/).

**Important**  
Before you use the preceding migration techniques, we recommend that you back up your database. After you import the data, you can back up your RDS for Oracle DB instances by creating snapshots. Later, you can restore the snapshots. For more information, see [Backing up, restoring, and exporting data](CHAP_CommonTasks.BackupRestore.md).
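For example, you can take a manual snapshot with the AWS CLI before you begin the import. The instance and snapshot identifiers below are hypothetical:

```
aws rds create-db-snapshot \
    --db-instance-identifier mydbinstance \
    --db-snapshot-identifier mydbinstance-before-import
```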

For many database engines, ongoing replication can continue until you are ready to switch over to the target database. You can use AWS DMS to migrate to RDS for Oracle from either the same database engine or a different engine. If you migrate from a different database engine, you can use the AWS Schema Conversion Tool to migrate schema objects that AWS DMS doesn't migrate.

**Topics**
+ [Importing using Oracle SQL Developer](Oracle.Procedural.Importing.SQLDeveloper.md)
+ [Migrating using Oracle transportable tablespaces](oracle-migrating-tts.md)
+ [Importing using Oracle Data Pump](Oracle.Procedural.Importing.DataPump.md)
+ [Importing using Oracle Export/Import](Oracle.Procedural.Importing.ExportImport.md)
+ [Importing using Oracle SQL\*Loader](Oracle.Procedural.Importing.SQLLoader.md)
+ [Migrating with Oracle materialized views](Oracle.Procedural.Importing.Materialized.md)

# Importing using Oracle SQL Developer
<a name="Oracle.Procedural.Importing.SQLDeveloper"></a>

Oracle SQL Developer is a graphical Java tool distributed without cost by Oracle. SQL Developer provides options for migrating data between two Oracle databases, or for migrating data from other databases, such as MySQL, to an Oracle database. This tool is best for migrating small databases. 

You can install this tool on your desktop computer (Windows, Linux, or Mac) or on one of your servers. After you install SQL Developer, you can use it to connect to your source and target databases. Use the **Database Copy** command on the Tools menu to copy your data to your RDS for Oracle DB instance. 

To download SQL Developer, go to [http://www.oracle.com/technetwork/developer-tools/sql-developer](http://www.oracle.com/technetwork/developer-tools/sql-developer). 

We recommend that you read the Oracle SQL Developer product documentation before you begin migrating your data. Oracle also has documentation on how to migrate from other databases, including MySQL and SQL Server. For more information, see [http://www.oracle.com/technetwork/database/migration](http://www.oracle.com/technetwork/database/migration) in the Oracle documentation. 

# Migrating using Oracle transportable tablespaces
<a name="oracle-migrating-tts"></a>

You can use the Oracle transportable tablespaces feature to copy a set of tablespaces from an on-premises Oracle database to an RDS for Oracle DB instance. At the physical level, you transfer source data files and metadata files to your target DB instance using either Amazon EFS or Amazon S3. The transportable tablespaces feature uses the `rdsadmin.rdsadmin_transport_util` package. For syntax and semantics of this package, see [Transporting tablespaces](rdsadmin_transport_util.md).

For blog posts that explain how to transport tablespaces, see [Migrate Oracle Databases to AWS using transportable tablespace](https://aws.amazon.com/blogs/database/migrate-oracle-databases-to-aws-using-transportable-tablespace/) and [Amazon RDS for Oracle Transportable Tablespaces using RMAN](https://aws.amazon.com/blogs/database/amazon-rds-for-oracle-transportable-tablespaces-using-rman/).

**Topics**
+ [Overview of Oracle transportable tablespaces](#oracle-migrating-tts.overview)
+ [Phase 1: Set up your source host](#oracle-migrating-tts.setup-phase)
+ [Phase 2: Prepare the full tablespace backup](#oracle-migrating-tts.initial-br-phase)
+ [Phase 3: Make and transfer incremental backups](#oracle-migrating-tts.roll-forward-phase)
+ [Phase 4: Transport the tablespaces](#oracle-migrating-tts.final-br-phase)
+ [Phase 5: Validate the transported tablespaces](#oracle-migrating-tts.validate)
+ [Phase 6: Clean up leftover files](#oracle-migrating-tts.cleanup)

## Overview of Oracle transportable tablespaces
<a name="oracle-migrating-tts.overview"></a>

A transportable tablespace set consists of data files for the set of tablespaces being transported and an export dump file containing tablespace metadata. In a physical migration solution such as transportable tablespaces, you transfer physical files: data files, configuration files, and Data Pump dump files.

**Topics**
+ [Advantages and disadvantages of transportable tablespaces](#oracle-migrating-tts.overview.benefits)
+ [Limitations for transportable tablespaces](#oracle-migrating-tts.limitations)
+ [Prerequisites for transportable tablespaces](#oracle-migrating-tts.requirements)

### Advantages and disadvantages of transportable tablespaces
<a name="oracle-migrating-tts.overview.benefits"></a>

We recommend that you use transportable tablespaces when you need to migrate one or more large tablespaces to RDS with minimum downtime. Transportable tablespaces offer the following advantages over logical migration:
+ Downtime is lower than with most other Oracle migration solutions.
+ Because the transportable tablespace feature copies only physical files, it avoids the data integrity errors and logical corruption that can occur in logical migration.
+ No additional license is required.
+ You can migrate a set of tablespaces across different platforms and endianness types, for example, from an Oracle Solaris platform to Linux. However, transporting tablespaces to and from Windows servers isn't supported.
**Note**  
Linux is fully tested and supported. Not all UNIX variations have been tested.

If you use transportable tablespaces, you can transport data using either Amazon S3 or Amazon EFS:
+ When you use EFS, your backups remain in the EFS file system for the duration of the import. You can remove the files afterward. In this technique, you don't need to provision EBS storage for your DB instance. For this reason, we recommend using Amazon EFS instead of S3. For more information, see [Amazon EFS integration](oracle-efs-integration.md).
+ When you use S3, you download RMAN backups to EBS storage attached to your DB instance. The files remain in your EBS storage during the import. After the import, you can free up this space, which remains allocated to your DB instance.

The primary disadvantage of transportable tablespaces is that you need relatively advanced knowledge of Oracle Database. For more information, see [Transporting Tablespaces Between Databases](https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/transporting-data.html#GUID-F7B2B591-AA88-4D16-8DCF-712763923FFB) in the *Oracle Database Administrator’s Guide*.

### Limitations for transportable tablespaces
<a name="oracle-migrating-tts.limitations"></a>

Oracle Database limitations for transportable tablespaces apply when you use this feature in RDS for Oracle. For more information, see [Limitations on Transportable Tablespaces]( https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/transporting-data.html#GUID-DAB51E42-9BBC-4001-B5CB-0ECDBE128787) and [General Limitations on Transporting Data](https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/transporting-data.html#GUID-28800719-6CB9-4A71-95DD-4B61AA603173) in the *Oracle Database Administrator’s Guide*. Note the following additional limitations for transportable tablespaces in RDS for Oracle:
+ Neither the source nor the target database can use Standard Edition 2 (SE2). Only Enterprise Edition is supported.
+ You can't use an Oracle Database 11g database as a source. The RMAN cross-platform transportable tablespaces feature relies on the RMAN transport mechanism, which Oracle Database 11g doesn't support.
+ You can't migrate data from an RDS for Oracle DB instance using transportable tablespaces. You can only use transportable tablespaces to migrate data to an RDS for Oracle DB instance.
+ The Windows operating system isn't supported.
+ You can't transport tablespaces into a database at a lower release level. The target database must be at the same or later release level as the source database. For example, you can’t transport tablespaces from Oracle Database 21c into Oracle Database 19c.
+ You can't transport administrative tablespaces such as `SYSTEM` and `SYSAUX`.
+ You can't transport non-data objects such as PL/SQL packages, Java classes, views, triggers, sequences, users, roles, and temporary tables. To transport non-data objects, create them manually or use Data Pump metadata export and import. For more information, see [My Oracle Support Note 1454872.1](https://support.oracle.com/knowledge/Oracle%20Cloud/1454872_1.html).
+ You can't transport tablespaces that are encrypted or use encrypted columns.
+ If you transfer files using Amazon S3, the maximum supported file size is 5 TiB.
+ If the source database uses Oracle options such as Spatial, you can't transport tablespaces unless the same options are configured on the target database.
+ You can't transport tablespaces into an RDS for Oracle DB instance in an Oracle replica configuration. As a workaround, you can delete all replicas, transport the tablespaces, and then recreate the replicas.

### Prerequisites for transportable tablespaces
<a name="oracle-migrating-tts.requirements"></a>

Before you begin, complete the following tasks:
+ Review the requirements for transportable tablespaces described in the following documents in My Oracle Support:
  + [Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)](https://support.oracle.com/epmos/faces/DocumentDisplay?id=2471245.1)
  + [Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable (Doc ID 1454872.1)](https://support.oracle.com/epmos/faces/DocumentDisplay?id=1454872.1)
  + [Primary Note for Transportable Tablespaces (TTS) -- Common Questions and Issues (Doc ID 1166564.1)](https://support.oracle.com/epmos/faces/DocumentDisplay?id=1166564.1)
+ Plan for endianness conversion. If you specify the source platform ID, RDS for Oracle converts the endianness automatically. To learn how to find platform IDs, see [Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1)](https://support.oracle.com/epmos/faces/DocumentDisplay?id=413484.1). 
+ Make sure that the transportable tablespace feature is enabled on your target DB instance. The feature is enabled only if you don't get an `ORA-20304` error when you run the following query:

  ```
  SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);
  ```

  If the transportable tablespace feature isn't enabled, reboot your DB instance. For more information, see [Rebooting a DB instance](USER_RebootInstance.md).
+ Make sure that the time zone file is the same in the source and target databases.
+ Make sure that the database character sets on the source and target databases meet either of the following requirements:
  + The character sets are the same.
  + The character sets are compatible. For a list of compatibility requirements, see [General Limitations on Transporting Data](https://docs.oracle.com/en/database/oracle/oracle-database/19/spmdu/general-limitations-on-transporting-data.html#GUID-28800719-6CB9-4A71-95DD-4B61AA603173) in the Oracle Database documentation.
+ If you plan to transfer files using Amazon S3, do the following:
  + Make sure that an Amazon S3 bucket is available for file transfers, and that the Amazon S3 bucket is in the same AWS Region as your DB instance. For instructions, see [Create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service Getting Started Guide*.
  + Prepare the Amazon S3 bucket for Amazon RDS integration by following the instructions in [Configuring IAM permissions for RDS for Oracle integration with Amazon S3](oracle-s3-integration.preparing.md).
+ If you plan to transfer files using Amazon EFS, make sure that you have configured EFS according to the instructions in [Amazon EFS integration](oracle-efs-integration.md).
+ We strongly recommend that you turn on automatic backups in your target DB instance. Because the [metadata import step](#oracle-migrating-tts.transport.import-dmp) can potentially fail, it's important to be able to restore your DB instance to its state before the import, so that you avoid having to back up, transfer, and import your tablespaces again.
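As part of planning for endianness conversion, you can find the platform ID and endian format of your source platform by querying the source database directly:

```
SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
FROM   V$TRANSPORTABLE_PLATFORM
ORDER BY PLATFORM_ID;
```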

## Phase 1: Set up your source host
<a name="oracle-migrating-tts.setup-phase"></a>

In this step, you copy the transport tablespaces scripts provided by My Oracle Support and set up necessary configuration files. In the following steps, the *source host* is running the database that contains the tablespaces to be transported to your *target instance*.

**To set up your source host**

1. Log in to your source host as the owner of your Oracle home.

1. Make sure that your `ORACLE_HOME` and `ORACLE_SID` environment variables point to your source database.

1. Log in to your database as an administrator, and verify that the time zone version, DB character set, and national character set are the same as in your target database.

   ```
   SELECT * FROM V$TIMEZONE_FILE;
   SELECT * FROM NLS_DATABASE_PARAMETERS 
     WHERE PARAMETER IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
   ```

1. Set up the transportable tablespace utility as described in [Oracle Support note 2471245.1](https://support.oracle.com/epmos/faces/DocumentDisplay?id=2471245.1). 

   The setup includes editing the `xtt.properties` file on your source host. The following sample `xtt.properties` file specifies backups of three tablespaces in the `/dsk1/backups` directory. These are the tablespaces that you intend to transport to your target DB instance. It also specifies the source platform ID to convert the endianness automatically.
**Note**  
For valid platform IDs, see [Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1)](https://support.oracle.com/epmos/faces/DocumentDisplay?id=413484.1). 

   ```
   #linux system 
   platformid=13
   #list of tablespaces to transport
   tablespaces=TBS1,TBS2,TBS3
   #location where backup will be generated
   src_scratch_location=/dsk1/backups
   #RMAN command for performing backup
   usermantransport=1
   ```

## Phase 2: Prepare the full tablespace backup
<a name="oracle-migrating-tts.initial-br-phase"></a>

In this phase, you back up your tablespaces for the first time, transfer the backups to your target host, and then restore them using the procedure `rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces`. When this phase is complete, the initial tablespace backups reside on your target DB instance and can be updated with incremental backups.

**Topics**
+ [Step 1: Back up the tablespaces on your source host](#oracle-migrating-tts.backup-full)
+ [Step 2: Transfer the backup files to your target DB instance](#oracle-migrating-tts.transfer-full)
+ [Step 3: Import the tablespaces on your target DB instance](#oracle-migrating-tts.initial-tts-import)

### Step 1: Back up the tablespaces on your source host
<a name="oracle-migrating-tts.backup-full"></a>

In this step, you use the `xttdriver.pl` script to make a full backup of your tablespaces. The script writes its output files to the directory specified by the `TMPDIR` environment variable.

**To back up your tablespaces**

1. If your tablespaces are in read-only mode, log in to your source database as a user with the `ALTER TABLESPACE` privilege, and place your tablespaces in read/write mode. Otherwise, skip to the next step.

   The following example places `tbs1`, `tbs2`, and `tbs3` in read/write mode.

   ```
   ALTER TABLESPACE tbs1 READ WRITE;
   ALTER TABLESPACE tbs2 READ WRITE;
   ALTER TABLESPACE tbs3 READ WRITE;
   ```

1. Back up your tablespaces using the `xttdriver.pl` script. Optionally, you can specify `--debug` to run the script in debug mode.

   ```
   export TMPDIR=location_of_log_files
   cd location_of_xttdriver.pl
   $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
   ```

### Step 2: Transfer the backup files to your target DB instance
<a name="oracle-migrating-tts.transfer-full"></a>

In this step, you copy the backup and configuration files from your scratch location to your target DB instance. Choose one of the following options:
+ If the source and target hosts share an Amazon EFS file system, use an operating system utility such as `cp` to copy your backup files and the `res.txt` file from your scratch location to a shared directory. Then skip to [Step 3: Import the tablespaces on your target DB instance](#oracle-migrating-tts.initial-tts-import).
+ If you need to stage your backups to an Amazon S3 bucket, complete the following steps.

![\[Transfer files using either Amazon S3 or Amazon EFS.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/oracle-tts.png)
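If you use the Amazon EFS option above, the copy can be sketched with standard operating system commands. The paths below are hypothetical: `/dsk1/backups` is the scratch location from `xtt.properties`, and `/mnt/efs/transport` is a directory on the shared EFS file system.

```
# Copy the backup files and the res.txt file from the scratch
# location to a directory on the shared Amazon EFS file system.
cp /dsk1/backups/* /mnt/efs/transport/
```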


#### Step 2.2: Upload the backups to your Amazon S3 bucket
<a name="oracle-migrating-tts.upload-full"></a>

Upload your backups and the `res.txt` file from your scratch directory to your Amazon S3 bucket. For more information, see [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html) in the *Amazon Simple Storage Service User Guide*.

#### Step 2.3: Download the backups from your Amazon S3 bucket to your target DB instance
<a name="oracle-migrating-tts.download-full"></a>

In this step, you use the procedure `rdsadmin.rdsadmin_s3_tasks.download_from_s3` to download your backups to your RDS for Oracle DB instance.

**To download your backups from your Amazon S3 bucket**

1. Start SQL\*Plus or Oracle SQL Developer and log in to your RDS for Oracle DB instance.

1. Download the backups from the Amazon S3 bucket to your target DB instance by using the Amazon RDS procedure `rdsadmin.rdsadmin_s3_tasks.download_from_s3`. The following example downloads all of the files from an Amazon S3 bucket named `amzn-s3-demo-bucket` to the `DATA_PUMP_DIR` directory.

   ```
   EXEC UTL_FILE.FREMOVE ('DATA_PUMP_DIR', 'res.txt');
   SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
     p_bucket_name    =>  'amzn-s3-demo-bucket',
     p_directory_name =>  'DATA_PUMP_DIR') 
   AS TASK_ID FROM DUAL;
   ```

   The `SELECT` statement returns the ID of the task in a `VARCHAR2` data type. For more information, see [Downloading files from an Amazon S3 bucket to an Oracle DB instance](oracle-s3-integration.using.md#oracle-s3-integration.using.download).

### Step 3: Import the tablespaces on your target DB instance
<a name="oracle-migrating-tts.initial-tts-import"></a>

To restore your tablespaces to your target DB instance, use the procedure `rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces`. This procedure automatically converts the data files to the correct endian format.

If you import from a platform other than Linux, specify the source platform using the parameter `p_platform_id` when you call `import_xtts_tablespaces`. Make sure that the platform ID that you specify matches the one specified in the `xtt.properties` file in [Phase 1: Set up your source host](#oracle-migrating-tts.setup-phase).

**Import the tablespaces on your target DB instance**

1. Start an Oracle SQL client and log in to your target RDS for Oracle DB instance as the master user.

1. Run the procedure `rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces`, specifying the tablespaces to import and the directory containing the backups.

   The following example imports the tablespaces *TBS1*, *TBS2*, and *TBS3* from the directory *DATA\_PUMP\_DIR*. The source platform is AIX-Based Systems (64-bit), which has a platform ID of `6`. You can find the platform IDs by querying `V$TRANSPORTABLE_PLATFORM`.

   ```
   VAR task_id CLOB
   
   BEGIN
     :task_id:=rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces(
           'TBS1,TBS2,TBS3',
           'DATA_PUMP_DIR',
           p_platform_id => 6);
   END;
   /
   
   PRINT task_id
   ```

1. (Optional) Monitor progress by querying the table `rdsadmin.rds_xtts_operation_info`. The `xtts_operation_state` column shows the value `EXECUTING`, `COMPLETED`, or `FAILED`.

   ```
   SELECT * FROM rdsadmin.rds_xtts_operation_info;
   ```
**Note**  
For long-running operations, you can also query `V$SESSION_LONGOPS`, `V$RMAN_STATUS`, and `V$RMAN_OUTPUT`.

1. View the log of the completed import by using the task ID from the previous step.

   ```
   SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-'||'&task_id'||'.log'));
   ```

   Make sure that the import succeeded before continuing to the next step.

## Phase 3: Make and transfer incremental backups
<a name="oracle-migrating-tts.roll-forward-phase"></a>

In this phase, you make and transfer incremental backups periodically while the source database is active. This technique reduces the size of your final tablespace backup. If you take multiple incremental backups, you must copy the `res.txt` file after the last incremental backup before you can apply it on the target instance.

The steps are the same as in [Phase 2: Prepare the full tablespace backup](#oracle-migrating-tts.initial-br-phase), except that the import step is optional.
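Per My Oracle Support note 2471245.1, you run the same command for each incremental backup that you used for the full backup; the script determines from its tracking files which changes still need to be backed up. A sketch:

```
export TMPDIR=location_of_log_files
cd location_of_xttdriver.pl
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
```

After each incremental backup, transfer the new backup files to your target DB instance as in Phase 2, and copy `res.txt` after the last incremental backup.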

## Phase 4: Transport the tablespaces
<a name="oracle-migrating-tts.final-br-phase"></a>

In this phase, you back up your read-only tablespaces and export Data Pump metadata, transfer these files to your target host, and import both the tablespaces and the metadata.

**Topics**
+ [Step 1: Back up your read-only tablespaces](#oracle-migrating-tts.final-backup)
+ [Step 2: Export tablespace metadata on your source host](#oracle-migrating-tts.transport.export)
+ [Step 3: (Amazon S3 only) Transfer the backup and export files to your target DB instance](#oracle-migrating-tts.transport)
+ [Step 4: Import the tablespaces on your target DB instance](#oracle-migrating-tts.restore-full)
+ [Step 5: Import tablespace metadata on your target DB instance](#oracle-migrating-tts.transport.import-dmp)

### Step 1: Back up your read-only tablespaces
<a name="oracle-migrating-tts.final-backup"></a>

This step is identical to [Step 1: Back up the tablespaces on your source host](#oracle-migrating-tts.backup-full), with one key difference: you place your tablespaces in read-only mode before backing up your tablespaces for the last time.

The following example places `tbs1`, `tbs2`, and `tbs3` in read-only mode.

```
ALTER TABLESPACE tbs1 READ ONLY;
ALTER TABLESPACE tbs2 READ ONLY;
ALTER TABLESPACE tbs3 READ ONLY;
```

### Step 2: Export tablespace metadata on your source host
<a name="oracle-migrating-tts.transport.export"></a>

Export your tablespace metadata by running the `expdp` utility on your source host. The following example exports tablespaces *TBS1*, *TBS2*, and *TBS3* to dump file *xttdump.dmp* in directory *DATA\_PUMP\_DIR*.

```
expdp username/pwd \
dumpfile=xttdump.dmp \
directory=DATA_PUMP_DIR \
statistics=NONE \
transport_tablespaces=TBS1,TBS2,TBS3 \
transport_full_check=y \
logfile=tts_export.log
```

If *DATA\_PUMP\_DIR* is a shared directory in Amazon EFS, skip to [Step 4: Import the tablespaces on your target DB instance](#oracle-migrating-tts.restore-full).

### Step 3: (Amazon S3 only) Transfer the backup and export files to your target DB instance
<a name="oracle-migrating-tts.transport"></a>

If you are using Amazon S3 to stage your tablespace backups and Data Pump export file, complete the following steps.

#### Step 3.1: Upload the backups and dump file from your source host to your Amazon S3 bucket
<a name="oracle-migrating-tts.transport.upload-dmp"></a>

Upload your backup and dump files from your source host to your Amazon S3 bucket. For more information, see [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html) in the *Amazon Simple Storage Service User Guide*.

#### Step 3.2: Download the backups and dump file from your Amazon S3 bucket to your target DB instance
<a name="oracle-migrating-tts.transport.download-dmp"></a>

In this step, you use the procedure `rdsadmin.rdsadmin_s3_tasks.download_from_s3` to download your backups and dump file to your RDS for Oracle DB instance. Follow the steps in [Step 2.3: Download the backups from your Amazon S3 bucket to your target DB instance](#oracle-migrating-tts.download-full).

### Step 4: Import the tablespaces on your target DB instance
<a name="oracle-migrating-tts.restore-full"></a>

Use the procedure `rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces` to restore the tablespaces. For syntax and semantics of this procedure, see [Importing transported tablespaces to your DB instance](rdsadmin_transport_util_import_xtts_tablespaces.md).

**Important**  
After you complete your final tablespace import, the next step is [importing the Oracle Data Pump metadata](#oracle-migrating-tts.transport.import-dmp). If the import fails, it's important to return your DB instance to its state before the failure. Thus, we recommend that you create a DB snapshot of your DB instance by following the instructions in [Creating a DB snapshot for a Single-AZ DB instance for Amazon RDS](USER_CreateSnapshot.md). The snapshot will contain all imported tablespaces, so if the import fails, you don't need to repeat the backup and import process.  
If your target DB instance has automatic backups turned on, and Amazon RDS doesn't detect that a valid snapshot was initiated before you import the metadata, RDS attempts to create a snapshot. Depending on your instance activity, this snapshot might or might not succeed. If a valid snapshot isn't detected or a snapshot can't be initiated, the metadata import exits with errors.

**Import the tablespaces on your target DB instance**

1. Start an Oracle SQL client and log in to your target RDS for Oracle DB instance as the master user.

1. Run the procedure `rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces`, specifying the tablespaces to import and the directory containing the backups.

   The following example imports the tablespaces *TBS1*, *TBS2*, and *TBS3* from the directory *DATA\_PUMP\_DIR*.

   ```
   VAR task_id CLOB
   
   BEGIN
     :task_id:=rdsadmin.rdsadmin_transport_util.import_xtts_tablespaces('TBS1,TBS2,TBS3','DATA_PUMP_DIR');
   END;
   /
   
   PRINT task_id
   ```

1. (Optional) Monitor progress by querying the table `rdsadmin.rds_xtts_operation_info`. The `xtts_operation_state` column shows the value `EXECUTING`, `COMPLETED`, or `FAILED`.

   ```
   SELECT * FROM rdsadmin.rds_xtts_operation_info;
   ```
**Note**  
For long-running operations, you can also query `V$SESSION_LONGOPS`, `V$RMAN_STATUS`, and `V$RMAN_OUTPUT`.

1. View the log of the completed import by using the task ID from the previous step.

   ```
   SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-'||'&task_id'||'.log'));
   ```

   Make sure that the import succeeded before continuing to the next step.

1. Take a manual DB snapshot by following the instructions in [Creating a DB snapshot for a Single-AZ DB instance for Amazon RDS](USER_CreateSnapshot.md).

### Step 5: Import tablespace metadata on your target DB instance
<a name="oracle-migrating-tts.transport.import-dmp"></a>

In this step, you import the transportable tablespace metadata into your RDS for Oracle DB instance using the procedure `rdsadmin.rdsadmin_transport_util.import_xtts_metadata`. For syntax and semantics of this procedure, see [Importing transportable tablespace metadata into your DB instance](rdsadmin_transport_util_import_xtts_metadata.md). During the operation, the status of the import is shown in the table `rdsadmin.rds_xtts_operation_info`.

**Important**  
Before you import metadata, we strongly recommend that you confirm that a DB snapshot was successfully created after you imported your tablespaces. If the import step fails, restore your DB instance, address the import errors, and then attempt the import again.
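For example, if the metadata import fails, you might restore the snapshot that you took after the tablespace import by using the AWS CLI. The identifiers below are hypothetical:

```
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydbinstance-restored \
    --db-snapshot-identifier mydbinstance-after-tablespace-import
```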

**Import the Data Pump metadata into your RDS for Oracle DB instance**

1. Start your Oracle SQL client and log in to your target DB instance as the master user.

1. Create the users that own schemas in your transported tablespaces, if these users don't already exist.

   ```
   CREATE USER tbs_owner IDENTIFIED BY password;
   ```

1. Import the metadata, specifying the name of the dump file and its directory location.

   ```
   BEGIN
     rdsadmin.rdsadmin_transport_util.import_xtts_metadata('xttdump.dmp','DATA_PUMP_DIR');
   END;
   /
   ```

1. (Optional) Query the transportable tablespace history table to see the status of the metadata import.

   ```
   SELECT * FROM rdsadmin.rds_xtts_operation_info;
   ```

   When the operation completes, your tablespaces are in read-only mode.

1. (Optional) View the log file.

   The following example lists the contents of the BDUMP directory and then queries the import log.

   ```
   SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'BDUMP'));
   
   SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file(
     p_directory => 'BDUMP',
     p_filename => 'rds-xtts-import_xtts_metadata-2023-05-22.01-52-35.560858000.log'));
   ```

## Phase 5: Validate the transported tablespaces
<a name="oracle-migrating-tts.validate"></a>

In this optional step, you validate your transported tablespaces using the procedure `rdsadmin.rdsadmin_rman_util.validate_tablespace`, and then place your tablespaces in read/write mode.

**To validate the transported data**

1. Start SQL\*Plus or SQL Developer and log in to your target DB instance as the master user.

1. Validate the tablespaces using the procedure `rdsadmin.rdsadmin_rman_util.validate_tablespace`.

   ```
   SET SERVEROUTPUT ON
   BEGIN
       rdsadmin.rdsadmin_rman_util.validate_tablespace(
           p_tablespace_name     => 'TBS1',
           p_validation_type     => 'PHYSICAL+LOGICAL',
           p_rman_to_dbms_output => TRUE);
       rdsadmin.rdsadmin_rman_util.validate_tablespace(
           p_tablespace_name     => 'TBS2',
           p_validation_type     => 'PHYSICAL+LOGICAL',
           p_rman_to_dbms_output => TRUE);
       rdsadmin.rdsadmin_rman_util.validate_tablespace(
           p_tablespace_name     => 'TBS3',
           p_validation_type     => 'PHYSICAL+LOGICAL',
           p_rman_to_dbms_output => TRUE);
   END;
   /
   ```

1. Place your tablespaces in read/write mode.

   ```
   ALTER TABLESPACE TBS1 READ WRITE;
   ALTER TABLESPACE TBS2 READ WRITE;
   ALTER TABLESPACE TBS3 READ WRITE;
   ```
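
   To confirm the change, you can query `DBA_TABLESPACES`. After the `ALTER TABLESPACE` statements succeed, each tablespace reports a status of `ONLINE`. The tablespace names are illustrative and match the preceding examples.

   ```
   SELECT TABLESPACE_NAME, STATUS
   FROM   DBA_TABLESPACES
   WHERE  TABLESPACE_NAME IN ('TBS1','TBS2','TBS3');
   ```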

## Phase 6: Clean up leftover files
<a name="oracle-migrating-tts.cleanup"></a>

In this optional step, you remove any unneeded files. Use the `rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files` procedure to list data files that were orphaned after a tablespace import, and then use the `rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import` procedure to delete them. For the syntax and semantics of these procedures, see [Listing orphaned files after a tablespace import](rdsadmin_transport_util_list_xtts_orphan_files.md) and [Deleting orphaned data files after a tablespace import](rdsadmin_transport_util_cleanup_incomplete_xtts_import.md).

**To clean up leftover files**

1. Remove old backups in `DATA_PUMP_DIR` as follows:

   1. List the backup files by running `rdsadmin.rds_file_util.listdir`.

      ```
      SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'));
      ```

   1. Remove the backups one by one by calling `UTL_FILE.FREMOVE`.

      ```
      EXEC UTL_FILE.FREMOVE ('DATA_PUMP_DIR', 'backup_filename');
      ```

1. If you imported tablespaces but didn't import metadata for these tablespaces, you can delete the orphaned data files as follows:

   1. List the orphaned data files that you need to delete. The following example runs the procedure `rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files`.

      ```
      SQL> SELECT * FROM TABLE(rdsadmin.rdsadmin_transport_util.list_xtts_orphan_files);
      
      FILENAME       FILESIZE
      -------------- ---------
      datafile_7.dbf 104865792
      datafile_8.dbf 104865792
      ```

   1. Delete the orphaned files by running the procedure `rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import`.

      ```
      BEGIN
        rdsadmin.rdsadmin_transport_util.cleanup_incomplete_xtts_import('DATA_PUMP_DIR');
      END;
      /
      ```

      The cleanup operation generates a log file that uses the name format `rds-xtts-delete_xtts_orphaned_files-YYYY-MM-DD.HH24-MI-SS.FF.log` in the `BDUMP` directory.

   1. Read the log file generated in the previous step. The following example reads log `rds-xtts-delete_xtts_orphaned_files-2023-06-01.09-33-11.868894000.log`.

      ```
      SELECT * 
      FROM TABLE(rdsadmin.rds_file_util.read_text_file(
             p_directory => 'BDUMP',
             p_filename  => 'rds-xtts-delete_xtts_orphaned_files-2023-06-01.09-33-11.868894000.log'));
      
      TEXT
      --------------------------------------------------------------------------------
      orphan transported datafile datafile_7.dbf deleted.
      orphan transported datafile datafile_8.dbf deleted.
      ```

1. If you imported tablespaces and imported metadata for these tablespaces, but you encountered compatibility errors or other Oracle Data Pump issues, clean up the partially transported data files as follows:

   1. List the tablespaces that contain partially transported data files by querying `DBA_TABLESPACES`.

      ```
      SQL> SELECT TABLESPACE_NAME FROM DBA_TABLESPACES WHERE PLUGGED_IN='YES';
      
      TABLESPACE_NAME
      --------------------------------------------------------------------------------
      TBS_3
      ```

   1. Drop the tablespaces and the partially transported data files.

      ```
      DROP TABLESPACE TBS_3 INCLUDING CONTENTS AND DATAFILES;
      ```

# Importing using Oracle Data Pump
<a name="Oracle.Procedural.Importing.DataPump"></a>

Oracle Data Pump is a utility that allows you to export Oracle data to a dump file and import it into another Oracle database. It is a long-term replacement for the Oracle Export/Import utilities. Oracle Data Pump is the recommended way to move large amounts of data from an Oracle database to an Amazon RDS DB instance.

The examples in this section show one way to import data into an Oracle database, but Oracle Data Pump supports other techniques. For more information, see the [Oracle Database documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump.html#GUID-501A9908-BCC5-434C-8853-9A6096766B5A).

The examples in this section use the `DBMS_DATAPUMP` package. You can accomplish the same tasks using the Oracle Data Pump command line utilities `impdp` and `expdp`. You can install these utilities on a remote host as part of an Oracle Client installation, including Oracle Instant Client. For more information, see [How do I use Oracle Instant Client to run Data Pump Import or Export for my Amazon RDS for Oracle DB instance?](https://aws.amazon.com/premiumsupport/knowledge-center/rds-oracle-instant-client-datapump/)
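
For example, assuming you have network connectivity and an EZConnect string for your source database, a schema-mode export with `expdp` might look like the following sketch. The host, service name, and schema name are placeholders for your own values.

```
expdp admin@//myhost.example.com:1521/ORCL \
  DIRECTORY=DATA_PUMP_DIR DUMPFILE=sample.dmp \
  LOGFILE=sample_exp.log SCHEMAS=SCHEMA_1
```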

**Topics**
+ [Overview of Oracle Data Pump](#Oracle.Procedural.Importing.DataPump.Overview)
+ [Importing data with Oracle Data Pump and an Amazon S3 bucket](#Oracle.Procedural.Importing.DataPump.S3)
+ [Importing data with Oracle Data Pump and a database link](#Oracle.Procedural.Importing.DataPump.DBLink)

## Overview of Oracle Data Pump
<a name="Oracle.Procedural.Importing.DataPump.Overview"></a>

Oracle Data Pump is made up of the following components:
+ Command-line clients `expdp` and `impdp`
+ The `DBMS_DATAPUMP` PL/SQL package
+ The `DBMS_METADATA` PL/SQL package

You can use Oracle Data Pump for the following scenarios:
+ Import data from an Oracle database, either on-premises or on an Amazon EC2 instance, to an RDS for Oracle DB instance.
+ Import data from an RDS for Oracle DB instance to an Oracle database, either on-premises or on an Amazon EC2 instance.
+ Import data between RDS for Oracle DB instances, for example, to migrate data from EC2-Classic to VPC.

To download Oracle Data Pump utilities, see [Oracle database software downloads](http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html) on the Oracle Technology Network website. For compatibility considerations when migrating between versions of Oracle Database, see the [ Oracle Database documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump-overview.html#GUID-BAA3B679-A758-4D55-9820-432D9EB83C68).

### Oracle Data Pump workflow
<a name="Oracle.Procedural.Importing.DataPump.Overview.how-it-works"></a>

Typically, you use Oracle Data Pump in the following stages:

1. Export your data into a dump file on the source database.

1. Upload your dump file to your destination RDS for Oracle DB instance. You can transfer the dump file using an Amazon S3 bucket or a database link between the two databases.

1. Import the data from your dump file into your RDS for Oracle DB instance.

### Oracle Data Pump best practices
<a name="Oracle.Procedural.Importing.DataPump.Overview.best-practices"></a>

When you use Oracle Data Pump to import data into an RDS for Oracle instance, we recommend the following best practices:
+ Perform imports in `schema` or `table` mode to import specific schemas and objects.
+ Limit the schemas you import to those required by your application.
+ Don't import in `full` mode or import schemas for system-maintained components.

  Because RDS for Oracle doesn't allow access to `SYS` or `SYSDBA` administrative users, these actions might damage the Oracle data dictionary and affect the stability of your database.
+ When loading large amounts of data, do the following:

  1. Transfer the dump file to the target RDS for Oracle DB instance.

  1. Take a DB snapshot of your instance.

  1. Test the import to verify that it succeeds.

  If database components are invalidated, you can delete the DB instance and re-create it from the DB snapshot. The restored DB instance includes any dump files staged on the DB instance when you took the DB snapshot.
+ Don't import dump files that were created using the Oracle Data Pump export parameters `TRANSPORT_TABLESPACES`, `TRANSPORTABLE`, or `TRANSPORT_FULL_CHECK`. RDS for Oracle DB instances don't support importing these dump files.
+ Don't import dump files that contain Oracle Scheduler objects in `SYS`, `SYSTEM`, `RDSADMIN`, `RDSSEC`, and `RDS_DATAGUARD`, and belong to the following categories:
  + Jobs
  + Programs
  + Schedules
  + Chains
  + Rules
  + Evaluation contexts
  + Rule sets

  RDS for Oracle DB instances don't support importing these dump files. 
+ To exclude unsupported Oracle Scheduler objects, use additional directives during the Data Pump export. If you use `DBMS_DATAPUMP`, you can add an additional `METADATA_FILTER` before the `DBMS_METADATA.START_JOB`:

  ```
  DBMS_DATAPUMP.METADATA_FILTER(
    v_hdnl,
    'EXCLUDE_NAME_EXPR',
    q'[IN (SELECT NAME FROM SYS.OBJ$ 
           WHERE TYPE# IN (66,67,74,79,59,62,46) 
           AND OWNER# IN
             (SELECT USER# FROM SYS.USER$ 
              WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
              )
          )
    ]',
    'PROCOBJ'
  );
  ```

  If you use `expdp`, create a parameter file that contains the `exclude` directive shown in the following example. Then use `PARFILE=parameter_file` with your `expdp` command.

  ```
  exclude=procobj:"IN 
    (SELECT NAME FROM sys.OBJ$
     WHERE TYPE# IN (66,67,74,79,59,62,46) 
     AND OWNER# IN 
       (SELECT USER# FROM SYS.USER$ 
        WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
       )
    )"
  ```

## Importing data with Oracle Data Pump and an Amazon S3 bucket
<a name="Oracle.Procedural.Importing.DataPump.S3"></a>

The following import process uses Oracle Data Pump and an Amazon S3 bucket. The steps are as follows:

1. Export data on the source database using the Oracle [DBMS\_DATAPUMP](https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_DATAPUMP.html) package.

1. Place the dump file in an Amazon S3 bucket.

1. Download the dump file from the Amazon S3 bucket to the `DATA_PUMP_DIR` directory on the target RDS for Oracle DB instance. 

1. Import the data from the copied dump file into the RDS for Oracle DB instance using the package `DBMS_DATAPUMP`.

**Topics**
+ [Requirements for importing data with Oracle Data Pump and an Amazon S3 bucket](#Oracle.Procedural.Importing.DataPumpS3.requirements)
+ [Step 1: Grant privileges to the database user on the RDS for Oracle target DB instance](#Oracle.Procedural.Importing.DataPumpS3.Step1)
+ [Step 2: Export data into a dump file using DBMS\_DATAPUMP](#Oracle.Procedural.Importing.DataPumpS3.Step2)
+ [Step 3: Upload the dump file to your Amazon S3 bucket](#Oracle.Procedural.Importing.DataPumpS3.Step3)
+ [Step 4: Download the dump file from your Amazon S3 bucket to your target DB instance](#Oracle.Procedural.Importing.DataPumpS3.Step4)
+ [Step 5: Import your dump file into your target DB instance using DBMS\_DATAPUMP](#Oracle.Procedural.Importing.DataPumpS3.Step5)
+ [Step 6: Clean up](#Oracle.Procedural.Importing.DataPumpS3.Step6)

### Requirements for importing data with Oracle Data Pump and an Amazon S3 bucket
<a name="Oracle.Procedural.Importing.DataPumpS3.requirements"></a>

The process has the following requirements:
+ Make sure that an Amazon S3 bucket is available for file transfers, and that the Amazon S3 bucket is in the same AWS Region as the DB instance. For instructions, see [Create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service Getting Started Guide*.
+ The object that you upload into the Amazon S3 bucket must be 5 TB or less. For more information about working with objects in Amazon S3, see [Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html).
**Note**  
If your dump file exceeds 5 TB, you can run the Oracle Data Pump export with the parallel option. This operation spreads the data into multiple dump files so that you do not exceed the 5 TB limit for individual files.
+ You must prepare the Amazon S3 bucket for Amazon RDS integration by following the instructions in [Configuring IAM permissions for RDS for Oracle integration with Amazon S3](oracle-s3-integration.preparing.md).
+ You must ensure that you have enough storage space to store the dump file on the source instance and the target DB instance.
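
The parallel export described in the preceding note (for dump files over 5 TB) can be sketched with `DBMS_DATAPUMP` as follows. The schema name, file name template, and degree of parallelism are placeholders for your own values; the `%U` substitution variable numbers the generated dump files.

```
DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => null);
  -- %U expands to a two-digit sequence: sample_01.dmp, sample_02.dmp, ...
  DBMS_DATAPUMP.ADD_FILE(
    handle    => v_hdnl,
    filename  => 'sample_%U.dmp',
    directory => 'DATA_PUMP_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
  DBMS_DATAPUMP.SET_PARALLEL(v_hdnl, 4);
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
```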

**Note**  
This process imports a dump file into the `DATA_PUMP_DIR` directory, a preconfigured directory on all Oracle DB instances. This directory is located on the same storage volume as your data files. When you import the dump file, the existing Oracle data files use more space. Thus, you should make sure that your DB instance can accommodate that additional use of space. The imported dump file is not automatically deleted or purged from the `DATA_PUMP_DIR` directory. To remove the imported dump file, use [UTL\_FILE.FREMOVE](https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/UTL_FILE.html#GUID-09B09C2A-2C21-4F70-BF04-D0EEA7B59CAF), found on the Oracle website. 

### Step 1: Grant privileges to the database user on the RDS for Oracle target DB instance
<a name="Oracle.Procedural.Importing.DataPumpS3.Step1"></a>

In this step, you create the schemas into which you plan to import data and grant the users necessary privileges.

**To create users and grant necessary privileges on the RDS for Oracle target instance**

1. Use SQL\*Plus or Oracle SQL Developer to log in as the master user to the RDS for Oracle DB instance into which the data will be imported. For information about connecting to a DB instance, see [Connecting to your Oracle DB instance](USER_ConnectToOracleInstance.md).

1. Create the required tablespaces before you import the data. For more information, see [Creating and sizing tablespaces in RDS for Oracle](Appendix.Oracle.CommonDBATasks.TablespacesAndDatafiles.md#Appendix.Oracle.CommonDBATasks.CreatingTablespacesAndDatafiles).

1. Create the user account and grant the necessary permissions and roles if the user account into which the data is imported doesn't exist. If you plan to import data into multiple user schemas, create each user account and grant the necessary privileges and roles to it.

   For example, the following SQL statements create a new user and grant the necessary permissions and roles to import the data into the schema owned by this user. Replace `schema_1` with the name of your schema in this step and in the following steps.

   ```
   CREATE USER schema_1 IDENTIFIED BY my_password;
   GRANT CREATE SESSION, RESOURCE TO schema_1;
   ALTER USER schema_1 QUOTA 100M ON users;
   ```
**Note**  
As a security best practice, specify a password other than the one shown here.

   The preceding statements grant the new user the `CREATE SESSION` privilege and the `RESOURCE` role. You might need additional privileges and roles depending on the database objects that you import.

### Step 2: Export data into a dump file using DBMS\_DATAPUMP
<a name="Oracle.Procedural.Importing.DataPumpS3.Step2"></a>

To create a dump file, use the `DBMS_DATAPUMP` package.

**To export Oracle data into a dump file**

1. Use SQL\*Plus or Oracle SQL Developer to connect to the source Oracle DB instance with an administrative user. If the source database is an RDS for Oracle DB instance, connect with the Amazon RDS master user.

1. Export the data by calling `DBMS_DATAPUMP` procedures.

   The following script exports the `SCHEMA_1` schema into a dump file named `sample.dmp` in the `DATA_PUMP_DIR` directory. Replace `SCHEMA_1` with the name of the schema that you want to export.

   ```
   DECLARE
     v_hdnl NUMBER;
   BEGIN
     v_hdnl := DBMS_DATAPUMP.OPEN(
       operation => 'EXPORT', 
       job_mode  => 'SCHEMA', 
       job_name  => null
     );
     DBMS_DATAPUMP.ADD_FILE( 
       handle    => v_hdnl         , 
       filename  => 'sample.dmp'   , 
       directory => 'DATA_PUMP_DIR', 
       filetype  => dbms_datapump.ku$_file_type_dump_file
     );
     DBMS_DATAPUMP.ADD_FILE( 
       handle    => v_hdnl, 
       filename  => 'sample_exp.log', 
       directory => 'DATA_PUMP_DIR' , 
       filetype  => dbms_datapump.ku$_file_type_log_file
     );
     DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
     DBMS_DATAPUMP.METADATA_FILTER(
       v_hdnl,
       'EXCLUDE_NAME_EXPR',
       q'[IN (SELECT NAME FROM SYS.OBJ$ 
              WHERE TYPE# IN (66,67,74,79,59,62,46) 
              AND OWNER# IN 
                (SELECT USER# FROM SYS.USER$ 
                 WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
                )
             )
       ]',
       'PROCOBJ'
     );
     DBMS_DATAPUMP.START_JOB(v_hdnl);
   END;
   /
   ```
**Note**  
Data Pump starts jobs asynchronously. For information about monitoring a Data Pump job, see [ Monitoring job status](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump-overview.html#GUID-E365D74E-12CD-495C-BA23-5A55F679C7E7) in the Oracle documentation. 

1. (Optional) View the contents of the export log by calling the `rdsadmin.rds_file_util.read_text_file` procedure. For more information, see [Reading files in a DB instance directory](Appendix.Oracle.CommonDBATasks.Misc.md#Appendix.Oracle.CommonDBATasks.ReadingFiles).

### Step 3: Upload the dump file to your Amazon S3 bucket
<a name="Oracle.Procedural.Importing.DataPumpS3.Step3"></a>

Use the Amazon RDS procedure `rdsadmin.rdsadmin_s3_tasks.upload_to_s3` to copy the dump file to the Amazon S3 bucket. The following example uploads all of the files from the `DATA_PUMP_DIR` directory to an Amazon S3 bucket named `amzn-s3-demo-bucket`.

```
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
  p_bucket_name    =>  'amzn-s3-demo-bucket',       
  p_directory_name =>  'DATA_PUMP_DIR') 
AS TASK_ID FROM DUAL;
```

The `SELECT` statement returns the ID of the task in a `VARCHAR2` data type. For more information, see [Uploading files from your RDS for Oracle DB instance to an Amazon S3 bucket](oracle-s3-integration.using.md#oracle-s3-integration.using.upload).
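
The task ID is also part of the name of the log file that records the result of the upload. For example, if the returned task ID were `1234567890123-1234` (a placeholder), the following query would read the corresponding task log from the `BDUMP` directory.

```
SELECT text 
FROM TABLE(rdsadmin.rds_file_util.read_text_file(
       p_directory => 'BDUMP',
       p_filename  => 'dbtask-1234567890123-1234.log'));
```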

### Step 4: Download the dump file from your Amazon S3 bucket to your target DB instance
<a name="Oracle.Procedural.Importing.DataPumpS3.Step4"></a>

Perform this step using the Amazon RDS procedure `rdsadmin.rdsadmin_s3_tasks.download_from_s3`. When you download a file to a directory, the procedure `download_from_s3` skips the download if an identically named file already exists in the directory. To remove a file from the download directory, use [UTL\_FILE.FREMOVE](https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/UTL_FILE.html#GUID-09B09C2A-2C21-4F70-BF04-D0EEA7B59CAF), found on the Oracle website.

**To download your dump file**

1. Start SQL\*Plus or Oracle SQL Developer and log in as the master user on your target RDS for Oracle DB instance.

1. Download the dump file using the Amazon RDS procedure `rdsadmin.rdsadmin_s3_tasks.download_from_s3`.

   The following example downloads all files from an Amazon S3 bucket named `amzn-s3-demo-bucket` to the directory `DATA_PUMP_DIR`.

   ```
   SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
     p_bucket_name    =>  'amzn-s3-demo-bucket',
     p_directory_name =>  'DATA_PUMP_DIR')
   AS TASK_ID FROM DUAL;
   ```

   The `SELECT` statement returns the ID of the task in a `VARCHAR2` data type. For more information, see [Downloading files from an Amazon S3 bucket to an Oracle DB instance](oracle-s3-integration.using.md#oracle-s3-integration.using.download).

### Step 5: Import your dump file into your target DB instance using DBMS\_DATAPUMP
<a name="Oracle.Procedural.Importing.DataPumpS3.Step5"></a>

Use `DBMS_DATAPUMP` to import the schema into your RDS for Oracle DB instance. Additional options such as `METADATA_REMAP` might be required.

**To import data into your target DB instance**

1. Start SQL\*Plus or SQL Developer and log in as the master user to your RDS for Oracle DB instance.

1. Import the data by calling `DBMS_DATAPUMP` procedures.

   The following example imports the `SCHEMA_1` data from `sample_copied.dmp` into your target DB instance.

   ```
   DECLARE
     v_hdnl NUMBER;
   BEGIN
     v_hdnl := DBMS_DATAPUMP.OPEN( 
       operation => 'IMPORT', 
       job_mode  => 'SCHEMA', 
       job_name  => null);
     DBMS_DATAPUMP.ADD_FILE( 
       handle    => v_hdnl, 
       filename  => 'sample_copied.dmp', 
       directory => 'DATA_PUMP_DIR', 
       filetype  => dbms_datapump.ku$_file_type_dump_file);
     DBMS_DATAPUMP.ADD_FILE( 
       handle    => v_hdnl, 
       filename  => 'sample_imp.log', 
       directory => 'DATA_PUMP_DIR', 
       filetype  => dbms_datapump.ku$_file_type_log_file);
     DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
     DBMS_DATAPUMP.START_JOB(v_hdnl);
   END;
   /
   ```
**Note**  
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump job, see [ Monitoring job status](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump-overview.html#GUID-E365D74E-12CD-495C-BA23-5A55F679C7E7) in the Oracle documentation. You can view the contents of the import log by using the `rdsadmin.rds_file_util.read_text_file` procedure. For more information, see [Reading files in a DB instance directory](Appendix.Oracle.CommonDBATasks.Misc.md#Appendix.Oracle.CommonDBATasks.ReadingFiles).

1. Verify the data import by listing the schema tables on your target DB instance.

   For example, the following query returns the number of tables for `SCHEMA_1`. 

   ```
   SELECT COUNT(*) FROM DBA_TABLES WHERE OWNER='SCHEMA_1';
   ```
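
   You might also check for objects that became invalid during the import. The following query is a sketch; the schema name is illustrative.

   ```
   SELECT OBJECT_NAME, OBJECT_TYPE
   FROM   DBA_OBJECTS
   WHERE  OWNER = 'SCHEMA_1'
   AND    STATUS = 'INVALID';
   ```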

### Step 6: Clean up
<a name="Oracle.Procedural.Importing.DataPumpS3.Step6"></a>

After the data has been imported, you can delete the files that you don't want to keep.

**To remove unneeded files**

1. Start SQL\*Plus or SQL Developer and log in as the master user to your RDS for Oracle DB instance.

1. List the files in `DATA_PUMP_DIR` using the following command.

   ```
   SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) ORDER BY MTIME;
   ```

1. To delete files in `DATA_PUMP_DIR` that you no longer require, use the following command.

   ```
   EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','filename');
   ```

   For example, the following command deletes the file named `sample_copied.dmp`.

   ```
   EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','sample_copied.dmp'); 
   ```

## Importing data with Oracle Data Pump and a database link
<a name="Oracle.Procedural.Importing.DataPump.DBLink"></a>

The following import process uses Oracle Data Pump and the Oracle [DBMS\_FILE\_TRANSFER](https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_FILE_TRANSFER.html) package. The steps are as follows:

1. Connect to a source Oracle database, which can be an on-premises database, an Amazon EC2 instance, or an RDS for Oracle DB instance. 

1. Export data using the [DBMS\_DATAPUMP](https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_DATAPUMP.html) package.

1. Use `DBMS_FILE_TRANSFER.PUT_FILE` to copy the dump file from the Oracle database to the `DATA_PUMP_DIR` directory on the target RDS for Oracle DB instance that is connected using a database link. 

1. Import the data from the copied dump file into the RDS for Oracle DB instance using the `DBMS_DATAPUMP` package.

**Topics**
+ [Requirements for importing data with Oracle Data Pump and a database link](#Oracle.Procedural.Importing.DataPumpDBLink.requirements)
+ [Step 1: Grant privileges to the user on the RDS for Oracle target DB instance](#Oracle.Procedural.Importing.DataPumpDBLink.Step1)
+ [Step 2: Grant privileges to the user on the source database](#Oracle.Procedural.Importing.DataPumpDBLink.Step2)
+ [Step 3: Create a dump file using DBMS\_DATAPUMP](#Oracle.Procedural.Importing.DataPumpDBLink.Step3)
+ [Step 4: Create a database link to the target DB instance](#Oracle.Procedural.Importing.DataPumpDBLink.Step4)
+ [Step 5: Copy the exported dump file to the target DB instance using DBMS\_FILE\_TRANSFER](#Oracle.Procedural.Importing.DataPumpDBLink.Step5)
+ [Step 6: Import the data file to the target DB instance using DBMS\_DATAPUMP](#Oracle.Procedural.Importing.DataPumpDBLink.Step6)
+ [Step 7: Clean up](#Oracle.Procedural.Importing.DataPumpDBLink.Step7)

### Requirements for importing data with Oracle Data Pump and a database link
<a name="Oracle.Procedural.Importing.DataPumpDBLink.requirements"></a>

The process has the following requirements:
+ You must have execute privileges on the `DBMS_FILE_TRANSFER` and `DBMS_DATAPUMP` packages.
+ You must have write privileges to the `DATA_PUMP_DIR` directory on the source DB instance.
+ You must ensure that you have enough storage space to store the dump file on the source instance and the target DB instance.

**Note**  
This process imports a dump file into the `DATA_PUMP_DIR` directory, a preconfigured directory on all Oracle DB instances. This directory is located on the same storage volume as your data files. When you import the dump file, the existing Oracle data files use more space. Thus, you should make sure that your DB instance can accommodate that additional use of space. The imported dump file is not automatically deleted or purged from the `DATA_PUMP_DIR` directory. To remove the imported dump file, use [UTL\_FILE.FREMOVE](https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/UTL_FILE.html#GUID-09B09C2A-2C21-4F70-BF04-D0EEA7B59CAF), found on the Oracle website. 

### Step 1: Grant privileges to the user on the RDS for Oracle target DB instance
<a name="Oracle.Procedural.Importing.DataPumpDBLink.Step1"></a>

To grant privileges to the user on the RDS for Oracle target DB instance, take the following steps:

1. Use SQL\*Plus or Oracle SQL Developer to connect to the RDS for Oracle DB instance into which you intend to import the data. Connect as the Amazon RDS master user. For information about connecting to the DB instance, see [Connecting to your Oracle DB instance](USER_ConnectToOracleInstance.md).

1. Create the required tablespaces before you import the data. For more information, see [Creating and sizing tablespaces in RDS for Oracle](Appendix.Oracle.CommonDBATasks.TablespacesAndDatafiles.md#Appendix.Oracle.CommonDBATasks.CreatingTablespacesAndDatafiles).

1. If the user account into which the data is imported doesn't exist, create the user account and grant the necessary permissions and roles. If you plan to import data into multiple user schemas, create each user account and grant the necessary privileges and roles to it.

   For example, the following commands create a new user named `schema_1` and grant the necessary permissions and roles to import the data into the schema for this user.

   ```
   CREATE USER schema_1 IDENTIFIED BY my_password;
   GRANT CREATE SESSION, RESOURCE TO schema_1;
   ALTER USER schema_1 QUOTA 100M ON users;
   ```
**Note**  
As a security best practice, specify a password other than the one shown here.

   The preceding example grants the new user the `CREATE SESSION` privilege and the `RESOURCE` role. Additional privileges and roles might be required depending on the database objects that you import. 
**Note**  
Replace `schema_1` with the name of your schema in this step and in the following steps.

### Step 2: Grant privileges to the user on the source database
<a name="Oracle.Procedural.Importing.DataPumpDBLink.Step2"></a>

Use SQL\*Plus or Oracle SQL Developer to connect to the source Oracle instance that contains the data to be imported. If necessary, create a user account and grant the necessary permissions. 

**Note**  
If the source database is an Amazon RDS instance, you can skip this step. You use your Amazon RDS master user account to perform the export.

The following commands create a new user and grant the necessary permissions.

```
CREATE USER export_user IDENTIFIED BY my_password;
GRANT CREATE SESSION, CREATE TABLE, CREATE DATABASE LINK TO export_user;
ALTER USER export_user QUOTA 100M ON users;
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO export_user;
GRANT SELECT_CATALOG_ROLE TO export_user;
GRANT EXECUTE ON DBMS_DATAPUMP TO export_user;
GRANT EXECUTE ON DBMS_FILE_TRANSFER TO export_user;
```

**Note**  
As a security best practice, specify a password other than the one shown here.

### Step 3: Create a dump file using DBMS\_DATAPUMP
<a name="Oracle.Procedural.Importing.DataPumpDBLink.Step3"></a>

To create a dump file, do the following:

1. Use SQL\*Plus or Oracle SQL Developer to connect to the source Oracle instance with an administrative user or with the user you created in step 2. If the source database is an Amazon RDS for Oracle DB instance, connect with the Amazon RDS master user.

1. Create a dump file using the Oracle Data Pump utility.

   The following script creates a dump file named *sample.dmp* in the `DATA_PUMP_DIR` directory. 

   ```
   DECLARE
     v_hdnl NUMBER;
   BEGIN
     v_hdnl := DBMS_DATAPUMP.OPEN( 
       operation => 'EXPORT' , 
       job_mode  => 'SCHEMA' , 
       job_name  => null
     );
     DBMS_DATAPUMP.ADD_FILE( 
       handle    => v_hdnl, 
       filename  => 'sample.dmp'    , 
       directory => 'DATA_PUMP_DIR' , 
       filetype  => dbms_datapump.ku$_file_type_dump_file
     );
     DBMS_DATAPUMP.ADD_FILE( 
       handle    => v_hdnl           , 
       filename  => 'sample_exp.log' , 
       directory => 'DATA_PUMP_DIR'  , 
       filetype  => dbms_datapump.ku$_file_type_log_file
     );
     DBMS_DATAPUMP.METADATA_FILTER(
       v_hdnl              ,
       'SCHEMA_EXPR'       ,
       'IN (''SCHEMA_1'')'
     );
     DBMS_DATAPUMP.METADATA_FILTER(
       v_hdnl,
       'EXCLUDE_NAME_EXPR',
       q'[IN (SELECT NAME FROM sys.OBJ$ 
              WHERE TYPE# IN (66,67,74,79,59,62,46) 
              AND OWNER# IN 
                (SELECT USER# FROM SYS.USER$ 
                 WHERE NAME IN ('RDSADMIN','SYS','SYSTEM','RDS_DATAGUARD','RDSSEC')
                )
             )
       ]',
       'PROCOBJ'
     );
     DBMS_DATAPUMP.START_JOB(v_hdnl);
   END;
   /
   ```
**Note**  
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump job, see [ Monitoring job status](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump-overview.html#GUID-E365D74E-12CD-495C-BA23-5A55F679C7E7) in the Oracle documentation. You can view the contents of the export log by using the `rdsadmin.rds_file_util.read_text_file` procedure. For more information, see [Reading files in a DB instance directory](Appendix.Oracle.CommonDBATasks.Misc.md#Appendix.Oracle.CommonDBATasks.ReadingFiles).

### Step 4: Create a database link to the target DB instance
<a name="Oracle.Procedural.Importing.DataPumpDBLink.Step4"></a>

Create a database link between your source DB instance and your target DB instance. Your local Oracle instance must have network connectivity to the DB instance in order to create a database link and to transfer your export dump file. 

Perform this step connected with the same user account as the previous step.

If you are creating a database link between two DB instances inside the same VPC or peered VPCs, the two DB instances should have a valid route between them. The security group of each DB instance must allow ingress to and egress from the other DB instance. The security group inbound and outbound rules can refer to security groups from the same VPC or a peered VPC. For more information, see [Adjusting database links for use with DB instances in a VPC](Appendix.Oracle.CommonDBATasks.DBLinks.md). 

The following command creates a database link named `to_rds` that connects to the Amazon RDS master user at the target DB instance. 

```
CREATE DATABASE LINK to_rds 
  CONNECT TO <master_user_account> IDENTIFIED BY <password>
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<dns or ip address of remote db>)
         (PORT=<listener port>))(CONNECT_DATA=(SID=<remote SID>)))';
```
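
After you create the link, you can run a quick query over it to confirm connectivity before transferring any files. This sketch assumes the link name `to_rds` from the preceding command.

```
-- Returns one row if the link can reach the target DB instance
SELECT * FROM DUAL@to_rds;
```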

### Step 5: Copy the exported dump file to the target DB instance using DBMS\_FILE\_TRANSFER
<a name="Oracle.Procedural.Importing.DataPumpDBLink.Step5"></a>

Use `DBMS_FILE_TRANSFER` to copy the dump file from the source DB instance to the target DB instance. The following script copies a dump file named `sample.dmp` from the source instance to the target instance over the database link `to_rds` (created in the previous step). 

```
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object       => 'DATA_PUMP_DIR',
    source_file_name              => 'sample.dmp',
    destination_directory_object  => 'DATA_PUMP_DIR',
    destination_file_name         => 'sample_copied.dmp', 
    destination_database          => 'to_rds' );
END;
/
```
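
After the copy completes, you can connect to the target DB instance and confirm that the file arrived. The file name below matches the `destination_file_name` used in the preceding script.

```
SELECT filename, type, filesize, mtime 
FROM   TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) 
WHERE  filename = 'sample_copied.dmp';
```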

### Step 6: Import the data file to the target DB instance using DBMS\_DATAPUMP
<a name="Oracle.Procedural.Importing.DataPumpDBLink.Step6"></a>

Use Oracle Data Pump to import the schema into the DB instance. Additional options such as `METADATA_REMAP` might be required. 

 Connect to the DB instance with the Amazon RDS master user account to perform the import. 

```
DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN( 
    operation => 'IMPORT', 
    job_mode  => 'SCHEMA', 
    job_name  => null);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'sample_copied.dmp',
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_dump_file );
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'sample_imp.log', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_log_file);
  DBMS_DATAPUMP.METADATA_FILTER(v_hdnl,'SCHEMA_EXPR','IN (''SCHEMA_1'')');
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
```
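
If the target schema name differs from the source, you can remap it during the import. The following sketch assumes a hypothetical target schema named `SCHEMA_2`; add the call before `START_JOB` in the preceding block.

```
-- Remap objects exported from SCHEMA_1 so that they are imported into SCHEMA_2
DBMS_DATAPUMP.METADATA_REMAP(
  handle    => v_hdnl,
  name      => 'REMAP_SCHEMA',
  old_value => 'SCHEMA_1',
  value     => 'SCHEMA_2');
```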

**Note**  
Data Pump jobs are started asynchronously. For information about monitoring a Data Pump job, see [Monitoring job status](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump-overview.html#GUID-E365D74E-12CD-495C-BA23-5A55F679C7E7) in the Oracle documentation. You can view the contents of the import log by using the `rdsadmin.rds_file_util.read_text_file` procedure. For more information, see [Reading files in a DB instance directory](Appendix.Oracle.CommonDBATasks.Misc.md#Appendix.Oracle.CommonDBATasks.ReadingFiles).

You can verify the data import by viewing the user's tables on the DB instance. For example, the following query returns the number of tables for `schema_1`. 

```
SELECT COUNT(*) FROM DBA_TABLES WHERE OWNER='SCHEMA_1'; 
```

### Step 7: Clean up
<a name="Oracle.Procedural.Importing.DataPumpDBLink.Step7"></a>

After the data has been imported, you can delete the files that you don't want to keep. You can list the files in `DATA_PUMP_DIR` using the following command.

```
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) ORDER BY MTIME;
```

To delete files in `DATA_PUMP_DIR` that you no longer require, use the following command. 

```
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','<file name>');
```

For example, the following command deletes the file named `sample_copied.dmp`. 

```
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','sample_copied.dmp'); 
```

# Importing using Oracle Export/Import
<a name="Oracle.Procedural.Importing.ExportImport"></a>

You might consider the Oracle Export/Import utilities for migrations under the following conditions:
+ Your data size is small.
+ Data types such as binary float and double aren't required.

The import process creates the necessary schema objects. Thus, you don't need to run a script to create the objects beforehand. 

The easiest way to install the Oracle export and import utilities is to install the Oracle Instant Client. To download the software, go to [https://www.oracle.com/database/technologies/instant-client.html](https://www.oracle.com/database/technologies/instant-client.html). For documentation, see [Instant Client for SQL\*Loader, Export, and Import](https://docs.oracle.com/en/database/oracle/oracle-database/21/sutil/instant-client-sql-loader-export-import.html#GUID-FF1B6F75-09F5-4911-9317-9776FAD15965) in the *Oracle Database Utilities* manual.

**To export tables and then import them**

1. Export the tables from the source database using the `exp` command.

   The following command exports the tables named `tab1`, `tab2`, and `tab3`. The dump file is `exp_file.dmp`.

   ```
   exp cust_dba@ORCL FILE=exp_file.dmp TABLES=(tab1,tab2,tab3) LOG=exp_file.log
   ```

   The export creates a binary dump file that contains both the schema and data for the specified tables. 

1. Import the schema and data into a target database using the `imp` command.

   The following command imports the tables `tab1`, `tab2`, and `tab3` from dump file `exp_file.dmp`.

   ```
   imp cust_dba@targetdb FROMUSER=cust_schema TOUSER=cust_schema \  
   TABLES=(tab1,tab2,tab3) FILE=exp_file.dmp LOG=imp_file.log
   ```

Export and Import have other variations that might be better suited to your requirements. See the Oracle Database documentation for full details.

# Importing using Oracle SQL\*Loader
<a name="Oracle.Procedural.Importing.SQLLoader"></a>

You might consider Oracle SQL\*Loader for large databases that contain a limited number of objects. Because the process of exporting from a source database and loading to a target database is specific to the schema, the following example creates the sample schema objects, exports from a source, and then loads the data into a target database. 

The easiest way to install Oracle SQL\*Loader is to install the Oracle Instant Client. To download the software, go to [https://www.oracle.com/database/technologies/instant-client.html](https://www.oracle.com/database/technologies/instant-client.html). For documentation, see [Instant Client for SQL\*Loader, Export, and Import](https://docs.oracle.com/en/database/oracle/oracle-database/21/sutil/instant-client-sql-loader-export-import.html#GUID-FF1B6F75-09F5-4911-9317-9776FAD15965) in the *Oracle Database Utilities* manual.

**To import data using Oracle SQL\*Loader**

1. Create a sample source table using the following SQL statement.

   ```
   CREATE TABLE customer_0 TABLESPACE users 
      AS (SELECT ROWNUM id, o.* 
          FROM   ALL_OBJECTS o, ALL_OBJECTS x 
          WHERE  ROWNUM <= 1000000);
   ```

1. On the target RDS for Oracle DB instance, create a destination table for loading the data. The clause `WHERE 1=2` ensures that you copy the structure of `ALL_OBJECTS`, but don't copy any rows.

   ```
   CREATE TABLE customer_1 TABLESPACE users 
     AS (SELECT 0 AS ID, OWNER, OBJECT_NAME, CREATED
         FROM   ALL_OBJECTS
         WHERE  1=2);
   ```

1. Export the data from the source database to a text file. The following example uses SQL\*Plus. For your own data, you will likely need to generate a script that performs the export for all of the objects in the database. 

   ```
   ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY/MM/DD HH24:MI:SS';
   
   SET LINESIZE 800 HEADING OFF FEEDBACK OFF ARRAY 5000 PAGESIZE 0
   SPOOL customer_0.out 
   SET MARKUP HTML PREFORMAT ON
   SET COLSEP ','
   
   SELECT id, owner, object_name, created 
   FROM   customer_0; 
   
   SPOOL OFF
   ```

1. Create a control file to describe the data. You might need to write a script to perform this step. 

   ```
   cat << EOF > sqlldr_1.ctl 
   load data
   infile customer_0.out
   into table customer_1
   APPEND
   fields terminated by "," optionally enclosed by '"'
   (
     id           POSITION(01:10)    INTEGER EXTERNAL,
     owner        POSITION(12:41)    CHAR,
     object_name  POSITION(43:72)    CHAR,
     created      POSITION(74:92)    date "YYYY/MM/DD HH24:MI:SS"
   )
   EOF
   ```

   If needed, copy the files generated by the preceding code to a staging area, such as an Amazon EC2 instance.

1. Import the data using SQL\*Loader with the appropriate user name and password for the target database. 

   ```
   sqlldr cust_dba@targetdb CONTROL=sqlldr_1.ctl BINDSIZE=10485760 READSIZE=10485760 ROWS=1000 
   ```

# Migrating with Oracle materialized views
<a name="Oracle.Procedural.Importing.Materialized"></a>

To migrate large datasets efficiently, you can use Oracle materialized view replication. With replication, you can keep the target tables synchronized with the source tables. Thus, you can switch over to Amazon RDS later, if needed. 

Before you can migrate using materialized views, make sure that you meet the following requirements:
+ Configure access from the target database to the source database. In the following example, access rules were enabled on the source database to allow the RDS for Oracle target database to connect to the source over SQL\*Net. 
+ Create a database link from the RDS for Oracle DB instance to the source database.

**To migrate data using materialized views**

1. Create a user account on both the source and RDS for Oracle target instances that can authenticate with the same password. The following example creates a user named `dblink_user`.

   ```
   CREATE USER dblink_user IDENTIFIED BY my-password
     DEFAULT TABLESPACE users
     TEMPORARY TABLESPACE temp;
      
   GRANT CREATE SESSION TO dblink_user;
   
   GRANT SELECT ANY TABLE TO dblink_user;
   
   GRANT SELECT ANY DICTIONARY TO dblink_user;
   ```
**Note**  
As a security best practice, specify a password other than the one shown here.

1. Create a database link from the RDS for Oracle target instance to the source instance using your newly created user.

   ```
   CREATE DATABASE LINK remote_site
     CONNECT TO dblink_user IDENTIFIED BY my-password
     USING '(description=(address=(protocol=tcp) (host=my-host) 
       (port=my-listener-port)) (connect_data=(sid=my-source-db-sid)))';
   ```
**Note**  
As a security best practice, specify a password other than the one shown here.

1. Test the link:

   ```
   SELECT * FROM V$INSTANCE@remote_site;
   ```

1. Create a sample table with primary key and materialized view log on the source instance.

   ```
   CREATE TABLE customer_0 TABLESPACE users 
     AS (SELECT ROWNUM id, o.* 
         FROM   ALL_OBJECTS o, ALL_OBJECTS x
         WHERE  ROWNUM <= 1000000);
   
   ALTER TABLE customer_0 ADD CONSTRAINT pk_customer_0 PRIMARY KEY (id) USING INDEX;
   
   CREATE MATERIALIZED VIEW LOG ON customer_0;
   ```

1. On the target RDS for Oracle DB instance, create a materialized view. 

   ```
   CREATE MATERIALIZED VIEW customer_0 
     BUILD IMMEDIATE REFRESH FAST 
     AS (SELECT * 
         FROM   cust_dba.customer_0@remote_site);
   ```

1. On the target RDS for Oracle DB instance, refresh the materialized view.

   ```
   EXEC DBMS_MVIEW.REFRESH('CUSTOMER_0', 'f');
   ```
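
   Until you are ready to switch over, you can repeat the refresh on a schedule to keep the target synchronized with the source. The following sketch uses `DBMS_SCHEDULER` with a hypothetical job name and a five-minute interval.

   ```
   BEGIN
     DBMS_SCHEDULER.CREATE_JOB(
       job_name        => 'REFRESH_CUSTOMER_0',  -- hypothetical job name
       job_type        => 'PLSQL_BLOCK',
       job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''CUSTOMER_0'', ''f''); END;',
       start_date      => SYSTIMESTAMP,
       repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
       enabled         => TRUE);
   END;
   /
   ```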

1. Drop the materialized view and include the `PRESERVE TABLE` clause to retain the materialized view container table and its contents.

   ```
   DROP MATERIALIZED VIEW customer_0 PRESERVE TABLE;
   ```

   The retained table has the same name as the dropped materialized view.
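
   To confirm that the retained table matches the source, you can compare row counts over the database link. This sketch assumes the link and table names used in the preceding steps.

   ```
   SELECT (SELECT COUNT(*) FROM customer_0)                      AS target_rows,
          (SELECT COUNT(*) FROM cust_dba.customer_0@remote_site) AS source_rows
   FROM   DUAL;
   ```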