

# Migrating data to Amazon RDS for Db2
<a name="db2-migrating-data-to-rds"></a>

You can migrate self-managed Db2 databases to Amazon RDS for Db2 by using either AWS or native Db2 tools.

For information about migrating from your Db2 database to Amazon RDS for Db2 using AWS services, see [Using AWS services to migrate data from Db2 to Amazon RDS for Db2](db2-migration-approaches.md).

For information about migrating from your Db2 database to Amazon RDS for Db2 using native Db2 tools, see [Using native Db2 tools to migrate data from Db2 to Amazon RDS for Db2](db2-native-db2-tools.md).

# Using AWS services to migrate data from Db2 to Amazon RDS for Db2
<a name="db2-migration-approaches"></a>

In Amazon RDS, there are several ways you can migrate data from a Db2 database to Amazon RDS for Db2. You can perform a one-time migration of your Db2 database from Linux, AIX, or Windows environments to Amazon RDS for Db2. To minimize downtime, you can perform a near-zero downtime migration. You can migrate your data by saving it to Amazon S3 and loading it one table at a time into your RDS for Db2 database. You can also perform a synchronous migration through replication or use AWS Database Migration Service.

For one-time migrations for Linux-based Db2 databases, Amazon RDS only supports offline and online backups. Amazon RDS doesn't support incremental and Delta backups. For near-zero downtime migrations for Linux-based Db2 databases, Amazon RDS requires online backups. We recommend that you use online backups for near-zero downtime migrations and offline backups for migrations that can handle downtime.

**Topics**
+ [Migrating from Linux to Linux for Amazon RDS for Db2](db2-one-time-migration-linux.md)
+ [Migrating from Linux to Linux with near-zero downtime for Amazon RDS for Db2](db2-near-zero-downtime-migration.md)
+ [Migrating synchronously from Linux to Linux for Amazon RDS for Db2](db2-synchronous-migration-linux.md)
+ [Migrating from AIX or Windows to Linux for Amazon RDS for Db2](db2-one-time-migration-aix-windows-linux.md)
+ [Migrating Db2 data through Amazon S3 to Amazon RDS for Db2](db2-migration-load-from-s3.md)
+ [Migrating to Amazon RDS for Db2 with AWS Database Migration Service (AWS DMS)](db2-migration-amazon-dms.md)

# Migrating from Linux to Linux for Amazon RDS for Db2
<a name="db2-one-time-migration-linux"></a>

With this migration approach, you back up your self-managed Db2 database to an Amazon S3 bucket. Then, you use Amazon RDS stored procedures to restore your Db2 database to an Amazon RDS for Db2 DB instance. For more information about using Amazon S3, see [Integrating an Amazon RDS for Db2 DB instance with Amazon S3](db2-s3-integration.md).

Backup and restore for RDS for Db2 follows the IBM Db2 supported upgrade paths and restrictions. For more information, see [Supported upgrade paths for Db2 servers](https://www.ibm.com/docs/en/db2/11.5?topic=servers-supported-upgrade-paths-db2) and [Upgrade restrictions for Db2 servers](https://www.ibm.com/docs/en/db2/11.5?topic=servers-upgrade-restrictions) in the IBM Db2 documentation.

**Topics**
+ [Limitations and recommendations for using native restore](#db2-linux-migration-limitations)
+ [Backing up your database to Amazon S3](#db2-linux-backing-up-database)
+ [Creating a default automatic storage group](#db2-linux-creating-auto-storage-group)
+ [Restoring your Db2 database](#db2-linux-restoring-db2-database)

## Limitations and recommendations for using native restore
<a name="db2-linux-migration-limitations"></a>

The following limitations and recommendations apply to using native restore: 
+ Amazon RDS only supports migrating on-premises versions of Db2 that match supported RDS for Db2 versions. For more information about the supported versions, see [Upgrade management for Amazon RDS Db2 instances](Db2.Concepts.VersionMgmt.Supported.md).
+ Amazon RDS only supports offline and online backups for native restore. Amazon RDS doesn't support incremental or Delta backups.
+ You can't restore from an Amazon S3 bucket in an AWS Region that is different from the Region where your RDS for Db2 DB instance is located. 
+ Amazon S3 limits the size of files that are uploaded to an Amazon S3 bucket to 5 TB. If your database backup file exceeds 5 TB, then split the backup file into smaller files.
+ Amazon RDS doesn't support non-fenced external routines, incremental restores, or Delta restores.
+ You can't restore from an encrypted source database, but you can restore to an encrypted Amazon RDS DB instance.
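
If your backup image exceeds the 5 TB per-file limit noted above, a quick ceiling-division check gives the minimum number of files to split it into. The following sketch uses a hypothetical 12 TB backup for illustration:

```shell
# Hypothetical backup size versus the Amazon S3 per-file limit.
backup_tb=12
limit_tb=5

# Ceiling division: smallest number of files that keeps each one at or under 5 TB.
parts=$(( (backup_tb + limit_tb - 1) / limit_tb ))
echo "split the backup into at least ${parts} files"
```

One way to produce multiple smaller image files is to back up to several target paths (for example, `db2 backup database source_database to /backup/p1, /backup/p2, /backup/p3`); Db2 distributes the backup image across the paths.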

The restoration process differs depending on your configuration.

If you set `USE_STREAMING_RESTORE` to `TRUE`, Amazon RDS directly streams your backup from your S3 bucket during restoration. Streaming significantly reduces storage requirements. You only need to provision storage space equal to or greater than either the size of the backup or the size of the original database, whichever is larger.

If you set `USE_STREAMING_RESTORE` to `FALSE`, Amazon RDS first downloads the backup to your RDS for Db2 DB instance and then extracts the backup. Extraction requires additional storage space. You must provision storage space equal to or greater than the sum of the size of the backup plus the size of the original database.

The maximum size of the restored database equals the maximum supported database size minus any space required for temporary storage during the restoration process.
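
As a worked example of these two sizing rules (the 2 TB backup and 5 TB database figures are hypothetical):

```shell
# Hypothetical sizes in GB.
backup_gb=2000      # size of the backup image in S3
database_gb=5000    # size of the original database

# USE_STREAMING_RESTORE = TRUE: provision at least max(backup, database).
if [ "$backup_gb" -gt "$database_gb" ]; then
  streaming_gb=$backup_gb
else
  streaming_gb=$database_gb
fi

# USE_STREAMING_RESTORE = FALSE: provision at least backup + database.
download_gb=$((backup_gb + database_gb))

echo "streaming restore: ${streaming_gb} GB; download-and-extract: ${download_gb} GB"
```

In this example, streaming restore needs at least 5,000 GB provisioned, while download-and-extract needs at least 7,000 GB.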

## Backing up your database to Amazon S3
<a name="db2-linux-backing-up-database"></a>

To back up your database on Amazon S3, you need the following AWS components:
+ *An Amazon S3 bucket to store your backup files*: Upload any backup files that you want to migrate to Amazon RDS. We recommend that you use offline backups for migrations that can handle downtime. If you already have an S3 bucket, you can use that bucket. If you don't have an S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.
**Note**  
If your database is large and would take a long time to transfer to an S3 bucket, you can order an AWS Snow Family device and ask AWS to perform the backup. After you copy your files to the device and return it to the Snow Family team, the team transfers your backed-up images to your S3 bucket. For more information, see the [AWS Snow Family documentation](https://docs.aws.amazon.com/snowball/).
+ *An IAM role to access the S3 bucket*: If you already have an IAM role, you can use that role. If you don't have a role, see [Step 2: Create an IAM role and attach your IAM policy](db2-s3-integration.md#db2-creating-iam-role). 
+ *An IAM policy with trust relationships and permissions attached to your IAM role*: For more information, see [Step 1: Create an IAM policy](db2-s3-integration.md#db2-creating-iam-policy).
+ *The IAM role added to your RDS for Db2 DB instance*: For more information, see [Step 3: Add your IAM role to your RDS for Db2 DB instance](db2-s3-integration.md#db2-adding-iam-role).

## Creating a default automatic storage group
<a name="db2-linux-creating-auto-storage-group"></a>

Your source database must have a default automatic storage group. If your database doesn't have a default automatic storage group, you must create one.

**To create a default automatic storage group**

1. Connect to your source database. In the following example, replace *source\_database* with the name of your database.

   ```
   db2 connect to source_database 
   ```

1. Create an automatic storage group and set it as the default. In the following example, replace *storage\_path* with the absolute path to where the storage group is located.

   ```
   db2 "create stogroup IBMSTOGROUP ON storage_path set as default"
   ```

1. Terminate backend processes.

   ```
   db2 terminate
   ```

1. Deactivate the database and stop all database services. In the following example, replace *source\_database* with the name of the database that you created the storage group for.

   ```
   db2 deactivate db source_database
   ```

1. Back up the database. In the following example, replace *source\_database* with the name of the database that you created the storage group for. Replace *file\_system\_path* with the absolute path to where you want to back up the database.

   ```
   db2 backup database source_database to file_system_path 
   ```

## Restoring your Db2 database
<a name="db2-linux-restoring-db2-database"></a>

After you back up your database on Amazon S3 and create an automatic storage group, you are ready to restore your Db2 database to your RDS for Db2 DB instance.

**To restore your Db2 database from your Amazon S3 bucket to your RDS for Db2 DB instance**

1. Connect to your RDS for Db2 DB instance. For more information, see [Connecting to your Db2 DB instance](USER_ConnectToDb2DBInstance.md).

1. (Optional) To ensure that your database is configured with the optimal settings, check the values for the following parameters by calling [rdsadmin.show\_configuration](db2-sp-managing-databases.md#db2-sp-show-configuration):
   + `RESTORE_DATABASE_NUM_BUFFERS`
   + `RESTORE_DATABASE_PARALLELISM`
   + `RESTORE_DATABASE_NUM_MULTI_PATHS`
   + `USE_STREAMING_RESTORE`

   Use [rdsadmin.set\_configuration](db2-sp-managing-databases.md#db2-sp-set-configuration) to modify these values as needed. Properly configuring these parameters can significantly improve performance when restoring databases with large volumes of data. For most migration scenarios, we recommend setting `USE_STREAMING_RESTORE` to `TRUE` because it reduces storage requirements and can improve restoration speed.
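
   For example, a sketch of checking and changing these settings while connected to the `rdsadmin` database. The parameter-name argument to `rdsadmin.show_configuration` and the string-valued second argument to `rdsadmin.set_configuration` follow the pattern shown in the linked references; confirm the exact argument lists there.

   ```shell
   # Check one restore-related setting.
   db2 "call rdsadmin.show_configuration('RESTORE_DATABASE_PARALLELISM')"

   # Turn on streaming restore; values are passed as strings.
   db2 "call rdsadmin.set_configuration('USE_STREAMING_RESTORE', 'TRUE')"
   ```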

1. Restore your database by calling `rdsadmin.restore_database`. For more information, see [rdsadmin.restore\_database](db2-sp-managing-databases.md#db2-sp-restore-database).

# Migrating from Linux to Linux with near-zero downtime for Amazon RDS for Db2
<a name="db2-near-zero-downtime-migration"></a>

With this migration approach, you migrate a Linux-based Db2 database from one self-managed Db2 database (source) to Amazon RDS for Db2. This approach results in minimal to no outage or downtime for the application or users. This approach backs up your database and restores it with log replay, which helps prevent disruptions to ongoing operations and provides high availability of your database. 

To achieve near-zero downtime migration, RDS for Db2 implements restore with log replay. This approach takes a backup of your self-managed Linux-based Db2 database and restores it on the RDS for Db2 server. With Amazon RDS stored procedures, you then apply subsequent transaction logs to bring the database up to date. 

**Topics**
+ [Limitations and recommendations for near-zero downtime migration](#db2-near-zero-downtime-migration-limitations)
+ [Backing up your database to Amazon S3](#db2-near-zero-downtime-backing-up-database)
+ [Creating a default automatic storage group](#db2-near-zero-migration-creating-auto-storage-group)
+ [Migrating your Db2 database](#db2-migrating-db2-database)

## Limitations and recommendations for near-zero downtime migration
<a name="db2-near-zero-downtime-migration-limitations"></a>

The following limitations and recommendations apply to using near-zero downtime migration:
+ Amazon RDS requires an online backup for near-zero downtime migration. This is because Amazon RDS keeps your database in a rollforward pending state as you upload your archived transaction logs. For more information, see [Migrating your Db2 database](#db2-migrating-db2-database). 
+ You can't restore from an Amazon S3 bucket in an AWS Region that is different from the Region where your RDS for Db2 DB instance is located. 
+ Amazon S3 limits the size of files uploaded to an S3 bucket to 5 TB. If your database backup file exceeds 5 TB, then split the backup file into smaller files.
+ Amazon RDS doesn't support non-fenced external routines, incremental restores, or Delta restores.
+ You can't restore from an encrypted source database, but you can restore to an encrypted Amazon RDS DB instance.

The restoration process differs depending on your configuration.

If you set `USE_STREAMING_RESTORE` to `TRUE`, Amazon RDS directly streams your backup from your S3 bucket during restoration. Streaming significantly reduces storage requirements. You only need to provision storage space equal to or greater than either the size of the backup or the size of the original database, whichever is larger.

If you set `USE_STREAMING_RESTORE` to `FALSE`, Amazon RDS first downloads the backup to your RDS for Db2 DB instance and then extracts the backup. Extraction requires additional storage space. You must provision storage space equal to or greater than the sum of the size of the backup plus the size of the original database.

The maximum size of the restored database equals the maximum supported database size minus any space required for temporary storage during the restoration process. 

## Backing up your database to Amazon S3
<a name="db2-near-zero-downtime-backing-up-database"></a>

To back up your database on Amazon S3, you need the following AWS components:
+ *An Amazon S3 bucket to store your backup files*: Upload any backup files that you want to migrate to Amazon RDS. Amazon RDS requires an online backup for near-zero downtime migration. If you already have an S3 bucket, you can use that bucket. If you don't have an S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.
**Note**  
If your database is large and would take a long time to transfer to an S3 bucket, you can order an AWS Snow Family device and ask AWS to perform the backup. After you copy your files to the device and return it to the Snow Family team, the team transfers your backed-up images to your S3 bucket. For more information, see the [AWS Snow Family documentation](https://docs.aws.amazon.com/snowball/).
+ *An IAM role to access the S3 bucket*: If you already have an AWS Identity and Access Management (IAM) role, you can use that role. If you don't have a role, see [Step 2: Create an IAM role and attach your IAM policy](db2-s3-integration.md#db2-creating-iam-role). 
+ *An IAM policy with trust relationships and permissions attached to your IAM role*: For more information, see [Step 1: Create an IAM policy](db2-s3-integration.md#db2-creating-iam-policy).
+ *The IAM role added to your RDS for Db2 DB instance*: For more information, see [Step 3: Add your IAM role to your RDS for Db2 DB instance](db2-s3-integration.md#db2-adding-iam-role).

## Creating a default automatic storage group
<a name="db2-near-zero-migration-creating-auto-storage-group"></a>

Your source database must have a default automatic storage group. If your database doesn't have a default automatic storage group, you must create one.

**To create a default automatic storage group**

1. Connect to your source database. In the following example, replace *source\_database* with the name of your database.

   ```
   db2 connect to source_database 
   ```

1. Create an automatic storage group and set it as the default. In the following example, replace *storage\_path* with the absolute path to where the storage group is located.

   ```
   db2 "create stogroup IBMSTOGROUP ON storage_path set as default"
   ```

1. Terminate backend processes.

   ```
   db2 terminate
   ```

## Migrating your Db2 database
<a name="db2-migrating-db2-database"></a>

After you set up for near-zero downtime migration, you are ready to migrate your Db2 database from your Amazon S3 bucket to your RDS for Db2 DB instance.

**To perform a near-zero downtime migration of backup files from your Amazon S3 bucket to your RDS for Db2 DB instance**

1. Perform an online backup of your source database. For more information, see [BACKUP DATABASE command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-backup-database) in the IBM Db2 documentation.

1. Copy the backup of your database to an Amazon S3 bucket. For information about using Amazon S3, see the [Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html).

1. Connect to the `rdsadmin` server with the *master\_username* and *master\_password* for your RDS for Db2 DB instance.

   ```
   db2 connect to rdsadmin user master_username using master_password
   ```

1. (Optional) To ensure that your database is configured with the optimal settings, check the values for the following parameters by calling [rdsadmin.show\_configuration](db2-sp-managing-databases.md#db2-sp-show-configuration):
   + `RESTORE_DATABASE_NUM_BUFFERS`
   + `RESTORE_DATABASE_PARALLELISM`
   + `RESTORE_DATABASE_NUM_MULTI_PATHS`
   + `USE_STREAMING_RESTORE`

   Use [rdsadmin.set\_configuration](db2-sp-managing-databases.md#db2-sp-set-configuration) to modify these values as needed. Properly configuring these parameters can significantly improve performance when restoring databases with large volumes of data. For most migration scenarios, we recommend setting `USE_STREAMING_RESTORE` to `TRUE` because it reduces storage requirements and can improve restoration speed.

1. Restore the backup on the RDS for Db2 server by calling `rdsadmin.restore_database`. Set `backup_type` to `ONLINE`. For more information, see [rdsadmin.restore\_database](db2-sp-managing-databases.md#db2-sp-restore-database).

1. Copy your archive logs from your source server to your S3 bucket. For more information, see [Archive logging](https://www.ibm.com/docs/en/db2/11.5?topic=logging-archive) in the IBM Db2 documentation.

1. Apply archive logs as many times as needed by calling `rdsadmin.rollforward_database`. Set `complete_rollforward` to `FALSE` to keep the database in a `ROLL-FORWARD PENDING` state. For more information, see [rdsadmin.rollforward\_database](db2-sp-managing-databases.md#db2-sp-rollforward-database).

1. After you apply all of the archive logs, bring the database online by calling `rdsadmin.complete_rollforward`. For more information, see [rdsadmin.complete\_rollforward](db2-sp-managing-databases.md#db2-sp-complete-rollforward).

1. Switch application connections to the RDS for Db2 server by either updating your application endpoints for the database or by updating the DNS endpoints to redirect traffic to the RDS for Db2 server. You can also use the Db2 automatic client reroute feature on your self-managed Db2 database with the RDS for Db2 database endpoint. For more information, see [Automatic client reroute description and setup](https://www.ibm.com/docs/en/db2/11.5?topic=reroute-configuring-automatic-client) in the IBM Db2 documentation.

1. (Optional) Shut down your source database.
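
The restore-and-roll-forward portion of the preceding steps can be sketched as one session. The `...` placeholders stand in for the stored procedure arguments, which this page doesn't define; see the linked `rdsadmin` references for the exact parameter lists.

```shell
# Connect to the rdsadmin database on the RDS for Db2 DB instance.
db2 connect to rdsadmin user master_username using master_password

# Restore the online backup from the S3 bucket (backup_type set to ONLINE).
db2 "call rdsadmin.restore_database(...)"

# As archive logs arrive in the S3 bucket, apply them while keeping the
# database in a ROLL-FORWARD PENDING state (complete_rollforward set to FALSE).
db2 "call rdsadmin.rollforward_database(...)"

# At cutover, after the final logs are applied, bring the database online.
db2 "call rdsadmin.complete_rollforward(...)"
```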

# Migrating synchronously from Linux to Linux for Amazon RDS for Db2
<a name="db2-synchronous-migration-linux"></a>

With this migration approach, you set up replication between your self-managed Db2 database and your Amazon RDS for Db2 DB instance. Changes made to the self-managed database replicate to the RDS for Db2 DB instance in near real time. This approach can provide continuous availability and minimize downtime during the migration process.

# Migrating from AIX or Windows to Linux for Amazon RDS for Db2
<a name="db2-one-time-migration-aix-windows-linux"></a>

With this migration approach, you use native Db2 tools to back up your self-managed Db2 database to an Amazon S3 bucket. Native Db2 tools include the `export` utility, the `db2move` system command, and the `db2look` system command. Your Db2 database can be either self-managed or hosted on Amazon Elastic Compute Cloud (Amazon EC2). You move data from your AIX or Windows system to your Amazon S3 bucket. Then, you use a Db2 client to load data directly from the S3 bucket to your Amazon RDS for Db2 database. Downtime depends on the size of your database. For more information about using Amazon S3, see [Integrating an Amazon RDS for Db2 DB instance with Amazon S3](db2-s3-integration.md).

**To migrate your Db2 database to RDS for Db2**

1. Prepare to back up your database. Configure enough storage to hold the backup on your self-managed Db2 system.

1. Back up your database.

   1. Run the [db2look system command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-db2look-db2-statistics-ddl-extraction-tool) to extract the data definition language (DDL) file for all objects.

   1. Run either the [Db2 export utility](https://www.ibm.com/docs/en/db2/11.5?topic=utility-exporting-data), the [db2move system command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-db2move-database-movement-tool), or a [CREATE EXTERNAL TABLE statement](https://www.ibm.com/docs/en/db2/11.5?topic=statements-create-table-external) to unload the Db2 table data to storage on your Db2 system.

1. Move your backup to an Amazon S3 bucket. For more information, see [Integrating an Amazon RDS for Db2 DB instance with Amazon S3](db2-s3-integration.md). 
**Note**  
If your database is large and would take a long time to transfer to an S3 bucket, you can order an AWS Snow Family device and ask AWS to perform the backup. After you copy your files to the device and return it to the Snow Family team, the team transfers your backed-up images to your S3 bucket. For more information, see the [AWS Snow Family documentation](https://docs.aws.amazon.com/snowball/).

1. Use a Db2 client to load data directly from your S3 bucket to your RDS for Db2 database. For more information, see [Migrating with Amazon S3](db2-migration-load-from-s3.md).

# Migrating Db2 data through Amazon S3 to Amazon RDS for Db2
<a name="db2-migration-load-from-s3"></a>

With this migration approach, you first save data from a single table into a data file that you place in an Amazon S3 bucket. Then, you use the [LOAD command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-load) to load the data from that data file into a table in your Amazon RDS for Db2 database. For more information about using Amazon S3, see [Integrating an Amazon RDS for Db2 DB instance with Amazon S3](db2-s3-integration.md).

**Topics**
+ [Saving your data to Amazon S3](#db2-migration-load-from-s3-saving-data-file)
+ [Loading your data into RDS for Db2 tables](#db2-migration-load-from-s3-into-db-table)

## Saving your data to Amazon S3
<a name="db2-migration-load-from-s3-saving-data-file"></a>

To save data from a single table to Amazon S3, use a database utility to extract the data from your database management system (DBMS) into a CSV file. Then, upload the data file to Amazon S3.
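
For example, with the native Db2 `export` utility and the AWS CLI (the table, file, and bucket names here are illustrative):

```shell
# Extract one table to a delimited (CSV-style) data file on the client machine.
db2 connect to source_database
db2 "export to my_table.csv of del select * from my_table"
db2 terminate

# Upload the data file to the S3 bucket that your RDS for Db2 DB instance can access.
aws s3 cp my_table.csv s3://amzn-s3-demo-bucket/my_table.csv
```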

For storing data files on Amazon S3, you need the following AWS components:
+ *An Amazon S3 bucket to store your backup files*: If you already have an S3 bucket, you can use that bucket. If you don't have an S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*.
+ *An IAM role to access the S3 bucket*: If you already have an IAM role, you can use that role. If you don't have a role, see [Step 2: Create an IAM role and attach your IAM policy](db2-s3-integration.md#db2-creating-iam-role). 
+ *An IAM policy with trust relationships and permissions attached to your IAM role*: For more information, see [Step 1: Create an IAM policy](db2-s3-integration.md#db2-creating-iam-policy).
+ *The IAM role added to your RDS for Db2 DB instance*: For more information, see [Step 3: Add your IAM role to your RDS for Db2 DB instance](db2-s3-integration.md#db2-adding-iam-role).

## Loading your data into RDS for Db2 tables
<a name="db2-migration-load-from-s3-into-db-table"></a>

After you save your data files to Amazon S3, you can load the data from these files into individual tables on your RDS for Db2 DB instance.

**To load your Db2 table data into your RDS for Db2 DB database table**

1. Connect to the `rdsadmin` database using the master username and master password for your RDS for Db2 DB instance. In the following example, replace *master\_username* and *master\_password* with your own information.

   ```
   db2 connect to rdsadmin user master_username using master_password
   ```

1. Catalog a storage access alias that points to the Amazon S3 bucket where your saved files are stored. Take note of the name of this alias for use in the next step. You only need to perform this step once if you plan to load multiple tables from data files stored in the same Amazon S3 bucket.

   The following example catalogs an alias named *my\_s3\_alias* that grants a user named *jorge\_souza* access to a bucket named *amzn-s3-demo-bucket*.

   ```
   db2 "call rdsadmin.catalog_storage_access(?, 'my_s3_alias', 'amzn-s3-demo-bucket', 'USER', 'jorge_souza')"
   ```

   For more information about this stored procedure, see [rdsadmin.catalog\_storage\_access](db2-sp-managing-storage-access.md#db2-sp-catalog-storage-access).

1. Run the `LOAD` command using the storage access alias that points to your Amazon S3 bucket. 
**Note**  
If the `LOAD` command returns an error, then you might need to create a VPC gateway endpoint for Amazon S3 and add outbound rules to the security group. For more information, see [File I/O error](db2-troubleshooting.md#db2-file-input-output-error).

   The following example loads data from a data file named *my\_s3\_datafile.csv* into a table named *my\_db2\_table*. The example assumes that the data file is in the Amazon S3 bucket that the alias named *my\_s3\_alias* points to.

   ```
   db2 "load from db2remote://my_s3_alias//my_s3_datafile.csv of DEL insert into my_db2_table"
   ```

   The following example loads LOBs from a data file named *my\_table1\_export.ixf* into a table named *my\_db2\_table*. The example assumes that the data file is in the Amazon S3 bucket that the alias named *my\_s3\_alias* points to.

   ```
   db2 "call sysproc.admin_cmd('load from "db2remote://my_s3_alias//my_table1_export.ixf" of ixf
           lobs from "db2remote://my_s3_alias//" xml from "db2remote://my_s3_alias//"
           modified by lobsinfile implicitlyhiddeninclude identityoverride generatedoverride periodoverride transactionidoverride
           messages on server
           replace into "my_schema"."my_db2_table"
                                  nonrecoverable
           indexing mode incremental allow no access')"
   ```

   Repeat this step for each data file in the Amazon S3 bucket that you want to load into a table in your RDS for Db2 DB instance.

   For more information about the `LOAD` command, see [LOAD command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-load).

# Migrating to Amazon RDS for Db2 with AWS Database Migration Service (AWS DMS)
<a name="db2-migration-amazon-dms"></a>

You can use AWS DMS both for one-time migrations and for ongoing synchronization from Db2 on Linux, UNIX (such as AIX), and Windows to Amazon RDS for Db2. For more information, see [What is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html).

# Using native Db2 tools to migrate data from Db2 to Amazon RDS for Db2
<a name="db2-native-db2-tools"></a>

You can use several native Db2 tools, utilities, and commands to move data directly from a Db2 database to an Amazon RDS for Db2 database. To use these native Db2 tools, you must be able to connect your client machine to an RDS for Db2 DB instance. For more information, see [Connecting a client machine to an Amazon RDS for Db2 DB instance](db2-connecting-client-rds.md).

**Note**  
Another way to move your data is to first save it to an Amazon S3 bucket, and then use the `LOAD` command to transfer that data into a table in your RDS for Db2 database. This method provides the best performance when migrating a large amount of data because of good network connectivity between RDS for Db2 and S3. For more information, see [Migrating Db2 data through Amazon S3 to Amazon RDS for Db2](db2-migration-load-from-s3.md).


| Tool name | Use case | Limitations | 
| --- | --- | --- | 
|  [db2look](db2-native-db2-tools-db2look.md)  |  Copying metadata from a self-managed Db2 database to an RDS for Db2 database.  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/db2-native-db2-tools.html)  | 
|  [IMPORT command](db2-native-db2-tools-import.md)  |  Migrating small tables and tables with large objects (LOBs) from a client machine to the RDS for Db2 DB instance.  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/db2-native-db2-tools.html)  | 
|  [INGEST utility](db2-native-db2-tools-ingest.md)   |  Continually streaming data from files and pipes *without* large objects (LOBs) on the client machine to the RDS for Db2 DB instance. Supports `INSERT` and `MERGE` operations.   |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/db2-native-db2-tools.html)  | 
|  [INSERT command](db2-native-db2-tools-insert.md)  |  Copying data in small tables from a self-managed Db2 database to an RDS for Db2 database.  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/db2-native-db2-tools.html)  | 
|  [LOAD CLIENT command](db2-native-db2-tools-load.md)  |  Migrating small tables *without* large objects (LOBs) from a client machine to the RDS for Db2 DB instance.  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/db2-native-db2-tools.html)  | 

# Connecting a client machine to an Amazon RDS for Db2 DB instance
<a name="db2-connecting-client-rds"></a>

To use any of the native Db2 tools to move data from a Db2 database to an Amazon RDS for Db2 database, you must first connect your client machine to an RDS for Db2 DB instance.

The client machine can be any of the following:
+ An Amazon Elastic Compute Cloud (Amazon EC2) instance on Linux, Windows, or macOS in the same virtual private cloud (VPC) as your RDS for Db2 DB instance, or an AWS Cloud9 or AWS CloudShell environment.
+ A self-managed Db2 instance in an Amazon EC2 instance. The instances should be in the same VPC.
+ A self-managed Db2 instance in an Amazon EC2 instance. The instances can be in different VPCs if you enabled VPC peering. For more information, see [Create a VPC peering connection](https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html) in the *Amazon Virtual Private Cloud VPC Peering Guide*.
+ A local machine running Linux, Windows, or macOS in a self-managed environment. You must either have public connectivity to RDS for Db2 or enable VPN connectivity between self-managed Db2 instances and AWS.

To connect your client machine to your RDS for Db2 DB instance, log in to your client machine with IBM Db2 Data Management Console. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md) and [IBM Db2 Data Management Console](db2-connecting-with-ibm-data-management-console.md). 

You can use AWS Database Migration Service (AWS DMS) to run queries against the database, generate SQL execution plans, and monitor the database. For more information, see [What is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) in the *AWS Database Migration Service User Guide*.

After you successfully connect your client machine to your RDS for Db2 DB instance, you are ready to use any native Db2 tool to copy data. For more information, see [Using native Db2 tools to migrate data from Db2 to Amazon RDS for Db2](db2-native-db2-tools.md).

# Copying database metadata from Db2 to Amazon RDS for Db2 with db2look
<a name="db2-native-db2-tools-db2look"></a>

`db2look` is a native Db2 tool that generates the data definition language (DDL) statements for database objects, along with authorizations, configurations, workload management (WLM) settings, and database layouts. You can use `db2look` to copy database metadata from a self-managed Db2 database to an Amazon RDS for Db2 database. For more information, see [Mimicking databases using db2look](https://www.ibm.com/docs/en/db2/11.5?topic=tools-db2look) in the IBM Db2 documentation.

**To copy the database metadata**

1. Run the `db2look` tool on your self-managed Db2 system to extract the DDL file. In the following example, replace *database\_name* with the name of your Db2 database.

   ```
   db2look -d database_name -e -l -a -f -wlm -cor -createdb -printdbcfg -o db2look.sql
   ```

1. If your client machine has access to both the source (self-managed) Db2 database and the RDS for Db2 DB instance, you can create the `db2look.sql` file on the client machine by attaching directly to the remote instance. To do so, first catalog the remote self-managed Db2 instance.

   1. Catalog the node. In the following example, replace *dns\_ip\_address* and *port* with the DNS name or the IP address and the port number of the self-managed Db2 database.

      ```
      db2 catalog tcpip node srcnode REMOTE dns_ip_address server port
      ```

   1. Catalog the database. In the following example, replace *source\_database\_name* and *source\_database\_alias* with the name of the self-managed Db2 database and the alias that you want to use for this database.

      ```
      db2 catalog database source_database_name as source_database_alias at node srcnode \ 
          authentication server_encrypt
      ```

   1. Attach to the source database. In the following example, replace *source\_database\_alias*, *user\_id*, and *user\_password* with the alias that you created in the previous step and the user ID and password for the self-managed Db2 database.

      ```
      db2look -d source_database_alias -i user_id -w user_password -e -l -a -f -wlm \
          -cor -createdb -printdbcfg -o db2look.sql
      ```

1. If you can't access the remote self-managed Db2 database from the client machine, copy the `db2look.sql` file to the client machine. Then catalog the RDS for Db2 DB instance. 

   1. Catalog the node. In the following example, replace *dns\_ip\_address* and *port* with the DNS name or the IP address and the port number of the RDS for Db2 DB instance.

      ```
      db2 catalog tcpip node remnode REMOTE dns_ip_address server port
      ```

   1. Catalog the database. In the following example, replace *rds\_database\_name* and *rds\_database\_alias* with the name of the RDS for Db2 database and the alias that you want to use for this database.

      ```
      db2 catalog database rds_database_name as rds_database_alias at node remnode \ 
          authentication server_encrypt
      ```

   1. Catalog the admin database that manages RDS for Db2. You can't use this database to store any data.

      ```
      db2 catalog database rdsadmin as rdsadmin at node remnode authentication server_encrypt
      ```

1. Create buffer pools and tablespaces. The master user doesn't have the privileges to create buffer pools or tablespaces directly. Instead, you can use Amazon RDS stored procedures to create them.

   1. Find the names and definitions of the buffer pools and tablespaces in the `db2look.sql` file.

   1. Connect to Amazon RDS using the master username and master password for your RDS for Db2 DB instance. In the following example, replace *master\_username* and *master\_password* with your own information.

      ```
      db2 connect to rdsadmin user master_username using master_password
      ```

   1. Create a buffer pool by calling `rdsadmin.create_bufferpool`. For more information, see [rdsadmin.create\_bufferpool](db2-sp-managing-buffer-pools.md#db2-sp-create-buffer-pool).

      ```
      db2 "call rdsadmin.create_bufferpool(
          'database_name', 
          'buffer_pool_name', 
          buffer_pool_size, 
          'immediate', 
          'automatic', 
          page_size, 
          number_block_pages, 
          block_size)"
      ```

   1. Create a tablespace by calling `rdsadmin.create_tablespace`. For more information, see [rdsadmin.create\_tablespace](db2-sp-managing-tablespaces.md#db2-sp-create-tablespace).

      ```
      db2 "call rdsadmin.create_tablespace(
          'database_name', 
          'tablespace_name',
          'buffer_pool_name', 
          tablespace_initial_size, 
          tablespace_increase_size, 
          'tablespace_type')"
      ```

   1. Repeat step c or step d for each additional buffer pool or tablespace that you want to add.

   1. Terminate your connection.

      ```
      db2 terminate
      ```

1. Create tables and objects.

   1. Connect to your RDS for Db2 database using the master username and master password for your RDS for Db2 DB instance. In the following example, replace *rds\_database\_name*, *master\_username*, and *master\_password* with your own information.

      ```
      db2 connect to rds_database_name user master_username using master_password
      ```

   1. Run the `db2look.sql` file.

      ```
      db2 -tvf db2look.sql
      ```

   1. Terminate your connection.

      ```
      db2 terminate
      ```
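Step 4.a asks you to find the buffer pool and tablespace definitions in `db2look.sql`. You can locate them with standard text tools; the following is a minimal sketch, and the sample statements are illustrative only, not output from a real database.

```shell
# Create a small sample of what db2look output can look like (illustrative only).
cat > sample_db2look.sql <<'EOF'
CREATE BUFFERPOOL "BP16" SIZE 10000 PAGESIZE 16384;
CREATE LARGE TABLESPACE "TS_DATA" PAGESIZE 16384 BUFFERPOOL "BP16";
CREATE TABLE "MYSCHEMA"."ORDERS" (ID INT NOT NULL);
EOF

# List only the buffer pool and tablespace definitions.
grep -E 'CREATE (BUFFERPOOL|.*TABLESPACE)' sample_db2look.sql
```

The matching lines give you the names, page sizes, and sizes to pass to `rdsadmin.create_bufferpool` and `rdsadmin.create_tablespace`.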

# Importing data from a client machine to Amazon RDS for Db2 with the IMPORT command
<a name="db2-native-db2-tools-import"></a>

You can use the `IMPORT` command from a client machine to import your data into the Amazon RDS for Db2 server. 

**Important**  
The `IMPORT` command method is useful for migrating small tables and tables that include large objects (LOBs). The `IMPORT` command is slower than the `LOAD` utility because of the `INSERT` and `DELETE` logging operations. If your network bandwidth between the client machine and RDS for Db2 is limited, we recommend that you use a different migration approach. For more information, see [Using native Db2 tools to migrate data from Db2 to Amazon RDS for Db2](db2-native-db2-tools.md).

**To import data into the RDS for Db2 server**

1. Log in to your client machine with IBM Db2 Data Management Console. For more information, see [Connecting to your Amazon RDS for Db2 DB instance with IBM Db2 Data Management Console](db2-connecting-with-ibm-data-management-console.md).

1. Catalog the self-managed Db2 (source) database on the client machine. If you haven't already cataloged the RDS for Db2 database on this client machine, catalog it as well.

   1. Catalog the node. In the following example, replace *dns\_ip\_address* and *port* with the DNS name or the IP address and the port number of the self-managed Db2 database.

      ```
      db2 catalog tcpip node srcnode REMOTE dns_ip_address server port
      ```

   1. Catalog the database. In the following example, replace *source\_database\_name* and *source\_database\_alias* with the name of the self-managed Db2 database and the alias that you want to use for this database.

      ```
      db2 catalog database source_database_name as source_database_alias at node srcnode \
          authentication server_encrypt
      ```

1. Attach to the source database. In the following example, replace *source\_database\_alias*, *user\_id*, and *user\_password* with the alias that you created in the previous step and the user ID and password for the self-managed Db2 database.

   ```
   db2look -d source_database_alias -i user_id -w user_password -e -l -a -f -wlm \
       -cor -createdb -printdbcfg -o db2look.sql
   ```

1. Generate the data file by using the `EXPORT` command on your self-managed Db2 system. In the following example, replace *directory* with the directory on your client machine where your data file exists. Replace *file\_name* and *table\_name* with the name of the data file and the name of the table.

   ```
   db2 "export to /directory/file_name.txt of del lobs to /directory/lobs/ \
       modified by coldel\| select * from table_name"
   ```

1. Connect to your RDS for Db2 database using the master username and master password for your RDS for Db2 DB instance. In the following example, replace *rds\_database\_alias*, *master\_username*, and *master\_password* with your own information.

   ```
   db2 connect to rds_database_alias user master_username using master_password
   ```

1. Use the `IMPORT` command to import data from a file on the client machine into the remote RDS for Db2 database. For more information, see [IMPORT command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-import) in the IBM Db2 documentation. In the following example, replace *directory* and *file\_name* with the directory on your client machine where your data file exists and the name of the data file. Replace *SCHEMA\_NAME* and *TABLE\_NAME* with the name of your schema and table.

   ```
   db2 "IMPORT from /directory/file_name.txt OF DEL LOBS FROM /directory/lobs/ \
       modified by coldel\| replace into SCHEMA_NAME.TABLE_NAME"
   ```

1. Terminate your connection.

   ```
   db2 terminate
   ```
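Because the `EXPORT` and `IMPORT` commands in this procedure both use the pipe character as the column delimiter (`coldel\|`), a quick consistency check on the exported file can catch delimiter problems before you import. The following is a minimal sketch with a hypothetical three-column file.

```shell
# Create a sample pipe-delimited DEL file (hypothetical data).
cat > /tmp/file_name.txt <<'EOF'
1|"Alice"|100.50
2|"Bob"|200.75
3|"Carol"|300.00
EOF

# Print the distinct field counts per row; a single value means every row
# has the same number of columns.
awk -F'|' '{ print NF }' /tmp/file_name.txt | sort -u
```

If more than one value is printed, some rows contain unescaped delimiters, and the import would misalign columns.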

# Importing data from a client machine to Amazon RDS for Db2 with the LOAD command
<a name="db2-native-db2-tools-load"></a>

You can use the `LOAD` command with the `CLIENT` keyword to load data from a file on a client machine into the RDS for Db2 server. Because you don't have SSH access to the RDS for Db2 server, run the `LOAD CLIENT` command from either your self-managed Db2 server or your Db2 client machine.

**Important**  
The `LOAD CLIENT` command method is useful for migrating small tables. If your network bandwidth between the client and RDS for Db2 is limited, we recommend that you use a different migration approach. For more information, see [Using native Db2 tools to migrate data from Db2 to Amazon RDS for Db2](db2-native-db2-tools.md).  
If your data file includes references to large object file names, then the `LOAD` command won't work because large objects (LOBs) need to reside on the Db2 server. If you try to load LOBs from the client machine to the RDS for Db2 server, you will receive an SQL3025N error. Use the [IMPORT command](db2-native-db2-tools-import.md) instead.

**To load data to the RDS for Db2 server**

1. Log in to your client machine with IBM Db2 Data Management Console. For more information, see [Connecting to your Amazon RDS for Db2 DB instance with IBM Db2 Data Management Console](db2-connecting-with-ibm-data-management-console.md).

1. Catalog the self-managed Db2 (source) database on the client machine. If you haven't already cataloged the RDS for Db2 database on this client machine, catalog it as well.

   1. Catalog the node. In the following example, replace *dns\_ip\_address* and *port* with the DNS name or the IP address and the port number of the self-managed Db2 database.

      ```
      db2 catalog tcpip node srcnode REMOTE dns_ip_address server port
      ```

   1. Catalog the database. In the following example, replace *source\_database\_name* and *source\_database\_alias* with the name of the self-managed Db2 database and the alias that you want to use for this database.

      ```
      db2 catalog database source_database_name as source_database_alias at node srcnode \
          authentication server_encrypt
      ```

1. Attach to the source database. In the following example, replace *source\_database\_alias*, *user\_id*, and *user\_password* with the alias that you created in the previous step and the user ID and password for the self-managed Db2 database.

   ```
   db2look -d source_database_alias -i user_id -w user_password -e -l -a -f -wlm \
       -cor -createdb -printdbcfg -o db2look.sql
   ```

1. Generate the data file by using the `EXPORT` command on your self-managed Db2 system. In the following example, replace *directory* with the directory on your client machine where your data file exists. Replace *file\_name* and *TABLE\_NAME* with the name of the data file and the name of the table. The example exports from the `TPCH` schema; replace it with your own schema name.

   ```
   db2 "export to /directory/file_name.txt of del modified by coldel\| \
       select * from TPCH.TABLE_NAME"
   ```

1. Connect to your RDS for Db2 database using the master username and master password for your RDS for Db2 DB instance. In the following example, replace *rds\_database\_alias*, *master\_username*, and *master\_password* with your own information.

   ```
   db2 connect to rds_database_alias user master_username using master_password
   ```

1. Use the `LOAD` command to load data from a file on the client machine to the remote RDS for Db2 database. For more information, see [LOAD command](https://www.ibm.com/docs/en/db2/11.5?topic=commands-load) in the IBM Db2 documentation. In the following example, replace *directory* with the directory on your client machine where your data file exists. Replace *file\_name* and *TABLE\_NAME* with the name of the data file and the name of the table. The example loads into the `TPCH` schema; replace it with your own schema name.

   ```
   db2 "LOAD CLIENT from /directory/file_name.txt \
       modified by coldel\| replace into TPCH.TABLE_NAME \
       nonrecoverable without prompting"
   ```

1. Terminate your connection.

   ```
   db2 terminate
   ```

# Importing data from Db2 to Amazon RDS for Db2 with the INSERT command
<a name="db2-native-db2-tools-insert"></a>

You can use the `INSERT` command from a self-managed Db2 server to insert your data into an Amazon RDS for Db2 database. With this migration approach, you use a nickname for the remote RDS for Db2 DB instance. Your self-managed Db2 database (source) must be able to connect to the RDS for Db2 database (target).

**Important**  
The `INSERT` command method is useful for migrating small tables. If your network bandwidth between your self-managed Db2 database and RDS for Db2 database is limited, we recommend that you use a different migration approach. For more information, see [Using native Db2 tools to migrate data from Db2 to Amazon RDS for Db2](db2-native-db2-tools.md).

**To copy data from a self-managed Db2 database to an RDS for Db2 database**

1. Catalog the RDS for Db2 DB instance on the self-managed Db2 instance. 

   1. Catalog the node. In the following example, replace *dns\_ip\_address* and *port* with the DNS name or the IP address and the port number of the RDS for Db2 DB instance.

      ```
      db2 catalog tcpip node remnode REMOTE dns_ip_address SERVER port
      ```

   1. Catalog the database. In the following example, replace *rds\_database\_name* with the name of the database on your RDS for Db2 DB instance.

      ```
      db2 catalog database rds_database_name as remdb at node remnode \
          authentication server_encrypt
      ```

1. Enable federation on the self-managed Db2 instance, and then restart the instance (for example, with `db2stop` and `db2start`) so that the change takes effect.

   ```
   db2 update dbm cfg using FEDERATED YES
   ```

1. Create tables on the RDS for Db2 DB instance.

   1. Catalog the node. In the following example, replace *dns\_ip\_address* and *port* with the DNS name or the IP address and the port number of the self-managed Db2 database.

      ```
      db2 catalog tcpip node srcnode REMOTE dns_ip_address server port
      ```

   1. Catalog the database. In the following example, replace *source\_database\_name* and *source\_database\_alias* with the name of the self-managed Db2 database and the alias that you want to use for this database.

      ```
      db2 catalog database source_database_name as source_database_alias at node srcnode \
          authentication server_encrypt
      ```

1. Attach to the source database. In the following example, replace *source\_database\_alias*, *user\_id*, and *user\_password* with the alias that you created in the previous step and the user ID and password for the self-managed Db2 database.

   ```
   db2look -d source_database_alias -i user_id -w user_password -e -l -a -f -wlm \
       -cor -createdb -printdbcfg -o db2look.sql
   ```

1. Set up federation, and create a nickname for the RDS for Db2 database table on the self-managed Db2 instance.

   1. Connect to your local database. In the following example, replace *source\_database\_name* with the name of the database on your self-managed Db2 instance.

      ```
      db2 connect to source_database_name
      ```

   1. Create a wrapper to access Db2 data sources.

      ```
      db2 create wrapper drda
      ```

   1. Define a data source on a federated database. In the following example, replace *admin* and *admin\_password* with your credentials for your self-managed Db2 instance. Replace *rds\_database\_name* with the name of the database on your RDS for Db2 DB instance.

      ```
      db2 "create server rdsdb2 type DB2/LUW version '11.5.9.0' \
          wrapper drda authorization \"admin\" password \"admin_password\" \
          options( dbname 'rds_database_name', node 'remnode')"
      ```

   1. Map the users on the two databases. In the following example, replace *master\_username* and *master\_password* with your credentials for your RDS for Db2 DB instance.

      ```
      db2 "create user mapping for user server rdsdb2 \
          options (REMOTE_AUTHID 'master_username', REMOTE_PASSWORD 'master_password')"
      ```

   1. Verify the connection to the RDS for Db2 server by opening a pass-through session. After you verify the connection, run `db2 set passthru reset` to close the session.

      ```
      db2 set passthru rdsdb2
      ```

   1. Create a nickname for the table in the remote RDS for Db2 database. In the following example, replace *NICKNAME* with a nickname for the table, and replace *SCHEMA\_NAME* and *TABLE\_NAME* with the schema and name of the remote table.

      ```
      db2 create nickname REMOTE.NICKNAME for RDSDB2.SCHEMA_NAME.TABLE_NAME
      ```

1. Insert data into the table in the remote RDS for Db2 database. Use the nickname in an `INSERT` statement that selects from the local table on the self-managed Db2 instance. In the following example, replace *NICKNAME* with the nickname that you created, and replace *SCHEMA\_NAME* and *TABLE\_NAME* with the schema and name of the local table.

   ```
   db2 "INSERT into REMOTE.NICKNAME select * from SCHEMA_NAME.TABLE_NAME"
   ```

# Importing data from Db2 to Amazon RDS for Db2 with the INGEST utility
<a name="db2-native-db2-tools-ingest"></a>

You can use the `INGEST` utility to continually stream data from files and pipes on a client machine to a target Amazon RDS for Db2 DB instance. The `INGEST` utility supports `INSERT` and `MERGE` operations. For more information, see [Ingest utility](https://www.ibm.com/docs/en/db2/11.1?topic=reference-ingest-utility) in the IBM Db2 documentation.

Because the `INGEST` utility supports nicknames, you can use the utility to transfer data from your self-managed Db2 database to an RDS for Db2 database. This approach works as long as network connectivity exists between the two databases. 

**Important**  
The `INGEST` utility doesn't support large objects (LOBs). Use the [`IMPORT` command](db2-native-db2-tools-import.md) instead. 

To use the `RESTARTABLE` feature of the `INGEST` utility, run the following command on the RDS for Db2 database.

```
db2 "call sysproc.sysinstallobjects('INGEST','C',NULL,NULL)"
```
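
For reference, a basic `INGEST` invocation that uses the restart feature might look like the following sketch. The directory, file, job name, schema, and table names are hypothetical, and the options shown are a minimal subset; see the [Ingest utility](https://www.ibm.com/docs/en/db2/11.1?topic=reference-ingest-utility) documentation in the IBM Db2 documentation for the full syntax.

```
db2 "INGEST FROM FILE /directory/file_name.txt \
    FORMAT DELIMITED BY '|' \
    RESTART NEW 'ingestjob001' \
    INSERT INTO SCHEMA_NAME.TABLE_NAME"
```

If the job is interrupted, you can resume it with `RESTART CONTINUE 'ingestjob001'` instead of starting over.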